We’re Training Students To Write Worse To Prove They’re Not Robots, And It’s Pushing Them To Use More AI


About a year and a half ago, I wrote about my kid’s experience with an AI checker tool that was pre-installed on a school-issued Chromebook. The assignment had been to write an essay about Kurt Vonnegut’s Harrison Bergeron—a story about a dystopian society that enforces “equality” by handicapping anyone who excels—and the AI detection tool flagged the essay as “18% AI written.” The culprit? Using the word “devoid.” When the word was swapped out for “without,” the score magically dropped to 0%.

The irony of being forced to dumb down an essay about a story warning against the forced suppression of excellence was not lost on me. Or on my kid, who spent a frustrating afternoon removing words and testing sentences one at a time, trying to figure out what invisible tripwire the algorithm had set. The lesson the kid absorbed was clear: write less creatively, use simpler vocabulary, and don’t sound too good, because sounding good is now suspicious.

At the time, I worried this was going to become a much bigger problem. That the fear of AI “cheating” would create a culture that actively punished good writing and pushed students toward mediocrity. I was hoping I’d be wrong about that.

Turns out… I was not wrong.

Dadland Maye, a writing instructor who has taught at many universities, has published a piece in the Chronicle of Higher Education documenting exactly how this has played out across his classrooms—and it’s even worse than what I described. Because the AI detection regime hasn’t just pushed students to write worse. It has actively pushed students who never used AI to start using it.

This fall, a student told me she began using generative AI only after learning that stylistic features such as em dashes were rumored to trigger AI detectors. To protect herself from being flagged, she started running her writing through AI tools to see how it would register.

A student who was writing her own work, with her own words, started using AI tools defensively—not to cheat, but to make sure her own writing wouldn’t be accused of cheating. The tool designed to prevent AI use became the reason she started using AI.

This is the Cobra Effect in its purest form. The British colonial government in India offered a bounty for dead cobras to reduce the cobra population. People started breeding cobras to collect the bounty. When the government scrapped the program, the breeders released their now-worthless cobras, making the problem worse than before. AI detection tools are our cobra bounty. They were supposed to reduce AI use. Instead, they’re incentivizing it.

And this goes well beyond one student’s experience. Maye describes a pattern spreading across his classrooms:

One student, a native English speaker, had long been praised for writing above grade level. This semester, a transfer to a new college brought a new concern. Professors unfamiliar with her work would have no way of knowing that her confident voice had been earned. She turned to Google Gemini with a pointed inquiry about what raises red flags for college instructors. That inquiry opened a door. She learned how prompts shape outputs, when certain sentence patterns attract scrutiny, and ways in which stylistic confidence triggers doubt. The tool became a way to supplement coursework and clarify difficult material. Still, the practice felt wrong. “I feel like I’m cheating,” she told me, although the impulse that led her there had been defensive.

A student praised for years for being an exceptional writer now feels like a cheater because she had to learn how AI detection works in order to protect herself from being falsely accused. The surveillance apparatus has turned writing talent into a liability.

Then there’s this:

After being accused of using AI in a different course, another student came to me. The accusation was unfounded, yet the paper went ungraded. What followed unsettled me. “I feel like I have to stay abreast of the technology that placed me in that situation,” the student said, “so I can protect myself from it.” Protection took the form of immersion. Multiple AI subscriptions. Careful study of how detection works. A fluency in tools the student had never planned to use. The experience ended with a decision. Other professors would not be informed. “I don’t believe they will view me favorably.”

The false accusation resulted in the student subscribing to multiple AI services and studying how the detection systems work. Not because they wanted to cheat, but because they felt they had no other option for self-defense. And then they decided to keep quiet about it, because telling professors about their AI literacy would only invite more suspicion.

Look, I get it: some students are absolutely using AI to cheat, and that’s a real issue educators have to deal with. But the detection-first approach has created an incentive structure that’s almost perfectly backwards. Students who don’t use AI are punished for writing too well. Students who are falsely accused learn that the only defense is to become fluent in the very tools they’re accused of using. And the students savvy enough to actually cheat? They’re the ones best equipped to game the detectors. The tools aren’t catching the cheaters—they’re radicalizing the honest kids.

As Maye explains, this dynamic is especially brutal at open-access institutions like CUNY, where students already face enormous pressures:

At CUNY, many students work 20 to 40 hours a week. Many are multilingual. They encounter a different AI policy in nearly every course. When one professor bans AI entirely and another encourages its use, students learn to stay quiet rather than risk a misstep. The burden of inconsistency falls on them, and it takes a concrete form: time, revision, and self-surveillance. One student described spending hours rephrasing sentences that detectors flagged as AI-generated even though every word was original. “I revise and revise,” the student said. “It takes too much time.”

Just like my kid with the school-provided AI checker, Maye’s student wasted hours “revising” to avoid being flagged.

Students are spending hours rewriting their own original work because an algorithm decided it sounded too much like a machine. That’s time taken away from studying, working, caring for family, or, you know, actually learning to write better.

Learning to revise is a key part of learning to write. But revision should serve the intent of the writing, not appease a sketchy bot checker.

What Maye articulates so well is that the damage here goes beyond false positives and wasted time. The deeper problem is what these tools teach students about writing:

Detection tools communicate, even when instructors do not, that writing is a performance to be managed rather than a practice to be developed. Students learn that style can count against them, and that fluency invites suspicion.

We are teaching an entire generation of students that the goal of writing is to sound sufficiently unremarkable! Not to express an original thought, develop an argument, find your voice, or communicate with clarity and power—but to produce text bland enough that a statistical model doesn’t flag it.

The word “devoid” is too risky. Em dashes are suspicious. Confident prose is a red flag.

My kid’s Harrison Bergeron experience was, in retrospect, a perfect preview of all of this. Vonnegut warned about a society that forces everyone down to the lowest common denominator by handicapping anyone who shows ability. And here we are, with AI detection tools functioning as the Handicapper General of student writing, punishing fluency, penalizing vocabulary, and training students to sound as mediocre as possible to avoid triggering an algorithm that can’t even tell the difference between a thoughtful essay and a ChatGPT output.

Maye eventually did the only sensible thing: he stopped playing the game.

Midway through the semester, I stopped requiring students to disclose their AI use. My syllabi had asked for transparency, yet the expectation had become incoherent. The boundary between using AI and navigating the internet had blurred beyond recognition. Asking students to document every encounter with the technology would have turned writing into an accounting exercise. I shifted my approach. I told students they could use AI for research and outlining, while drafting had to remain their own. I taught them how to prompt responsibly and how to recognize when a tool began replacing their thinking.

Rather than taking a “guilt-first” approach, he took one that dealt with reality and focused on what would actually be best for the learning environment: teach students to use the tools appropriately, not as a shortcut, and don’t start from a position of suspicion.

The atmosphere in my classroom changed. Students approached me after class to ask how to use these tools well. One wanted to know how to prompt for research without copying output. Another asked how to tell when a summary drifted too far from its source. These conversations were pedagogical in nature. They became possible only after AI use stopped functioning as a disclosure problem and began functioning as a subject of instruction.

Once the surveillance regime was lifted, students could actually learn. They asked genuine questions about how to use tools effectively and ethically. They engaged with the technology as a subject worth understanding rather than a minefield to navigate. The teacher-student relationship shifted from adversarial to educational, which is, you know, kind of the whole point of school.

That line of Maye’s, “these conversations were pedagogical in nature,” keeps sticking in my brain. The fear of AI undermining teaching made it impossible to teach. Getting past that fear brought back the pedagogy. Incredible.

This piece should be required reading for every educator who thinks that “catching” students using AI is the most important thing.

As Maye discovered through painful experience, the answer is to stop treating AI as a policing problem and start treating it as an educational one. Teach students how to write. Teach them how to think critically about AI tools. Teach them when those tools are helpful, when they’re harmful, and when they’re a crutch. And for the love of all that is good, stop deploying detection tools that punish good writers and push everyone toward a bland, algorithmic mean.

We are, quite literally, limiting our students’ writing to satisfy a machine that can’t tell the difference. Vonnegut would have had a field day.

Read the whole story
mrmarchant
2 hours ago

Savage care



Neat ethical principles have nothing to say to doctors like me, faced with the brutal, bloody compromises of hospital life

- by Ronald W Dworkin

Read on Aeon

Read the whole story
mrmarchant
5 hours ago

Advanced Math for Kids: Numbers and Functions Follow The Same Rules


Hi Friends,

Happy Friday!

Kids Who Love Math is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

Here’s a Friday advanced math treat for your math-loving kid!


I. Numbers have nice rules

You and your elementary or middle school kid might remember some of these nice number rules:

  • When you add two numbers, you get another number:

    • a + b

  • The order in which you add two numbers doesn’t matter:

    • a + b = b + a

  • There is a zero that you can add to the number to get the original number back:

    • a + 0 = a

  • Every number has an opposite:

    • a + (-a) = 0

Do you remember the mathematical names for these rules? We only listed four. Can you remember the other ones?

Even if you don’t, let’s look at today’s idea about how functions also follow rules that look suspiciously like the ones above for numbers.

II. Functions also follow rules

As your kid learns more math, they’ll eventually learn about functions.

The idea is often introduced as a “Factory” where you put in an “input,” and the factory gives you an “output”. The “F” of factory helps kids start to remember the function notation.

Which means you can show a kid the factory “add two to the input” as:

f(x) = x + 2

If you give me the number 10, the factory adds 2 to the input (10), resulting in 12, then returns 12 as the output.

So f(10) = 10 + 2 = 12 for the factory defined as f(x) = x + 2.
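If your kid likes to tinker on a computer, the factory translates directly into code. A tiny Python sketch (my illustration, not part of the original lesson):

```python
# The "factory" f(x) = x + 2: take an input, add 2, return the output.
def f(x):
    return x + 2

print(f(10))  # 12
```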

Since you can define factories any way you like, it’s helpful to give them different names, such as “f(x)” and “g(x).”

Let’s do that now. Name one factory f(x) = x + 2 and the other factory g(x) = x * x.

What happens if we add both of our factories?

f(x) + g(x) = x + 2 + x * x

Notice that x + 2 + x * x could itself be a factory as well! Let’s name that h(x).

So h(x) = x + 2 + x * x

This means that when we add two functions, we get another function, so we stay within the world of functions.

What if we reorder the addition of our functions?

Does f(x) + g(x) = g(x) + f(x)?

In other words, does x + 2 + x * x = x * x + x + 2?

Well, yes, because if our factory function input is 10, it’s equivalent to asking whether 10 + 2 + 10 * 10 = 10 * 10 + 10 + 2, or 12 + 100 = 100 + 12.

Which means that the order doesn’t matter when we add two functions.
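Both ideas, staying in the world of functions and reordering the sum, can be spot-checked in a few lines of Python (a sketch using the names f, g, and h from the text):

```python
# The two factories from the text.
def f(x):
    return x + 2

def g(x):
    return x * x

# Their sum is itself a factory: h(x) = x + 2 + x * x.
def h(x):
    return f(x) + g(x)

print(h(10))  # 112, since 12 + 100 = 112

# Order doesn't matter: f(x) + g(x) == g(x) + f(x) for every input we try.
for x in range(-10, 11):
    assert f(x) + g(x) == g(x) + f(x)
```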

Let’s now define a 0 factory. Regardless of what you enter, it returns 0. We’ll define it as z(x) = 0.

What if we add our f(x) function and the z(x) function?

f(x) + z(x) = x + 2 + 0 = x + 2 = f(x)

This means there is a special function that acts like zero.

Lastly, in keeping with the four rules we listed above, let’s try to define a function that does the opposite.

Remember how every number has an opposite?

For numbers:

3 + (-3) = 0

Functions have opposites, too.

If our function factory is: f(x) = x + 2, then its opposite must be something that, when we add it to f(x), equals zero.

Let’s define that as o(x), with “o” standing for opposite.

f(x) + o(x) = 0

Substituting in for f(x), we get:

x + 2 + o(x) = 0

We can subtract 2 from both sides:

x + o(x) = -2

We can then subtract x from both sides:

o(x) = -x - 2

Testing whether f(x) + o(x) does equal zero, we can substitute for both

(x + 2) + (-x - 2)

= x - x + 2 - 2

= 0

Note that o(x) is still a function, so we continue to live in the function world.
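The zero factory and the opposite can be verified the same way; here is a Python sketch using the names z and o from the text:

```python
def f(x):
    return x + 2    # the original factory

def z(x):
    return 0        # the zero factory: returns 0 no matter the input

def o(x):
    return -x - 2   # the opposite of f, worked out above

for x in range(-10, 11):
    assert f(x) + z(x) == f(x)   # adding zero gives f back
    assert f(x) + o(x) == 0      # f plus its opposite is zero
```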

Based on our small exploration just now, it looks like numbers and functions follow the same rules.

III. Numbers and Functions appear to have the same structure

Even though numbers and functions look completely different, they appear to obey the same rules.

When different things obey the same rules, mathematicians say they have the same structure.

This is what mathematics begins to be about once you reach abstract algebra.

At this point, we can say that the structure we’re looking at has the following rules:

  • You can add two things and stay in the system.

  • Order doesn’t matter.

  • Grouping doesn’t matter.

  • There is a zero.

  • Every element has an opposite.

  • You can multiply by numbers (scalars). As a tiny example, if f(x) = x + 2, then 2 * f(x) = 2 * (x + 2) = 2x + 4, which is still a function.
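All of these rules can be spot-checked together in Python. The helpers add, scale, and same below are my own names for illustration; two functions count as "the same" here if they agree on a range of test inputs:

```python
def f(x): return x + 2
def g(x): return x * x
def z(x): return 0          # the zero function

def add(p, q):
    # The sum of two functions is another function (closure).
    return lambda x: p(x) + q(x)

def scale(c, p):
    # Multiplying a function by a number gives another function.
    return lambda x: c * p(x)

def same(p, q):
    # Two functions are "the same" if they agree on every test input.
    return all(p(x) == q(x) for x in range(-10, 11))

assert same(add(f, g), add(g, f))                  # order doesn't matter
assert same(add(add(f, g), z), add(f, add(g, z)))  # grouping doesn't matter
assert same(add(f, z), f)                          # there is a zero
assert same(add(f, scale(-1, f)), z)               # every element has an opposite
assert same(scale(2, f), lambda x: 2 * x + 4)      # 2 * f(x) = 2x + 4
```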

As you progress further into mathematics, you discover that many other things follow these same rules:

  • Sequences

  • Arrows (vectors in R^2)

  • Polynomials

  • Geometric Vectors

IV. Have a go with your kids, coming up with different functions

Try playing with different types of functions with your kids to see whether the same rules hold.

Try f(x) = x^2, as in x squared, and g(x) = 3x. Do the rules still work?
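One way to check the exercise by computer, a Python sketch with the f and g suggested above:

```python
def f(x):
    return x ** 2   # f(x) = x squared

def g(x):
    return 3 * x    # g(x) = 3x

for x in range(-10, 11):
    assert f(x) + g(x) == g(x) + f(x)   # order still doesn't matter
    assert f(x) + 0 == f(x)             # adding the zero function changes nothing
    assert f(x) + (-f(x)) == 0          # the opposite still works
```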

V. The Big Idea

As you get further into mathematics, it stops being about numbers and starts being about noticing when very different things follow the same rules.

This ability to notice deep similarities is one of mathematicians’ superpowers.

The Polish mathematician Stefan Banach (founder of modern functional analysis) put this beautifully in a quote about the ladder of mathematical insight:

“A mathematician is a person who can find analogies between theorems; a better mathematician is one who can see analogies between proofs, and the best mathematician can notice analogies between theories. One can imagine that the ultimate mathematician is one who can see analogies between analogies.”

VI. Closing

That’s all for today :) For more Kids Who Love Math treats, check out our archives.

Stay Mathy!

Talk soon,
Sebastian


Read the whole story
mrmarchant
9 hours ago

Teacher v Chatbot: My Journey Into the Classroom in the Age Of AI



“I’d become a teacher in large part because I wanted to spend time with young people’s writing, honouring it with close attention,” writes Peter C Baker in this piece for The Guardian. But what happens when writing—and even reading—without the help of AI becomes a foreign concept in the classroom? This topic is not new, but Baker’s passion for creating AI-free teaching is inspiring.

Emily’s students all had school-issued laptops, and her computer had a program that allowed her to surveil the content of every one of her students’ screens; they all appeared on the screen simultaneously, in a grid that recalled a bank of CCTV monitors. Using this program was always discomfiting – Big Brother, c’est moi – and always transfixing. Some students didn’t use AI at all, at least in class. Others turned to it every chance they got, feeding in whatever question they were working on almost as a reflex. At least one student was in the habit of putting every new subject into ChatGPT, having it generate notes that he could refer to if called on. Often, I saw students getting funnelled toward AI use even when they hadn’t necessarily been looking for it. I got used to watching a student Google a subject (“key themes in Romeo and Juliet”), read the AI-generated answer that now appears atop most Google search results, click “Dive deeper in AI mode” – and suddenly be chatting with Gemini, Google’s chatbot, which was always ready to advertise its own capabilities. “Should I elaborate on one or more of these themes? Should I draft a first paragraph for an essay on the subject?”

Read the whole story
mrmarchant
10 hours ago

Lots of great defecation physics here: “66 percent...


Lots of great defecation physics here: “66 percent of animals take between 5 and 19 seconds to defecate. It’s a…small range, given that elephant feces have a volume of 20 liters, nearly a thousand times more than a dog’s, at 10 milliliters.”

Read the whole story
mrmarchant
10 hours ago

New Strides Made on Deceptively Simple ‘Lonely Runner’ Problem


Picture a bizarre training exercise: A group of runners starts jogging around a circular track, with each runner maintaining a unique, constant pace. Will every runner end up “lonely,” or relatively far from everyone else, at least once, no matter their speeds? Mathematicians conjecture that the answer is yes. The “lonely runner” problem might seem simple and inconsequential, but it crops up…

Source



Read the whole story
mrmarchant
10 hours ago