
Pluralistic: The web is bearable with RSS (07 Mar 2026)



Today's links



An anatomical drawing of a cross-section of a man's head. The eyeball has been replaced by an RSS logo. To the left of the face is a 'code waterfall' effect as seen in the credit sequences of the Wachowskis' 'Matrix' movie. To the right are clouds of grey roiling clouds, infiltrating the brain as well.

The web is bearable with RSS (permalink)

Never let them tell you that enshittification was a mystery. Enshittification isn't downstream of the "iron laws of economics" or an unrealistic demand by "consumers" to get stuff for free.

Enshittification comes from specific policy choices, made by named individuals, that had the foreseeable and foreseen result of making the web worse:

https://pluralistic.net/2025/10/07/take-it-easy/#but-take-it

Like, there was once a time when an ever-increasing proportion of web users kept tabs on what was going on with RSS. RSS is a simple, powerful way for websites to publish "feeds" of their articles, and for readers to subscribe to those feeds and get notified when something new was posted, and even read that new material right there in your RSS reader tab or app.

RSS is simple and versatile. It's the backbone of podcasts (though Apple and Spotify have done their best to kill it, along with public broadcasters like the BBC, all of whom want you to switch to proprietary apps that spy on you and control you). It's how many automated processes communicate with one another, untouched by human hands. But above all, it's a way to find out when something new has been published on the web.
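To see just how simple the format is: an RSS feed is a small XML document that even a standard library can parse. Here's a minimal sketch in Python (the feed contents and URL below are invented for illustration, not any real site's feed):

```python
# A made-up RSS 2.0 feed, parsed with nothing but the standard library.
import xml.etree.ElementTree as ET

feed_xml = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item>
      <title>First post</title>
      <link>https://example.com/first-post</link>
      <description>A full-text feed can carry the whole article here.</description>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(feed_xml)
# Each <item> is one article: a title, a link, and (optionally) full text.
items = [(item.findtext("title"), item.findtext("link"))
         for item in root.iter("item")]
print(items)  # → [('First post', 'https://example.com/first-post')]
```

A reader app is, at heart, just this loop plus a "have I seen this item before?" check — which is why the format has proven so durable.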

RSS's liftoff was driven by Google, who released a great RSS reader called "Google Reader" in 2005. Reader was free and reliable, and other RSS readers struggled to compete with it, with the effect that most of us just ended up using Google's product, which made it even harder to launch a competitor.

But in 2013, Google quietly knifed Reader. I've always found the timing suspicious: it came right in the middle of Google's desperate scramble to become Facebook, by means of a product called Google Plus (G+). Famously, Google product managers' bonuses depended on how much G+ engagement they drove, with the effect that every Google product suddenly sprouted G+ buttons that either did something stupid, or something that confusingly duplicated existing functionality (like commenting on YouTube videos).

Google treated G+ as an existential priority, and for good reason. Google was running out of growth potential, having comprehensively conquered Search, and having repeatedly demonstrated that Search was a one-off success, with nearly every other made-in-Google product dying off. What successes Google could claim were far more modest, like Gmail, Google's Hotmail clone. Google augmented its growth by buying other people's companies (Blogger, YouTube, Maps, ad-tech, Docs, Android, etc), but its internal initiatives were turkeys.

Eventually, Wall Street was going to conclude that Google had reached the end of its growth period, and Google's shares would fall to a fraction of their value, with a price-to-earnings ratio commensurate with a "mature" company.

Google needed a new growth story, and "Google will conquer Facebook's market" was a pretty good one. After all, investors didn't have to speculate about whether Facebook was profitable, they could just look at Facebook's income statements, which Google proposed to transfer to its own balance sheet. The G+ full-court press was as much a narrative strategy as a business strategy: by tying product managers' bonuses to a metric that demonstrated G+'s rise, Google could convince Wall Street that they had a lot of growth on their horizon.

Of course, tying individual executives' bonuses to making a number go up has a predictably perverse outcome. As Goodhart's law has it, "Any metric becomes a target, and then ceases to be a useful metric." As soon as key decision-makers' personal net worth depended on making the G+ number go up, they crammed G+ everywhere and started to sneak in ways to trigger unintentional G+ sessions. This still happens today – think of how often you accidentally invoke an unbanishable AI feature while using Google's products (and products from rival giant, moribund companies relying on an AI narrative to convince investors that they will continue to grow):

https://pluralistic.net/2025/05/02/kpis-off/#principal-agentic-ai-problem

Like I said, Google Reader died at the peak of Google's scramble to make the G+ number go up. I have a sneaking suspicion that someone at Google realized that Reader's core functionality (helping users discover, share and discuss interesting new web pages) was exactly the kind of thing Google wanted us to use G+ for, and so they killed Reader in a bid to drive us to the stalled-out service they'd bet the company on.

If Google killed Reader in a bid to push users to discover and consume web pages using a proprietary social media service, they succeeded. Unfortunately, the social media service they pushed users into was Facebook – and G+ died shortly thereafter.

For more than a decade, RSS has lain dormant. Many, many websites still emit RSS feeds. It's a default behavior for WordPress sites, for Ghost and Substack sites, for Tumblr and Medium, for Bluesky and Mastodon. You can follow edits to Wikipedia pages by RSS, and also updates to parcels that have been shipped to you through major couriers. Web builders like Jason Kottke continue to surface RSS feeds for elaborate, delightful blogrolls:

https://kottke.org/rolodex/

There are many good RSS readers. I've been paying for Newsblur since 2011, and consider the $36 I send them every year to be a very good investment:

https://newsblur.com/

But RSS continues to be a power user-coded niche, despite the fact that RSS readers are really easy to set up and – crucially – make using the web much easier. Last week, Caroline Crampton (co-editor of The Browser) wrote about her experiences using RSS:

https://www.carolinecrampton.com/the-view-from-rss/

As Crampton points out, much of the web (including some of the cruftiest, most enshittified websites) publishes full-text RSS feeds, meaning that you can read their articles right there in your RSS reader, with no ads, no popups, no nag-screens asking you to sign up for a newsletter, verify your age, or submit to their terms of service.

It's almost impossible to overstate how superior RSS is to the median web page. Imagine if the newsletters you followed were rendered with black, clear type on a plain white background (rather than the sadistically infinitesimal, greyed-out type that designers favor thanks to the unkillable urban legend that black type on a white screen causes eye-strain). Imagine reading the web without popups, without ads, without nag screens. Imagine reading the web without interruptors or "keep reading" links.

Now, not every website publishes a fulltext feed. Often, you will just get a teaser, and if you want to read the whole article, you have to click through. I have a few tips for making other websites – even ones like Wired and The Intercept – as easy to read as an RSS reader, at least for Firefox users.

Firefox has a built-in "Reader View" that re-renders the contents of a web-page as black type on a white background. Firefox does some kind of mysterious calculation to determine whether a page can be displayed in Reader View, but you can override this with the Activate Reader View extension, which adds a Reader View toggle for every page:

https://addons.mozilla.org/en-US/firefox/addon/activate-reader-view/

Lots of websites (like The Guardian) want you to log in before you can read them, and even if you pay to subscribe, these sites often want you to log in again every time you visit (especially if you're running a full suite of privacy blockers). You can skip this whole process by simply toggling Reader View as soon as you get the login pop-up. On some websites (like The Verge and Wired), you'll only see the first couple of paragraphs of the article in Reader View. But if you then hit reload, the whole article loads.

Activate Reader View puts a Reader View toggle on every page, but clicking that toggle sometimes throws up an error message, when the page is so cursed that Firefox can't figure out what part of it is the article. When this happens, you're stuck reading the page in the site's own default (and usually terrible) view. As you scroll down the page, you will often hit pop-ups that try to get you to sign up for a mailing list, agree to terms of service, or do something else you don't want to do. Rather than hunting for the button to close these pop-ups (or agree to objectionable terms of service), you can install "Kill Sticky," a bookmarklet that reaches into the page's layout files and deletes any element that isn't designed to scroll with the rest of the text:

https://github.com/t-mart/kill-sticky

Other websites (like Slashdot and Core77) load computer-destroying Javascript (often as part of an anti-adblock strategy). For these, I use the "Javascript Toggle On and Off" plugin, which lets you create a blacklist of websites that aren't allowed to run any scripts:

https://addons.mozilla.org/en-US/firefox/addon/javascript-toggler/

Some websites (like Yahoo) load so much crap that they defeat all of these countermeasures. For these websites, I use the "Element Blocker" plug-in, which lets you delete parts of the web-page, either for a single session, or permanently:

https://addons.mozilla.org/en-US/firefox/addon/element-blocker/

It's ridiculous that websites put up so many barriers to a pleasant reading experience. A slow-moving avalanche of enshittogenic phenomena got us here. There's corporate enshittification, like Google/Meta's monopolization of ads and Meta/Twitter's crushing of the open web. There's regulatory enshittification, like the EU's failure to crack down on companies that pretend that forcing you to click an endless stream of "cookie consent" popups is the same as complying with the GDPR.

Those are real problems, but they don't have to be your problem, at least when you want to read the web. A couple years ago, I wrote a guide to using RSS to improve your web experience, evade lock-in and duck algorithmic recommendation systems:

https://pluralistic.net/2024/10/16/keep-it-really-simple-stupid/#read-receipts-are-you-kidding-me-seriously-fuck-that-noise

Customizing your browser takes this to the next level, disenshittifying many websites – even if they block or restrict RSS. Most of this stuff only applies to desktop browsers, though. Mobile browsers are far more locked down (even mobile Firefox – remember, every iOS browser, including Firefox, is just a re-skinned version of Safari, thanks to Apple's ban on rival browser engines). And of course, apps are the worst. An app is just a website skinned in the right kind of IP to make it a crime to improve it in any way:

https://pluralistic.net/2024/05/07/treacherous-computing/#rewilding-the-internet

And even if you do customize your mobile browser (Android Firefox lets you do some of this stuff), many apps (Twitter, Tumblr) open external links in their own browser (usually an in-app Chrome instance) with all the bullshit that entails.

The promise of locked-down mobile platforms was that they were going to "just work," without any of the confusing customization options of desktop OSes. It turns out that taking away those confusing customization options was an invitation to every enshittifier to turn the web into an unreadable, extractive, nagging mess. This was the foreseeable – and foreseen – consequence of a new kind of technology where everything that isn't mandatory is prohibited:

https://memex.craphound.com/2010/04/01/why-i-wont-buy-an-ipad-and-think-you-shouldnt-either/


Hey look at this (permalink)



A shelf of leatherbound history books with a gilt-stamped series title, 'The World's Famous Events.'

Object permanence (permalink)

#25yrsago 200 Eyemodule photos from Disneyland https://craphound.com/030401/

#20yrsago Fourth Amendment luggage tape https://ideas.4brad.com/node/367

#15yrsago Glenn Beck’s syndicator runs an astroturf-on-demand call-in service for radio programs https://web.archive.org/web/20110216081007/http://www.tabletmag.com/life-and-religion/58759/radio-daze/

#15yrsago 20 lies from Scott Walker https://web.archive.org/web/20110308062319/https://filterednews.wordpress.com/2011/03/05/20-lies-and-counting-told-by-gov-walker/

#10yrsago The correlates of Trumpism: early mortality, lack of education, unemployment, offshored jobs https://web.archive.org/web/20160415000000*/https://www.washingtonpost.com/news/wonk/wp/2016/03/04/death-predicts-whether-people-vote-for-donald-trump/

#10yrsago Hacking a phone’s fingerprint sensor in 15 mins with $500 worth of inkjet printer and conductive ink https://web.archive.org/web/20160306194138/http://www.cse.msu.edu/rgroups/biometrics/Publications/Fingerprint/CaoJain_HackingMobilePhonesUsing2DPrintedFingerprint_MSU-CSE-16-2.pdf

#10yrsago Despite media consensus, Bernie Sanders is raising more money, from more people, than any candidate, ever https://web.archive.org/web/20160306110848/https://www.washingtonpost.com/politics/sanders-keeps-raising-money–and-spending-it-a-potential-problem-for-clinton/2016/03/05/a8d6d43c-e2eb-11e5-8d98-4b3d9215ade1_story.html

#10yrsago Calculating US police killings using methodologies from war-crimes trials https://granta.com/violence-in-blue/

#1yrago Brother makes a demon-haunted printer https://pluralistic.net/2025/03/05/printers-devil/#show-me-the-incentives-i-will-show-you-the-outcome

#1yrago Two weak spots in Big Tech economics https://pluralistic.net/2025/03/06/privacy-last/#exceptionally-american


Upcoming appearances (permalink)

A photo of me onstage, giving a speech, pounding the podium.



A screenshot of me at my desk, doing a livecast.

Recent appearances (permalink)



A grid of my books with Will Staehle covers.

Latest books (permalink)



A cardboard book box with the Macmillan logo.

Upcoming books (permalink)

  • "The Reverse-Centaur's Guide to AI," a short book about being a better AI critic, Farrar, Straus and Giroux, June 2026

  • "Enshittification: Why Everything Suddenly Got Worse and What to Do About It" (the graphic novel), First Second, 2026

  • "The Post-American Internet," a geopolitical sequel of sorts to Enshittification, Farrar, Straus and Giroux, 2027

  • "Unauthorized Bread": a middle-grades graphic novel adapted from my novella about refugees, toasters and DRM, FirstSecond, 2027

  • "The Memex Method," Farrar, Straus, Giroux, 2027



Colophon (permalink)

Today's top sources:

Currently writing: "The Post-American Internet," a sequel to "Enshittification," about the better world the rest of us get to have now that Trump has torched America (1012 words today, 45361 total)

  • "The Reverse Centaur's Guide to AI," a short book for Farrar, Straus and Giroux about being an effective AI critic. LEGAL REVIEW AND COPYEDIT COMPLETE.

  • "The Post-American Internet," a short book about internet policy in the age of Trumpism. PLANNING.

  • A Little Brother short story about DIY insulin PLANNING


This work – excluding any serialized fiction – is licensed under a Creative Commons Attribution 4.0 license. That means you can use it any way you like, including commercially, provided that you attribute it to me, Cory Doctorow, and include a link to pluralistic.net.

https://creativecommons.org/licenses/by/4.0/

Quotations and images are not included in this license; they are included either under a limitation or exception to copyright, or on the basis of a separate license. Please exercise caution.


How to get Pluralistic:

Blog (no ads, tracking, or data-collection):

Pluralistic.net

Newsletter (no ads, tracking, or data-collection):

https://pluralistic.net/plura-list

Mastodon (no ads, tracking, or data-collection):

https://mamot.fr/@pluralistic

Bluesky (no ads, possible tracking and data-collection):

https://bsky.app/profile/doctorow.pluralistic.net

Medium (no ads, paywalled):

https://doctorow.medium.com/

Twitter (mass-scale, unrestricted, third-party surveillance and advertising):

https://twitter.com/doctorow

Tumblr (mass-scale, unrestricted, third-party surveillance and advertising):

https://mostlysignssomeportents.tumblr.com/tagged/pluralistic

"When life gives you SARS, you make sarsaparilla" -Joey "Accordion Guy" DeVilla

READ CAREFULLY: By reading this, you agree, on behalf of your employer, to release me from all obligations and waivers arising from any and all NON-NEGOTIATED agreements, licenses, terms-of-service, shrinkwrap, clickwrap, browsewrap, confidentiality, non-disclosure, non-compete and acceptable use policies ("BOGUS AGREEMENTS") that I have entered into with your employer, its partners, licensors, agents and assigns, in perpetuity, without prejudice to my ongoing rights and privileges. You further represent that you have the authority to release me from any BOGUS AGREEMENTS on behalf of your employer.

ISSN: 3066-764X


Language Birth


Since 1960, the world has lost hundreds of languages — and gained thousands.


We’re Training Students To Write Worse To Prove They’re Not Robots, And It’s Pushing Them To Use More AI


About a year and a half ago, I wrote about my kid’s experience with an AI checker tool that was pre-installed on a school-issued Chromebook. The assignment had been to write an essay about Kurt Vonnegut’s Harrison Bergeron—a story about a dystopian society that enforces “equality” by handicapping anyone who excels—and the AI detection tool flagged the essay as “18% AI written.” The culprit? Using the word “devoid.” When the word was swapped out for “without,” the score magically dropped to 0%.

The irony of being forced to dumb down an essay about a story warning against the forced suppression of excellence was not lost on me. Or on my kid, who spent a frustrating afternoon removing words and testing sentences one at a time, trying to figure out what invisible tripwire the algorithm had set. The lesson the kid absorbed was clear: write less creatively, use simpler vocabulary, and don’t sound too good, because sounding good is now suspicious.

At the time, I worried this was going to become a much bigger problem. That the fear of AI “cheating” would create a culture that actively punished good writing and pushed students toward mediocrity. I was hoping I’d be wrong about that.

Turns out… I was not wrong.

Dadland Maye, a writing instructor who has taught at many universities, has published a piece in the Chronicle of Higher Education documenting exactly how this has played out across his classrooms—and it’s even worse than what I described. Because the AI detection regime hasn’t just pushed students to write worse. It has actively pushed students who never used AI to start using it.

This fall, a student told me she began using generative AI only after learning that stylistic features such as em dashes were rumored to trigger AI detectors. To protect herself from being flagged, she started running her writing through AI tools to see how it would register.

A student who was writing her own work, with her own words, started using AI tools defensively—not to cheat, but to make sure her own writing wouldn’t be accused of cheating. The tool designed to prevent AI use became the reason she started using AI.

This is the Cobra Effect in its purest form. The British colonial government in India offered a bounty for dead cobras to reduce the cobra population. People started breeding cobras to collect the bounty. When the government scrapped the program, the breeders released their now-worthless cobras, making the problem worse than before. AI detection tools are our cobra bounty. They were supposed to reduce AI use. Instead, they’re incentivizing it.

And this goes well beyond one student’s experience. Maye describes a pattern spreading across his classrooms:

One student, a native English speaker, had long been praised for writing above grade level. This semester, a transfer to a new college brought a new concern. Professors unfamiliar with her work would have no way of knowing that her confident voice had been earned. She turned to Google Gemini with a pointed inquiry about what raises red flags for college instructors. That inquiry opened a door. She learned how prompts shape outputs, when certain sentence patterns attract scrutiny, and ways in which stylistic confidence trigger doubt. The tool became a way to supplement coursework and clarify difficult material. Still, the practice felt wrong. “I feel like I’m cheating,” she told me, although the impulse that led her there had been defensive.

A student praised for years for being an exceptional writer now feels like a cheater because she had to learn how AI detection works in order to protect herself from being falsely accused. The surveillance apparatus has turned writing talent into a liability.

Then there’s this:

After being accused of using AI in a different course, another student came to me. The accusation was unfounded, yet the paper went ungraded. What followed unsettled me. “I feel like I have to stay abreast of the technology that placed me in that situation,” the student said, “so I can protect myself from it.” Protection took the form of immersion. Multiple AI subscriptions. Careful study of how detection works. A fluency in tools the student had never planned to use. The experience ended with a decision. Other professors would not be informed. “I don’t believe they will view me favorably.”

The false accusation resulted in the student subscribing to multiple AI services and studying how the detection systems work. Not because they wanted to cheat, but because they felt they had no other option for self-defense. And then they decided to keep quiet about it, because telling professors about their AI literacy would only invite more suspicion.

Look, I get it: some students are absolutely using AI to cheat, and that’s a real issue educators have to deal with. But the detection-first approach has created an incentive structure that’s almost perfectly backwards. Students who don’t use AI are punished for writing too well. Students who are falsely accused learn that the only defense is to become fluent in the very tools they’re accused of using. And the students savvy enough to actually cheat? They’re the ones best equipped to game the detectors. The tools aren’t catching the cheaters—they’re radicalizing the honest kids.

As Maye explains, this dynamic is especially brutal at open-access institutions like CUNY, where students already face enormous pressures:

At CUNY, many students work 20 to 40 hours a week. Many are multilingual. They encounter a different AI policy in nearly every course. When one professor bans AI entirely and another encourages its use, students learn to stay quiet rather than risk a misstep. The burden of inconsistency falls on them, and it takes a concrete form: time, revision, and self-surveillance. One student described spending hours rephrasing sentences that detectors flagged as AI-generated even though every word was original. “I revise and revise,” the student said. “It takes too much time.”

Just like my kid and the school-provided AI checker, Maye’s student spent a bunch of wasted time “revising” to avoid being flagged.

Students spending hours rewriting their own original work—work that they wrote—because an algorithm decided it sounded too much like a machine. That’s time taken away from studying, working, caring for family, or, you know, actually learning to write better.

Learning to revise is a key part of learning to write. But revisions should be done to serve the intent of the writing. Not to appease a sketchy bot checker.

What Maye articulates so well is that the damage here goes beyond false positives and wasted time. The deeper problem is what these tools teach students about writing:

Detection tools communicate, even when instructors do not, that writing is a performance to be managed rather than a practice to be developed. Students learn that style can count against them, and that fluency invites suspicion.

We are teaching an entire generation of students that the goal of writing is to sound sufficiently unremarkable! Not to express an original thought, develop an argument, find your voice, or communicate with clarity and power—but to produce text bland enough that a statistical model doesn’t flag it.

The word “devoid” is too risky. Em dashes are suspicious. Confident prose is a red flag.

My kid’s Harrison Bergeron experience was, in retrospect, a perfect preview of all of this. Vonnegut warned about a society that forces everyone down to the lowest common denominator by handicapping anyone who shows ability. And here we are, with AI detection tools functioning as the Handicapper General of student writing, punishing fluency, penalizing vocabulary, and training students to sound as mediocre as possible to avoid triggering an algorithm that can’t even tell the difference between a thoughtful essay and a ChatGPT output.

Maye eventually did the only sensible thing: he stopped playing the game.

Midway through the semester, I stopped requiring students to disclose their AI use. My syllabi had asked for transparency, yet the expectation had become incoherent. The boundary between using AI and navigating the internet had blurred beyond recognition. Asking students to document every encounter with the technology would have turned writing into an accounting exercise. I shifted my approach. I told students they could use AI for research and outlining, while drafting had to remain their own. I taught them how to prompt responsibly and how to recognize when a tool began replacing their thinking.

Rather than taking a “guilt-first” approach, he took one that dealt with reality and focused on what would actually be best for the learning environment: teach students to use the tools appropriately, not as a shortcut, and don’t start from a position of suspicion.

The atmosphere in my classroom changed. Students approached me after class to ask how to use these tools well. One wanted to know how to prompt for research without copying output. Another asked how to tell when a summary drifted too far from its source. These conversations were pedagogical in nature. They became possible only after AI use stopped functioning as a disclosure problem and began functioning as a subject of instruction.

Once the surveillance regime was lifted, students could actually learn. They asked genuine questions about how to use tools effectively and ethically. They engaged with the technology as a subject worth understanding rather than a minefield to navigate. The teacher-student relationship shifted from adversarial to educational, which is, you know, kind of the whole point of school.

That line Maye uses, “these conversations were pedagogical in nature,” keeps sticking in my brain. The fear of AI undermining teaching made it impossible to teach. Getting past that fear brought back the pedagogy. Incredible.

This piece should be required reading for every educator thinking that “catching” students using AI is the most important thing.

As Maye discovered through painful experience, the answer is to stop treating AI as a policing problem and start treating it as an educational one. Teach students how to write. Teach them how to think critically about AI tools. Teach them when those tools are helpful, when they’re harmful, and when they’re a crutch. And for the love of all that is good, stop deploying detection tools that punish good writers and push everyone toward a bland, algorithmic mean.

We are, quite literally, limiting our students’ writing to satisfy a machine that can’t tell the difference. Vonnegut would have had a field day.


Savage care


Surgeons in blue attire performing an operation in a theatre, with medical tools visible.

Neat ethical principles have nothing to say to doctors like me, faced with the brutal, bloody compromises of hospital life

- by Ronald W Dworkin

Read on Aeon


Advanced Math for Kids: Numbers and Functions Follow The Same Rules


Hi Friends,

Happy Friday!


Here’s a Friday advanced math treat for your math-loving kid!


I. Numbers have nice rules

You and your elementary or middle school kid might remember some of these nice number rules:

  • When you add two numbers, you get another number:

    • a + b

  • The order in which you add two numbers doesn’t matter:

    • a + b = b + a

  • There is a zero that you can add to the number to get the original number back:

    • a + 0 = a

  • Every number has an opposite:

    • a + -a = 0

Do you remember the mathematical names for these rules? We only listed four. Can you remember the other ones?

Even if you don’t, let’s look at today’s idea about how functions also follow rules that look suspiciously like the ones above for numbers.

II. Functions also follow rules

As your kid learns more math, they’ll eventually learn about functions.

The idea is often introduced as a “Factory” where you put in an “input,” and the factory gives you an “output.” The “F” of factory helps kids start to remember the function notation.

Which means you can show a kid the factory “add two to the input” as:

f(x) = x + 2

If you give me the number 10, the factory adds 2 to the input (10), resulting in 12, then returns 12 as the output.

So f(10) = 10 + 2 = 12 for the factory defined as f(x) = x + 2.

Since you can define factories any way you like, it’s helpful to give them different names, such as “f(x)” and “g(x).”

Let’s do that now. Name one factory f(x) = x + 2 and the other factory g(x) = x * x.

What happens if we add both of our factories?

f(x) + g(x) = x + 2 + x * x

Notice that x + 2 + x * x could itself be a factory as well! Let’s name that h(x).

So h(x) = x + 2 + x * x

This means that when we add two functions, we get another function, so we stay within the world of functions.

What if we reorder the addition of our functions?

Does f(x) + g(x) = g(x) + f(x)?

In other words, does x + 2 + x * x = x * x + x + 2?

Well, yes, because if our factory function input is 10, it’s equivalent to asking whether 10 + 2 + 10 * 10 = 10 * 10 + 10 + 2, or 12 + 100 = 100 + 12.

Which means that the order doesn’t matter when we add two functions.

Let’s now define a 0 factory. Regardless of what you enter, it returns 0. We’ll define it as z(x) = 0.

What if we add our f(x) function and the z(x) function?

f(x) + z(x) = x + 2 + 0 = x + 2 = f(x)

This means there is a special function that acts like zero.

Lastly, in keeping with the four rules we listed above, let’s try to define a function that does the opposite.

Remember how every number has an opposite?

For numbers:

3 + (-3) = 0

Functions have opposites, too.

If our function factory is: f(x) = x + 2, then its opposite must be something that, when we add it to f(x), equals zero.

Let’s define that as o(x), with “o” standing for opposite.

f(x) + o(x) = 0

Substituting in for f(x), we get:

x + 2 + o(x) = 0

We can subtract 2 from both sides:

x + o(x) = - 2

We can then subtract x from both sides:

o(x) = - x - 2

Testing whether f(x) + o(x) does equal zero, we can substitute for both

(x + 2) + (-x - 2)

= x - x + 2 - 2

= 0

Note that o(x) is still a function, so we continue to live in the function world.
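The opposite function can be verified the same way (a sketch; `o` is the article's name for the opposite of `f`):

```python
def f(x):
    return x + 2

def o(x):
    # The opposite of f: chosen so that f(x) + o(x) == 0.
    return -x - 2

for x in range(-50, 51):
    assert f(x) + o(x) == 0
print("o really is the opposite of f")
```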

Based on our small exploration, it looks like numbers and functions follow the same rules.

III. Numbers and Functions appear to have the same structure

Even though numbers and functions look completely different, they appear to obey the same rules.

When different things obey the same rules, mathematicians say they have the same structure.

This is what mathematics becomes about once you reach abstract algebra.

At this point, we can say that the structure we’re looking at has the following rules:

  • You can add two things and stay in the system.

  • Order doesn’t matter.

  • Grouping doesn’t matter.

  • There is a zero.

  • Every element has an opposite.

  • You can multiply by numbers (scalars). As a tiny example, if f(x) = x + 2, then 2 * f(x) = 2 * (x + 2) = 2x + 4, which is still a function.
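The scalar-multiplication rule in the last bullet can be checked the same way. In this sketch the helper name `scale` is my own, not from the article:

```python
def f(x):
    return x + 2

def scale(c, func):
    # Multiply a factory by a number: scale its output by c.
    def scaled(x):
        return c * func(x)
    return scaled

two_f = scale(2, f)
for x in range(-50, 51):
    assert two_f(x) == 2 * x + 4
print("2 * f is the function x -> 2x + 4, still a function")
```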

As you progress further into mathematics, you discover that many other things follow these same rules:

  • Sequences

  • Arrows (vectors in R^2)

  • Polynomials

  • Geometric Vectors

IV. Have a go with your kids, coming up with different functions

Try playing with different types of functions with your kids to see if the same rules still hold.

Try f(x) = x^2, as in x squared, and g(x) = 3x. Do the rules still work?
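Here is a sketch of that experiment in Python, checking the main rules for f(x) = x² and g(x) = 3x (the zero function `z` and the opposite `o` are defined the same way as before):

```python
def f(x):
    return x ** 2  # x squared

def g(x):
    return 3 * x

def z(x):
    return 0

def o(x):
    # The opposite of f: f(x) + o(x) should always be 0.
    return -(x ** 2)

for x in range(-20, 21):
    assert f(x) + g(x) == g(x) + f(x)  # order doesn't matter
    assert f(x) + z(x) == f(x)         # there is a zero
    assert f(x) + o(x) == 0            # every function has an opposite
print("All the rules still hold for x**2 and 3x")
```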

V. The Big Idea

As you get further into mathematics, it stops being about numbers and starts being about noticing when very different things follow the same rules.

This ability to notice deep similarities is one of mathematicians’ superpowers.

Polish mathematician Stefan Banach (a founder of modern functional analysis) put this beautifully, describing a ladder of mathematical insight:

“A mathematician is a person who can find analogies between theorems; a better mathematician is one who can see analogies between proofs, and the best mathematician can notice analogies between theories. One can imagine that the ultimate mathematician is one who can see analogies between analogies.”

VI. Closing

That’s all for today :) For more Kids Who Love Math treats, check out our archives.

Stay Mathy!

Talk soon,
Sebastian

Kids Who Love Math is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.


Teacher v Chatbot: My Journey Into the Classroom in the Age Of AI


“I’d become a teacher in large part because I wanted to spend time with young people’s writing, honouring it with close attention,” writes Peter C Baker in this piece for The Guardian. But what happens when writing—and even reading—without the help of AI becomes a foreign concept in the classroom? This topic is not new, but Baker’s passion for creating AI-free teaching is inspiring.

Emily’s students all had school-issued laptops, and her computer had a program that allowed her to surveil the content of every one of her students’ screens; they all appeared on the screen simultaneously, in a grid that recalled a bank of CCTV monitors. Using this program was always discomfiting – Big Brother, c’est moi – and always transfixing. Some students didn’t use AI at all, at least in class. Others turned to it every chance they got, feeding in whatever question they were working on almost as a reflex. At least one student was in the habit of putting every new subject into ChatGPT, having it generate notes that he could refer to if called on. Often, I saw students getting funnelled toward AI use even when they hadn’t necessarily been looking for it. I got used to watching a student Google a subject (“key themes in Romeo and Juliet”), read the AI-generated answer that now appears atop most Google search results, click “Dive deeper in AI mode” – and suddenly be chatting with Gemini, Google’s chatbot, which was always ready to advertise its own capabilities. “Should I elaborate on one or more of these themes? Should I draft a first paragraph for an essay on the subject?”
