
Sorting algorithms


Today in animated explanations built using Claude: I've always been a fan of animated demonstrations of sorting algorithms so I decided to spin some up on my phone using Claude Artifacts, then added Python's timsort algorithm, then a feature to run them all at once. Here's the full sequence of prompts:

Interactive animated demos of the most common sorting algorithms

This gave me bubble sort, selection sort, insertion sort, merge sort, quick sort, and heap sort.

Add timsort, look up details in a clone of python/cpython from GitHub

Let's add Python's Timsort! Regular Claude chat can clone repos from GitHub these days. In the transcript you can see it clone the repo and then consult Objects/listsort.txt and Objects/listobject.c. (I should note that when I asked GPT-5.4 Thinking to review Claude's implementation it picked holes in it and said the code "is a simplified, Timsort-inspired adaptive mergesort".)
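For flavor, here's roughly what a "simplified, Timsort-inspired adaptive mergesort" looks like — a minimal Python sketch of my own, not the code Claude generated: find natural ascending runs, pad short runs out to a minimum length with insertion sort, then merge neighboring runs. CPython's real Timsort adds descending-run reversal, a merge stack with invariants, and galloping mode.

```python
MIN_RUN = 32

def _insertion_sort(a, lo, hi):
    """Sort a[lo:hi] in place with straight insertion sort."""
    for i in range(lo + 1, hi):
        x = a[i]
        j = i - 1
        while j >= lo and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x

def _merge(a, lo, mid, hi):
    """Merge the sorted slices a[lo:mid] and a[mid:hi] in place."""
    left = a[lo:mid]                 # copy the left run
    i, j, k = 0, mid, lo
    while i < len(left) and j < hi:
        if left[i] <= a[j]:
            a[k] = left[i]; i += 1
        else:
            a[k] = a[j]; j += 1
        k += 1
    a[k:k + len(left) - i] = left[i:]  # any leftover left-run elements

def simple_timsort(a):
    n = len(a)
    # 1. Carve the list into sorted runs of at least MIN_RUN elements.
    runs = []                        # list of (start, end) index pairs
    i = 0
    while i < n:
        j = i + 1
        while j < n and a[j - 1] <= a[j]:   # extend a natural ascending run
            j += 1
        end = min(max(j, i + MIN_RUN), n)   # boost short runs to MIN_RUN
        _insertion_sort(a, i, end)
        runs.append((i, end))
        i = end
    # 2. Merge neighboring runs pairwise until a single run remains.
    while len(runs) > 1:
        merged = []
        for k in range(0, len(runs) - 1, 2):
            lo, mid = runs[k]
            _, hi = runs[k + 1]
            _merge(a, lo, mid, hi)
            merged.append((lo, hi))
        if len(runs) % 2:
            merged.append(runs[-1])
        runs = merged
    return a
```

Like the real thing, this is O(n) on already-sorted input, because the whole list is detected as a single natural run and no merging is needed.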

I don't like the dark color scheme on the buttons, do better

Also add a "run all" button which shows smaller animated charts for every algorithm at once in a grid and runs them all at the same time

It came up with a color scheme I liked better, "do better" is a fun prompt, and now the "Run all" button produces this effect:

Animated sorting algorithm race visualization titled "All algorithms racing" with controls for SIZE (50) and SPEED (100), Stop and Shuffle buttons, and a "Back to single" button. A legend shows Comparing (pink), Swapping (orange), Pivot (red), and Sorted (purple) indicators. Seven algorithms race simultaneously in card panels: Bubble sort (Sorting… — Comparisons: 312, Swaps: 250), Selection sort (Sorting… — Comparisons: 550, Swaps: 12), Insertion sort (Sorting… — Comparisons: 295, Swaps: 266), Merge sort (#3 — Comparisons: 225, Swaps: 225), Quick sort (#2 — Comparisons: 212, Swaps: 103), Heap sort (Sorting… — Comparisons: 358, Swaps: 203), and Timsort (#1 — Comparisons: 215, Swaps: 332). Finished algorithms (Timsort, Quick sort, Merge sort) display fully sorted purple bar charts and are highlighted with purple borders.

Tags: algorithms, computer-science, javascript, sorting, ai, explorables, generative-ai, llms, claude, vibe-coding


Lessons Learned from Using Technology Devices to Improve Teaching and Schooling


A number of years ago, I was a member of a panel held at Mission High School in San Francisco. Software developers and designers of ed-tech products attended this panel discussion. The moderator asked each of us to state in 8 minutes “what hard lessons have you learned about education that you’d like to share with the ed-tech design community?”

My fellow panelists were two math teachers, one from Mission High School and the other a former teacher at Oakland High School, and three product designers (one from the Chan Zuckerberg Initiative, another from Desmos, and the lead designer for Khan Academy) who had been working in the ed-tech industry for years. In attendance were nearly 60 young (in their 20s and 30s) product designers, teachers, and ed-tech advocates.

Elizabeth Lin, then a designer for Khan Academy, had organized and moderated the panel. She began with a Kahoot quiz on Pokemon and Harry Potter. For the record, I knew none of the answers. I had never played Pokemon nor, as of that date, cracked a Harry Potter book.

When my turn came to speak, I looked around the room and saw immediately that I was the oldest person in the room. Here is what I said.

Many designers and school reformers believe that in old age, pessimism and cynicism go together. Not true.

As someone who has taught high school history for many years, led a school district, and researched the history of school reform, including the use of new technologies in classrooms over the past half-century, I am an oldster. But I am not a pessimist, naysayer, or cynic about improving public schools or about teachers making changes to help students learn. I am a tempered idealist who is cautiously optimistic about what U.S. public schools have done and still can do for children, the community, and the nation. Both my tempered idealism and cautious optimism have a lot to do with what I have learned over the decades about school reform, especially when it comes to technology. So here I offer a few lessons drawn from nearly a half-century of experiences in public schools.

LESSON 1: Teachers are central to all classroom learning

I have learned that no piece of software, portfolio of apps, or learning management system can replace teachers simply because teaching is a helping profession like medicine and psychotherapy. Helping professions are completely dependent upon interactions with patients, clients, and students for success. No improvement in physical or mental health or learning can occur without the active participation of the patient and client—and of course, the student.

Now, all of these helping professions have had new technologies applied to them. But if you believe, as I do, that teaching is anchored in a relationship between an adult and a student, then that relationship cannot be replaced by even the most well-designed software, efficient device, or virtual reality. There is something else that software designers often ignore or forget: teachers make policy every time they enter their classrooms and teach.

Once she closes her classroom door, the teacher decides what the lesson is going to be, what parts of top-down policies she will put into practice in the next hour, and which parts of a new software program she will use, if at all.

Designers are supposed to have empathy for users, that is, understand emotionally what it is like to teach a crowd of students five or more hours a day and know that teacher decisions determine what content and skills enter the classroom that day. Astute ed-tech designers understand that, for learning to occur, teachers must gain student trust and respect. Thus, teachers are not technicians who mechanically follow software directions. Teaching and learning occur because of the teacher’s expertise, smart use of high-tech tools, and the creation of a classroom culture for learning that students come to trust, respect and admire.

Of course, there are a lot of things about teaching that can be automated. Administrative tasks—like attendance and grade books—can be handled by apps. Reading and math skills and subject-area content can be learned online. But for thinking, problem solving, and decision-making that involve other people, collaboration, and interaction with teachers, software programs cannot replace teachers. The notion that they can is a rosy scenario that borders on fantasy.

LESSON 2: Access to digital tools is not the same as what happens in daily classroom activities.

In 1984, there were 125 students for each computer; now the ratio is 1:1. Because access to new technologies has spread across the nation’s school districts, too many pundits and promoters leap to the conclusion that all teachers integrate these digital tools into daily practice similarly and seamlessly. While the use of devices and software has surely gained entry into classrooms, anyone who regularly visits classrooms sees much variation among teachers using digital technologies.

Yes, most teachers have incorporated digital tools into daily practice but even those who have thoroughly integrated new technologies into their lessons reveal both change and stability in their teaching.

In 2016, I visited 41 elementary and secondary teachers in Silicon Valley who had a reputation for integrating technology into their daily lessons.

They were hard working, sharp teachers who used digital tools as easily as paper and pencil. Devices and software were in the background, not foreground. The lessons they taught were expertly arranged with a variety of student activities. These teachers had, indeed, made changes in creating playlists for students, pursuing problem-based units, and organizing the administrative tasks of teaching.

But I saw no fundamental or startling changes in the usual flow of a lesson. Teachers set lesson goals, designed varied activities, elicited student participation, varied their grouping of students, and assessed student understanding. None of that differed from earlier generations of experienced teachers. The many lessons I observed were teacher-directed and revealed continuity in how teachers have taught for decades. Again, both stability and change marked teaching with digital tools.

LESSON 3: Designers and entrepreneurs often overestimate their product’s power to make change and just as often underestimate the power of organizations to keep things as they are.

Consider the age-graded school. The age-graded school (e.g., K-5, K-8, 6-8, 9-12) solved the 20th-century problem of how to provide efficient schooling to move masses of children through public schools. Today, it is the dominant form of school organization.

Most Americans have gone to kindergarten at age 5, studied Egyptian mummies in the 6th grade, taken algebra in the 8th or 9th grade, and then left 12th grade with a diploma around age 18.

The age-graded school was an organizational innovation designed to replace the one-room schoolhouse in the mid-19th century—yes, I said 19th century or almost 200 years ago. That design shaped (and continues to shape) how teachers teach and students learn.

As an organization, the age-graded school distributes children and youth by age to school “grades.” It sends teachers into separate classrooms and prescribes a curriculum carved up into 36-week chunks for each grade. Teachers and students cover each chunk, assuming that all children will move uniformly through the 36 weeks and, after passing tests, be promoted to the next grade.

Now, the age-graded school dominates how public (and private) schools are organized. Even charter schools, unbeholden to district rules about how to organize a school, how teachers teach, and how students learn, are age-graded, as is the brand-new public high school on the Oracle campus, Design Tech High School.

LESSON 4: Ed Tech designers are trapped in a trilemma of their own making.

Three highly prized values clash. One is the desire for profit—building a product that schools buy and use. Another is to help teachers, students, and schools become more efficient and effective. And the third value is the strong belief that technology can solve educational problems.

Many venture capitalists, founders of start-ups, and designers of products (call them cheerleaders for high-tech innovations) cherish these conflicting values.

I’m not critical of these values. But when it comes to schools, product designers holding these values, in their search for profit and improvement, underestimate both the complexity of daily teaching and the influence of age-graded schools on teaching and learning. Those who see devices and software transforming today’s classrooms more often than not overestimate the power of their product while ignoring the influence of school structures upon what occurs between teachers and students.

I don’t believe that there are technical solutions to teaching, to running a school, or governing a district. Education is far too complex.

These are a few of the “hard” lessons that I have learned.




In Praise of Stupid Questions


Ask a silly question, get a silly answer. — Tom Lehrer, “New Math”

I ask too many questions. A case in point is the time I lost out on a place I wanted to rent when I asked my potential future landlord one question too many (“Does the pond have mosquitos in the summer?”). Another example is the time I ticked off a car salesman by asking him, after a long string of similar requests that he had gamely complied with, whether I could try changing the tires of a car I was considering buying from his dealership before I bought it. (I mean, shouldn’t every responsible consumer go through the entire owner’s manual when contemplating such a major purchase?) As a member of a local singing group, I developed such a reputation for asking the music director questions that when we went on a tour, one of my fellow singers got a laugh by interrupting the tour-guide with my signature line: “I have a question!”

But my topic today isn’t questions in general. I want to focus on the species of question that I often tell students doesn’t exist: the Stupid Question. And I want to talk about how one stupid question led me to an interesting and new (albeit a bit stupid) way to estimate the mathematical constant pi.

I tell my students “There are no stupid questions” because I want them to feel free to ask me things in the classroom without fear of ridicule from their peers, and without the kind of internalized shame that can disconnect students from their math abilities. But I’m lying, or at least over-simplifying, when I tell my students that stupid questions don’t exist; they (and you) all know that some questions are better than others. Some questions are based on incorrect assumptions, or are ambiguous, or are even meaningless. Back when I was in high school, one of my teachers complained I asked too many meaningless questions. You may be inclined to give my younger self the benefit of the doubt and to guess that the teacher wasn’t equipped to understand my questions, but I don’t think so; the class in question was part of an advanced six-week summer program called the Hampshire College Summer Studies in Mathematics program, and the teacher in question had a Ph.D. I don’t remember what questions I asked in that class, but I’m sure some of them were obscure, confused, or yes, meaningless.

I still sometimes ask questions that don’t make literal sense. And that’s okay, because I find that in my research, murky questions can be stepping stones on the path of learning. Sometimes it’s even where new math comes from. There’s an old saying “Ask a silly question, get a silly answer,” but I say: Scratch a silly question and you might find a better one struggling to get out.

One issue for me is how much scratching I have to do before sharing a question with others, and sometimes I annoy people by not doing enough scratching in advance. Often it’s because I don’t take enough time to think about my audience, and that’s my bad, but sometimes it’s the classic problem in communicating ideas: you don’t quite know how to share an idea with people-who-aren’t-you because you aren’t a person-who-isn’t-you.

Fortunately these days I have a very patient interlocutor named ChatGPT blessed with an infinite tolerance for half-baked questions and a soothing lack of judgmentality. The Greek philosopher Epictetus said “If you want to improve, be content to be thought foolish and stupid,” but the problem with putting this into action has always been that, while most people want to improve, nobody wants to reveal their ignorance. How lucky we are, twenty centuries after Epictetus, that we can hide our ignorance from our fellow humans and reveal it only to our creations!

Over the past few years, more and more of my research has benefited from conversations with ChatGPT, despite the occasional stupidity of my questions, and there’s no better example of this than my recent discovery of a new way to think about the number π/4.

PT, CHATGPT, AND PROBABILITY

One day last November I was at my local gym doing physical therapy, an activity I find tiresome because my regimen requires just enough of my mind to make it impossible for me to do any prolonged thinking. But a boring exercise routine is well-suited to holding hour-long conversations with an interlocutor that doesn’t usually respond right away. So for instance I can do a set of leg-lifts, do some mathematical day-dreaming during my pause between sets, do another set, dictate some mathematical thoughts to ChatGPT, do another set, and then see what ChatGPT came up with. “PT plus AI” takes more time than plain old PT, but it makes the time pass in a more interesting way, with regular infusions of suspense.

I like random processes, so I thought I’d learn something new in that area by picking a topic in the theory of probability, coming up with the simplest question on that topic that I didn’t know the answer to, and then asking it. The topic I chose was the tension between two facts well-known to probabilists: (1) if you repeatedly toss a coin, you can be certain that after some finite amount of time, the total number of tosses that came up heads will exceed the total number of tosses that came up tails, but (2) the amount of time it takes for this to happen, while always finite, is infinite on average.1

If that sounds like nonsense, it’s because you’re used to the world of random variables with thin tails and finite expected value, and unfamiliar with the strange world of random variables with fat tails and infinite expected value.2 Maybe someday I’ll write a Mathematical Enchantments essay about the paradoxes of fat-tailed random variables, but today I’m writing about questions, and “What is the expected amount of time it takes until the number of heads exceeds the number of tails?” wasn’t the question that I asked on that November day, because I already knew the answer to that one: infinity. I wanted to learn something new about this story, so I asked ChatGPT:

Toss a fair coin until the number of heads exceeds the number of tails. This determines a stopping time. What is the probability that this stopping time is even?

Speaking of stopping times, this is a good time for you to stop reading and do something I should’ve done but failed to do: play around with the question on your own for a minute or so to get a feeling for what’s being asked.

Do you see what’s wrong with my question?

.

.

.

It’s not hard to show that the number of tosses required until the number of heads first exceeds the number of tails is always odd, so the probability of the stopping time being even is zero! You might say my question is a possibility (or impossibility) question masquerading as a probability question.3

To see why it’s always an odd number, let Hn and Tn represent the number of heads and the number of tails respectively in the first n tosses, so that Hn + Tn = n. The rule “Stop when Hn > Tn for the first time” is equivalent to the rule “Stop when Hn − Tn is positive for the first time”, but Hn − Tn is always a whole number, and it always changes by ±1 when you toss (+1 each time you toss heads, −1 each time you toss tails), so an equivalent rule is “Stop when Hn − Tn = 1 for the first time.” Adding the equations Hn + Tn = n and Hn − Tn = 1 gives 2Hn = n + 1. Since 2Hn is even, n + 1 must be even, so n must be odd when Hn > Tn for the first time.

Even without doing the algebra, I could’ve caught my mistake if I’d just done a few examples. Then I would’ve seen that the stopping time can be 1, 3, 5, 7, etc. but never 2, 4, 6, 8, etc.
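The parity argument is also easy to check empirically. Here’s a small simulation of my own (the cap guards against the occasional marathon run, since the stopping time is finite with probability 1 but has infinite expectation):

```python
import random

def stopping_time(rng, cap=100_000):
    """Toss a fair coin until heads first strictly exceed tails.
    Returns the number of tosses used, or None if the cap is hit
    first (rare, but possible: the stopping time has no finite
    average, so an occasional run can be very long)."""
    heads = tails = 0
    for n in range(1, cap + 1):
        if rng.random() < 0.5:
            heads += 1
        else:
            tails += 1
        if heads > tails:
            return n
    return None

rng = random.Random(0)
times = [stopping_time(rng) for _ in range(500)]
finished = [t for t in times if t is not None]
```

Every value in `finished` comes out odd, exactly as the algebra predicts; any `None` entries are runs that outlived the cap.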

A human interlocutor pointing out my oversight, if they were in a bad mood, might have said “Why are you bothering me with questions you clearly don’t really care about, because if you did care, you would’ve spent TEN LOUSY SECONDS thinking about the meaning of what you’re asking? Is my time really worth less than yours? You may enjoy asking questions for its own sake, but I like thinking about GOOD questions, so: That’s it. I’m not answering any more of your questions. You and I are DONE.”

Luckily for me, ChatGPT is always eager—perhaps too eager—to keep conversations going. In this case it said:

There’s an ambiguity in the statement that we should resolve before computing anything:

and offered two interpretations it called A and B, asking me to pick one. Nary a word of reproach for asking a question that had two or three different interpretations and was trivial under all of them.

NOT STOPPING

How do you follow up on a blunder like that? I could’ve decided to stop conversing with ChatGPT and listen to a podcast instead, or maybe go into a different line of work, but I thought that if I asked a better question, I’d learn something new and non-trivial.

I suppose I could’ve asked “What is the probability that the stopping time is a multiple of 3?” and I would’ve learned something new to me; according to ChatGPT (when I asked it last week, not last November), the answer is exactly (1+√3−√2)/6. That’s kind of fun, and it leads someplace interesting, but it isn’t what I asked back in late 2025.

What I actually did was confirm that I want the coin-tossing process to stop when the number of heads first strictly exceeds the number of tails, and then I asked: if we stop tossing as soon as the proportion of heads becomes bigger than 1/2, what is the expected value of that proportion?

ChatGPT thought for just under 3 minutes and answered:

Nice follow-up. 🙂 Under interpretation B (stop the first time Hn > Tn), the expected proportion of heads at the stopping time turns out to be a very pretty constant: E[Hτ/τ] = π/4. Here’s a brisk derivation.

If you’d like, we can also look at higher moments (e.g. the variance of that proportion) or generalize to a biased coin.

Three features of ChatGPT’s response deserve attention. One is its use of friendliness and flattery; another is its tendency to anticipate a human’s next question and answer it; and a third is its habit of suggesting ways to keep the conversation going. Since November I’ve admonished ChatGPT to keep its responses businesslike and not to go down avenues I haven’t asked it to. I recognize the addictive potential of parasocial agents like ChatGPT and I’m determined to avoid it.

But the main thing that struck me about its response on that day was that the answer π/4 is, just as ChatGPT said, very pretty—indeed, suspiciously pretty. (Compare π/4 with the (1 + √3 − √2)/6 I mentioned above.) I wondered: if the answer is so pretty, wouldn’t I have already heard about it?
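Before taking the π/4 claim on faith, one can run a quick Monte Carlo check (my own sketch, not part of the conversation; the cap truncates rare marathon runs, which introduces a small downward bias):

```python
import math
import random

def heads_fraction(rng, cap=10_000):
    """Toss a fair coin until heads first strictly exceed tails;
    return the fraction of tosses that were heads. Runs that outlast
    `cap` tosses are truncated, like a frustrated student giving up."""
    heads = 0
    for n in range(1, cap + 1):
        if rng.random() < 0.5:
            heads += 1
        if 2 * heads > n:          # heads strictly exceed tails
            return heads / n
    return heads / cap

rng = random.Random(2024)
trials = 10_000
estimate = sum(heads_fraction(rng) for _ in range(trials)) / trials
```

With these settings the average lands near 0.785 ≈ π/4, supporting the pretty answer.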

I asked ChatGPT to check the published literature on random walk theory4 (a branch of math that features many problems like this), and it said that, while related formulas existed in books and journal articles, nobody had actually asked my question before, as far as it could tell.

I went ahead and read ChatGPT’s derivation of the answer π/4, and I couldn’t find any mistakes; it was a solid by-the-book argument that employed a method I’ve used myself, and have even taught to students in the past. It was the kind of thing that I could’ve done in an afternoon, but not in three minutes.

So I dug deeper and started showing the result to people, sometimes revealing the formula and sometimes asking “What’s the expected value?”, and while many people solved it or had ChatGPT solve it for them, nobody could recall having seen it before.

That’s when I realized I had something worth sharing, even if it was just a morsel and not a mathematical meal, and I wrote it up for publication and sent it to the American Mathematical Monthly.5

YOU CAN’T SPELL “STUPID” WITHOUT “PI”

Approximating pi to a few decimal places is a pointless thing to do, since we humans already know pi to gazillions of digits, and since only the first dozen or so are meaningful in the real world. But I say: if you’re going to do something pointless, you might as well do it in a fun way.

The usual way to waste one’s time approximating pi is called the Buffon needle experiment, invented by the 18th-century French scientist Georges-Louis Leclerc, Comte de Buffon, who founded the branch of mathematics called geometric probability theory. Leclerc showed that if you drop a needle of length L on a floor that’s divided into slats of width L, then the probability that the needle lies across a line separating two slats is 2/π. Later the Swiss astronomer Rudolf Wolf did an empirical test of Leclerc’s theorem by performing 5000 trials, 3175 of which resulted in the needle crossing a line, yielding the decent estimate π ≈ 3.1596.
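Leclerc’s 2/π law is easy to simulate — a sketch under the standard setup, with needle length equal to the slat width and the needle’s center-to-line distance and angle drawn uniformly:

```python
import math
import random

def buffon_estimate(trials, rng):
    """Drop a unit-length needle on a floor ruled with lines one unit
    apart; return the observed crossing frequency (theory: 2/π)."""
    crossings = 0
    for _ in range(trials):
        d = rng.uniform(0, 0.5)              # center-to-nearest-line distance
        theta = rng.uniform(0, math.pi / 2)  # acute angle with the lines
        if d <= 0.5 * math.sin(theta):       # needle reaches the line
            crossings += 1
    return crossings / trials

rng = random.Random(42)
freq = buffon_estimate(100_000, rng)
pi_est = 2 / freq
```

With 100,000 virtual needles the crossing frequency sits close to 2/π ≈ 0.6366, so `pi_est` comes out within a few hundredths of π.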

The occurrence of pi in Leclerc’s formula is not mysterious; it has its roots in the fact that the underlying probability distribution on the orientation of the needle must be rotationally symmetric, and once you start rotating things, pi has a natural tendency to pop up. The occurrence of pi in my coin-tossing formula has more obscure roots, and if you’re hoping I’ll provide an intuitive explanation, you’re out of luck. The baffling way pi creeps into statistics is the basis of an anecdote that appears at the start of Eugene Wigner’s famous essay The Unreasonable Effectiveness of Mathematics in the Natural Sciences:

There is a story about two friends, who were classmates in high school, talking about their jobs. One of them became a statistician and was working on population trends. He showed a reprint to his former classmate. The reprint started, as usual, with the Gaussian distribution and the statistician explained to his former classmate the meaning of the symbols for the actual population, for the average population, and so on. His classmate was a bit incredulous and was not quite sure whether the statistician was pulling his leg. “How can you know that?” was his query. “And what is this symbol here?” “Oh,” said the statistician, “this is pi.” “What is that?” “The ratio of the circumference of the circle to its diameter.” “Well, now you are pushing your joke too far,” said the classmate, “surely the population has nothing to do with the circumference of the circle.”

But even if the reason pi occurs is hard to explain, occur it does, and that means one could use the coin-toss procedure to estimate π by way of π/4, just as Leclerc’s experiment estimates π by way of 2/π. Hand out ten coins to ten students, have each student toss their coin until the number of heads they’ve seen is bigger than the number of tails, and have them record the fraction of their tosses that showed heads. If you average all the students’ fractions, you should get a rough approximation to π/4, right?

Mathematically, yes; practically, not really. The problem is that there’s a fifty percent chance that one of the ten students will have to do more than a hundred tosses and might give up in frustration. Increasing the number of students only makes this problem worse: if you tried this activity with a hundred students, say, there’s a good chance that one of them would have to do over ten thousand tosses. And who’d be willing to toss a coin that many times?

Mathematician and YouTuber Matt Parker would, and he did. And then he wondered what to do with all those coin-flips.

One of Parker’s passions is computing pi, as he discussed in his recent Gathering 4 Gardner talk “Update on Ridiculous Calculations of Pi” (I’ll post a link to the video of his talk when it becomes available). So when he heard about my new way of estimating pi, he had the idea of using his 10,000 coin flips to simulate a classroom in which the first student does the experiment using the first segment of Parker’s sequence, the second student uses the coin flips that come right after the flips the first student used, the third student uses the coin flips that come right after the flips the second student used, and so forth, until some unlucky student runs out of flips from Parker’s sequence. As things turned out, this unlucky student was the 63rd in the imaginary class, so Parker’s experimental estimate of pi/4 via coin-tossing ended up averaging just 62 fractions.

Here’s Matt Parker’s new video in which he tells his story (or here it will be, when it drops):

VISIT THIS PAGE AGAIN TOMORROW!

The resulting estimate of pi—around 3.2—gives us pi to only one decimal place, and hence may set a record for minimal bang per maximal buck, where “bang” means precision and “buck” means effort. But this pathetic performance is pretty much what the theory of probability predicts, namely, that if you want the first N digits of pi you’ll need to perform 10^(4N) coin-tosses. Parker’s experiment shows us this performance-level in the case N=1.

So, my method of estimating pi is a really bad way to get anything better than π ≈ 3. Leclerc’s needles are a bit better—10,000 needles should give you two digits of pi—but leaving aside the accuracy issue, I’d rather spend ten minutes tossing a coin than spend ten minutes dropping needles, especially since someone is going to have to pick up all those needles, and I guess it’s going to have to be me. And I might cut myself on one of them! Perhaps I should ballyhoo my approach to estimating pi as a contribution to the cause of Pi Day safety, and point out that my approach is free of the well-known hazards of shared needles.

THE MORAL(S)

I suppose that, on a practical level, a take-home for the practicing mathematician is that if you use ChatGPT, don’t trust it to generate valid proofs, and even when it finds a valid proof, don’t be so sure it’s a good proof. And whatever you do, don’t have ChatGPT create a bibliography for you.

But I think a deeper lesson is about the value of stupid questions. The way to find things out is to ask a lot of questions. Ask enough questions, and you’re likely to find a new answer: new to you, and once in a while, new to others. On the other hand, if you ask a whole lot of questions, some of them will be stupid. And that’s okay! We teachers need to be patient with our students, even if no teacher can ever hope to be as unfailingly patient as a Large Language Model.

The relationship between bad questions and good questions reminds me of an old story about brainstorming told by David Black in his essay Being Creative With a Bear and Honey. A team was trying to figure out a good way to get snow off power lines in winter, and someone facetiously suggested that they put pots of honey at the tops of the posts so that the local bears, climbing up the posts to get at the honey, would shake the posts and cause the power lines between them to shed their snow. Instead of abandoning the absurd idea, the brainstorming team discussed how you’d need to use helicopters when placing the honey pots atop the posts, and only later did someone point out that the downwash from the helicopter blades would do the job of getting the snow off the power lines—the bears could stay home. This is one of my favorite examples of how a bad idea can lead to a good idea, as long as you don’t stop.

Likewise, if you’ve got some sort of vague itch that causes you to ask a stupid question, don’t neglect the itch just because its initial expression was stupid. Follow that itch, and scratch that question! You may end up with a much better question.5

To join the Hacker News discussion of this article, visit
https://news.ycombinator.com/item?id=47356740

ENDNOTES

#1: It’s instructive to compare a protocol that has infinite expected stopping time with a protocol that has finite expected stopping time. An example of a protocol that has finite expected stopping time is “Toss until the first time the coin comes up heads”; you can write the expected stopping time as

(1/2) (1 toss) + (1/4) (2 tosses) + (1/8) (3 tosses) + (1/16) (4 tosses) + …,

because half of the time you stop after one toss, a quarter of the time you stop after two tosses, an eighth of the time you stop after three tosses, and so on; and

(1/2)(1) + (1/4)(2) + (1/8)(3) + (1/16)(4) + … = 2,

so on average the stopping time is 2. In contrast, we’ve been dealing with the protocol “Toss until the number of heads exceeds the number of tails”; in this case you can write the expected stopping time as

(1/2) (1 toss) + (1/8) (3 tosses) + (2/32) (5 tosses) + (5/128) (7 tosses) + …

Although the terms are getting smaller, they don’t get small very quickly, and the infinite sum diverges.
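The contrast is easy to see numerically (my own sketch; the second series uses the first-passage probabilities P(stop at toss 2k+1) = C_k/2^(2k+1), with C_k the Catalan numbers behind the 1/2, 1/8, 2/32, 5/128 coefficients above):

```python
# Partial sums of the two expected-stopping-time series. The geometric
# series converges to 2; the heads-exceed-tails series keeps growing
# without bound (roughly like the square root of the cutoff).

geometric = sum(n / 2**n for n in range(1, 60))   # converges to 2

p = 0.5            # p_k = Catalan(k) / 2^(2k+1), starting at k = 0
partial = []       # checkpoints of the running sum of (2k+1) * p_k
s = 0.0
for k in range(100_000):
    s += (2 * k + 1) * p
    if (k + 1) % 10_000 == 0:
        partial.append(s)
    p *= (2 * k + 1) / (2 * (k + 2))   # ratio p_{k+1} / p_k
```

The checkpoints in `partial` keep climbing past every bound instead of leveling off, which is the divergence the endnote describes.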

#2: This is the world of the St. Petersburg paradox (described in Jordan Ellenberg’s book How Not to Be Wrong) and the peculiar properties of the double-down-until-you-win gambling strategy (also called martingale betting).

#3: What I actually asked ChatGPT (though without the added emphasis) was: “Toss a fair coin until the number of heads equals or exceeds the number of tails. This determines a stopping time. What is the probability that this stopping time is even?” The insertion of “equals or” doesn’t impact the triviality of the question, but it changes the answer: now, instead of being always odd, the stopping time is always even! What’s more, the question is ambiguous: am I allowed to stop before I toss the coin at all, since at that moment the number of heads (zero) equals-or-exceeds the number of tails (also zero)? ChatGPT pointed out the ambiguity, and also pointed out that under both interpretations, the probability that I stop after an even number of tosses is 100%, aka 1.

#4. To see the connection between coin-tossing and random walk, imagine a drunkard walking along an east-west residential street who, whenever he arrives in front of a house, either proceeds to the next house to the east or the next house to the west, apparently choosing at random. The mathematics of the drunkard’s walk is identical to the mathematics of tossing coins, where the position of the drunkard at time n (assuming he starts at “house 0” at time 0) corresponds to the difference Hn − Tn. Every time you toss a coin, the cumulative number of heads minus the cumulative number of tails either goes up by 1 (when the coin comes up heads) or goes down by 1 (when the coin comes up tails), and there’s no way to predict which way it will go—just as there’s no way to predict, when the drunkard is at a particular house, whether his next stop will be the house to its east or the house to its west.
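The correspondence can be checked directly in a few lines of Python (the seed and walk length below are arbitrary choices of mine): generate one sequence of fair tosses, and the running heads-minus-tails count and the drunkard's position trace out the identical path.

```python
import random

random.seed(0)
tosses = [random.choice("HT") for _ in range(1000)]

# Coin-tossing view: cumulative heads minus cumulative tails after each toss.
heads = tails = 0
coin_path = []
for t in tosses:
    if t == "H":
        heads += 1
    else:
        tails += 1
    coin_path.append(heads - tails)

# Drunkard's-walk view: start at house 0, step +1 (east) on H, -1 (west) on T.
pos = 0
walk_path = []
for t in tosses:
    pos += 1 if t == "H" else -1
    walk_path.append(pos)

print(coin_path == walk_path)  # prints True: the two descriptions coincide
```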

#5: Here’s the part of the story I’m embarrassed about: instead of trying to find my own derivation, I went ahead and used ChatGPT’s derivation, lightly edited by me. This kept me from noticing that ChatGPT’s proof, while correct, was needlessly complicated. Fortunately other people found the proof that I suspect is the sweetest possible proof, or what Paul Erdős would have called the “proof from The Book” (for more about The Book, see my essays What Proof is Best? and Chess with the Devil); this short and sweet proof is the one that I give in the revised version of my write-up. I also trusted, and initially included, ChatGPT’s list of purportedly relevant references, most of which turned out to be either irrelevant or nonexistent. I won’t make that mistake again.

#6: An example of a truly excellent question—not mine, I hasten to say—is the question one of the referees for my submission to the Monthly proposed: “Why not also a short comment at least on the effect of a surplus of 2, for instance?” That is, what if we toss the coin even longer, and only stop when the number of heads is equal to 2 more than the number of tails? It turns out that for this modified version of my question, the expected proportion of heads at the stopping time is a different nice number: the natural logarithm of 2, aka ln 2! Better yet, if you stop tossing coins when the number of heads is equal to m more than the number of tails, for some arbitrary positive integer m, then it appears that whenever m is odd, the expected proportion of heads at the stopping time is of the form a + b π with a,b rational, and that whenever m is even, the expected proportion of heads at the stopping time is of the form a + b ln 2 with a,b rational. Or so ChatGPT tells me. (To be fair, it gives proofs; I just haven’t had time to read them.) ChatGPT also says that, if instead of looking at the ratio of heads-to-tosses, we look at the ratio of tails-to-heads, we get expected value 1 – ln 2. If instead we look at the ratio of heads-to-tails, we encounter the troublesome ratio 1/0 in the case where our first toss is heads, but: if we condition on the event that the first toss is tails, then the conditional expected value of the ratio of heads to tails is ln 2. Saith ChatGPT.
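Claims like the surplus-2 value of ln 2 are at least easy to spot-check by simulation. Here is a rough Monte Carlo sketch in Python; the trial count, cap, and seed are my choices, and the cap introduces a small bias because the stopping time has heavy tails, so this is a sanity check rather than a proof.

```python
import math
import random

# Stop tossing when heads - tails == 2, and average the proportion of heads
# at the stopping time.  Walks that haven't stopped within `cap` tosses are
# discarded, which slightly biases the estimate.
def surplus2_proportion(trials=20_000, cap=10_000, seed=1):
    rng = random.Random(seed)
    total, kept = 0.0, 0
    for _ in range(trials):
        heads = tosses = 0
        while 2 * heads - tosses != 2:      # i.e. heads - tails != 2
            if tosses >= cap:
                break
            heads += rng.getrandbits(1)     # 1 = heads, 0 = tails
            tosses += 1
        else:                               # loop ended without hitting the cap
            total += heads / tosses
            kept += 1
    return total / kept

print(surplus2_proportion(), math.log(2))  # the two should be close
```

With these settings the estimate should land within about a percent of ln 2 ≈ 0.693.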




"If AI is writing the work and AI is reading the work, do we even need to be there at all?" Education workers reveal a growing crisis on campus and off


Few spheres of public life have been more rapidly and thoroughly transformed by generative AI products than education, and few professions have been more dramatically upended than teachers and education workers. There’s a case to be made that the first major social transformation of the modern AI era was the mass diffusion of ChatGPT into classrooms, where students took to using it as an easy implement for cheating on homework.

This mass plagiarization crisis has only deepened and complicated since, leaving educators, administrators, and students to grapple with how to construct, enact, and enforce AI policies at school. I should know: My partner is a professor at a university, and dealing with students who use AI to cheat on assignments has become a core part of her job, and an endless source of frustration.

But cheating on coursework is only the tip of the iceberg. Universities have signed huge contracts with AI companies, which have been driving hard into the space, while K-12 public schools have adopted AI tools, sometimes disastrously. (Los Angeles Unified School District superintendent Alberto Carvalho’s home was raided as part of an FBI investigation into a multimillion-dollar deal with an educational chatbot developer that failed within months). Such deals, like the California State University system’s $17 million partnership with OpenAI, or Ohio State’s policy to mandate all students learn AI fluency, are top-down initiatives that have left many educators working in the classroom on the back foot. Teachers and students alike are being encouraged from all angles to adopt AI products, setting up new arenas for tension and conflict, and posing serious questions about the future of instruction.

AI has indeed flooded the profession, impacting tutelage, administrative work, counseling, testing, and beyond. And it’s already had serious ramifications for labor; as in many other trades, education jobs are being deskilled, degraded, and even lost outright to clients and bosses embracing AI systems. Librarians and tutors are watching as administrators and edtech companies embrace AI tools as a means of cutting their work hours. IT and HR professionals in the education space are competing with AI products on the market and speeding up work to match their output. Educators of every stripe worry that quality instruction and critical thinking skills are taking serious hits as AI provides an easy, if frequently incorrect, route to an answer. And ominously, critical AI programs are being cut just as universities turn to embrace chatbots. One instructor at the University of California at Irvine, Ricky Crano, wrote in to share his story of being laid off from a job organizing a series of seminars that examined the tech industry—just around the time the school was promoting its proprietary chatbot, ZotGPT.

Some educators are fighting back: The American Association of University Professors, a union representing academic workers, for instance, has called for faculty control over all AI decisions as a matter of policy, and AI has become a battleground in contract negotiations and campus life. Graduate student unions, librarians, and activists are organizing against administrations that have rushed to deploy AI.

I’ve heard stories like these, and many more. Last year, 404 Media ran a great roundup of stories told by teachers, and how they’re struggling with AI in the classroom. So with this, the fifth installment of AI Killed My Job, we’ll hear not just from teachers, lecturers, and instructors, but from education workers across the field—private tutors, student athlete coaches, librarians, HR employees, essay graders, and edtech workers—who have all had their jobs transformed by AI. These stories, some of which may sound familiar a few years into the AI boom, and many of which will not, help paint a fuller picture of just how the technology has already impacted some of our most crucial institutions.

If your job has been impacted by AI, and you would like to share your story as part of this project, please do so at AIkilledmyjob@pm.me. The next installments will aim to cover healthcare, journalism, and retail and service jobs.


Before we proceed, I want to share word of a project that might be of interest to readers. The Fund for Guaranteed Income is a nonprofit that researches and enacts, you guessed it, guaranteed income projects, and they’re working on “a support program for workers whose jobs or income have been affected by AI, designed with direct input from impacted workers,” FGI head Nick Salazar tells me. He asked if I might extend a call for participants with readers, and I’m happy to do so:

  • If AI has changed your work, you can share your story anonymously at aicommonsproject.org. Submissions directly shape what the program will look like, and anyone who shares will be first to know when it launches.

And now, AI Killed My Job: Educators.

This story was edited by .


I don’t have any reason to believe my employer will not replace me totally with AI

Tutor at a community college

I work at a community college as a tutor to students in ESL, English, and—more broadly—writing for other courses like major midyear essays for students in other classes or those working toward high school equivalency tests. My hours were halved after Trump DOE cuts, so like many other workers I was already in a precarious position.

It’s a fairly low-pay job that takes a lot of emotional labor, especially considering many of our students have not only been failed by the education system but are often literally hungry, might not have heat or AC, are refugees from war-torn countries, and/or are facing a constellation of other life challenges that make it very difficult to succeed. Many of my students survive on gig work and/or Amazon warehouse jobs. It didn’t surprise me when some of these first-year English students started using AI for their papers.

One particularly memorable student brought in her paper followed by the AI version written out paragraph to paragraph in her notebook. She wanted me to help her merge them. ChatGPT made a reference to a movie anecdote that doesn’t exist. This was a personal essay. This was early on—when the AI was so shitty, I sort of believed I could convince her of the value of her own story and voice—which I attempted to do for an hour. We really bonded, it felt like a win for the day, and then she had to go work at Amazon.

A few weeks later, she brought in an AI outline that made no sense and she could not explain which made brainstorming for her paper impossible. Her professor had given her an A [on the outline]. Shortly after, she brought in an AI-written paper that also made no sense and pulled from real sources, but ones that were not reputable and with references to quotes that did not exist. Other students had also begun to bring in oddly perfect personal essays.

The focus was frequently how to personalize them, to try to inject a tiny bit of who they are into the bland product AI had spat out. Sometimes I just couldn’t tell whether it was their own work or not.

I cannot express enough how important the human connection is at my job: these are people who rarely get the support they need and deserve.

That’s really at the core of the issue for me: This was one of my favorite jobs because I felt like I was doing a good thing, I saw students make wildly awesome improvements as writers over the months, and we built real relationships. I cannot express enough how important the human connection is at my job: these are people who rarely get the support they need and deserve. I’ve had students break down crying, I’ve talked one off the ledge. I spent a year decoding the insanity that is a green card application for another. One of the biggest barriers to success where I work is just signing in—basic tech literacy. If/when we switch to AI tutors, the lack of accessibility is one of many issues that is invisible to people who don’t do our job every day.

In the past when my students left, I had faith that they could succeed, that they’d really learned something—namely how to think critically and value their own story. We were able to work together because there was a basic foundation of trust. That trust is gone now, replaced by suspicion and frustration. It feels like my new job is how to help students cheat better.

At our latest professional development training, we were told that the college was piloting Khanmigo, an AI “learning assistant” (lol) for math. We were told to write down all of our fears about AI, on paper or in the chat, and then push them aside. We listed out the fear of job loss the most, followed by loss of critical thinking skills, privacy issues, feeding a machine that steals our ideas and churns out mediocrity…Many tutors went out of their way to include links to sources like MIT. They also pointed out that AI had yet to improve productivity or make a profit. Our supervisors literally did not address any of this. The message was clear: *AI is here to stay and we have to adapt.*

We were told not to question whether students are using AI, to in fact assume they are, and tutor them on how to better use it. The “use cases” my supervisor included had students choosing between different AI rewrites of passages for whichever one is better and why. We’re supposed to encourage them to think critically about what AI spurts out. We’re also supposed to pretend that this type of tutoring makes any sense when students can just ask for suggestions, click apply all, and get on with it—or, as we know many do—just drop the assignment prompt into AI, mix it up in a few different models, and then ask it to dumb itself down a bit to sound more like them.

I don’t have any reason to believe my employer will not replace me totally with AI even though my supervisor insists they won’t. I know a machine can technically do my job and that AI is already making my job obsolete considering students don’t have to write anymore. My college has chosen to hire numerous adjuncts part-time, limiting how many full-timers they have, and I am one of them. I ultimately ended up taking fewer hours than they offered and contacting some old freelance writing clients to spend more time away from there. It feels like rejecting them before they reject me. My supervisor is giving us the “option” to lead AI workshops and said to think about it. I know the right answer is to say yes. I won’t.

—Lauren Krouse

[We followed up with Krouse before publication. She told us she had quit.]

We’re expected to accept work that is clearly not the student’s as if it were

Professor

I’m a professor in the California State University system, which was recently profiled due to its stated desire to be the “largest AI-driven university in the world.” I want to talk about academic misconduct.

Academic misconduct is when students pass off work that they haven’t done as their own. It used to be “when students pass off other people’s work as their own,” but now, students simply plug exam questions into a chatbot and copy and paste what it spits back to them. While academic misconduct has always been a problem on campuses, before AI, marshaling your evidence and presenting it to the student would trigger a confession, which could then become an opportunity to teach.

Now, one of two things happens: either the student confesses but doesn’t change their behavior, or they double down and insist that it’s their work despite the evidence you present. Students are convinced that the AI cannot be detected, and they refuse to listen when you show them the tells, insisting that it was their own work. I had a student make reference to three issues that were extremely relevant to the exam question but so far outside the realm of what we studied that the only way they would know to talk about it is through extensive self-study that would be reflected in their post-exam recollection. When the student had no idea what their own exam was talking about, but they insisted that they had written the exam answer, I was left relatively nonplussed.

I’m new to the CSU, so I haven’t had occasion yet to send a student over to the formal investigatory process. But in general, my experience at other institutions suggests that without a confession, administrators are loath to impose any penalties for academic misconduct - and the mere fact of referral means that the student will be on guard against less formal sanctions (and in fairness, it would not be inappropriate to call those “retaliation,” so arguably the student is correct). But this means that when a student refuses to take responsibility for their work, there are usually simply no consequences whatsoever - and we’re expected to accept work that is clearly not the student’s as if it were.

There’s also the problem of mixed messages. While we as faculty are free to ban the use of AI from our classes, the university system is sending multiple messages that these are good and useful tools for students. Students are given a subscription to a bespoke ChatGPT bot for the university, and there are constantly workshops and continuing education sessions about “how to use AI for [this thing we have to do].” Combining the administration’s aggressively pro-AI stance with the easy availability of tools means that faculty protests usually fall on deaf ears - even after we show students the complete and utter uselessness of the tools for the purposes they want.

And our students are, frankly, primed to be the targets of AI flimflam. The CSU isn’t an open-admissions university but it is an access institution. This means that we admit students who are at-risk of failing out of higher ed, either because of lack of preparation, lack of resources, or lack of bandwidth due to work or care obligations. This means that a lot of our students struggle with the basic kinds of tasks that we assign them. The promise of an automatic task completer is deeply attractive, and the background and training of our students doesn’t really equip most of them to adequately assess the claims of the AI pitch.

A lot of our students struggle with the basic kinds of tasks that we assign them. The promise of an automatic task completer is deeply attractive, and the background and training of our students doesn’t really equip most of them to adequately assess the claims of the AI pitch.

I haven’t been fired and replaced by AI; we have a strong and militant union that is aggressively pushing back against the use of AI to replace faculty. But bargaining is concessions, and it’s not clear whether the administration will be willing to give ground on this issue given how strongly they’ve staked the institution’s future on it. The job has changed due to AI, and combined with all of the other assaults on American higher education, I don’t know if my career will remain on its current track long enough for me to get tenure.

—Anonymous

My university used WorkDay’s AI to streamline me out of a job

Adjunct professor and HR worker at a university

I worked in HR for a university, handling the paperwork for our adjunct professors. Contract hires. And I was an “HR Partner” in addition to my role as an office coordinator.

Anyway, I’m an adjunct myself. Or I was. I taught writing, one quarter per year. A one-credit class. Amounts to about $700. Before taxes.

I took a job at my alma mater, and when it faced financial turbulence, the university responded by laying off 40% of the full-time faculty. They told me I’d be extra busy now, what with all the new adjuncts coming in (cheap labor).

And I was.

The day that HR called me into the Dean’s office to lay me off, I asked them a simple question:

“Who is going to work with all of the adjuncts—the people I handle paperwork and onboarding for every quarter? The people I talk to every day, helping them sort out their classes, keys, syllabi, schedules, and miscellaneous concerns?”

“Oh-oh-oh, WorkDay will do that!” the HR rep told me with a smile. WorkDay is an AI platform for streamlining work. It certainly streamlined mine.

—Jason M. Thornberry

Refusing to use Copilot cost me my job

IT professional at a university

For the past 2 years I’d been working as an IT Professional II at TAMU AgriLife, an organization under Texas A&M University that conducts research and programs related to agriculture and life science. In November 2024, I was moved from the department I’d been working as an IT Professional at since March 2023 to a new department under a new manager. This new manager made it clear from the start that he was obsessed with generative AI, telling me that his department was the place to be if I wanted to learn how to use gen AI. I, however, have never been a fan of generative AI, and had been working my job well for 2 years without it. As soon as I arrive in this new department, this new manager tries to subtly push me into using ChatGPT, Copilot, and Grok to do my work of helping others with technology problems, but every time I was asked I politely declined.

Then, in late March 2025, I had an employee evaluation with this new manager. As soon as the evaluation begins he starts heavily trying to push me into using generative AI, saying it’s “the way of the future” and that “everyone who doesn’t use it will be left behind.” When I tell him I have no interest in using AI, he says that I “better start or I’ll have a hard time finding or keeping a job”. He then tells me he’s going to get me a Copilot license, and that I must take a training course to use Copilot. I tell him that even if I take a Copilot course, I won’t use it in my work and thus buying a Copilot license for me would be a waste of company money. The employee evaluation continues, and he keeps trying to pressure me into using AI, but I continue to decline.

Then, on the following Monday in early April, he sends me a message that he’s gotten me the Copilot license (despite me telling him not to) and tells me to pick a day to take Copilot training. I reiterate to him that I have no interest in using Copilot and try to continue doing my job, but he sends more messages to me in an attempt to try and make me take the Copilot lessons. That same day as I am working, one of my coworkers suddenly gets up from his desk and starts yelling excitedly to our manager about how he had just used generative AI to make a musical about “an impregnation ninja who fertilizes every woman on the planet.” My manager responds by laughing it off like it was nothing. As soon as I get the chance I have a private conversation with my manager to tell him that I have problems with my coworker using generative AI to make explicit musicals in the workplace, but this manager says that this coworker “has always been kind of a degenerate” and that “I’ve told him to stop, but the AI is a tool, so it’s up to him how to use it”...meaning this has happened before, and he’s done nothing about it.

The week continues until Friday, when after asking me again if I’ve signed up for the Copilot training and I reiterate that I haven’t, this manager sends me an email saying that if I don’t take the Copilot training I will either be disciplined or potentially terminated. At this point I’ve had enough, so I call Human Resources and the boss of our IT company and report everything I just told you to them, and that my manager is trying to force me to use generative AI but isn’t doing anything about another employee making explicit works during work hours. Unfortunately though, both the boss and HR try to split this into 2 separate issues; the boss says that he will talk to my manager about allowing a worker to get away with making explicit works at the office, but that this manager can make me use AI if he wants and if I don’t like it I should “choose between my morals or my job”; and HR says they will wait to take action until after the boss talks to my manager.

When I tell him I have no interest in using AI, he says that I “better start or I’ll have a hard time finding or keeping a job.”

The following Monday, during the second week of April, the boss of our company does have a private meeting with my manager, but I was never informed what was said at that meeting. I assume that the boss told my manager to back off from trying to force me to use AI, because this manager doesn’t mention the Copilot training courses at all for the next several weeks. Thus, I return to working my job and try to move past this whole fiasco.

Things settle down for the next few weeks, until April 23, 2025. On that day, this manager waits until everyone except for me and him are out of the office, and then suddenly asks if I took the Copilot training (after having not mentioned it in the past few weeks). When I say no, he tells me that the time for the Copilot training in April has already passed, and that because I missed the training I will have to either resign in 60 days or be terminated. I ask about taking the Copilot courses in May that were also being offered, but he doesn’t accept and reiterates that I have to choose resignation or termination. I decided to resign.

—Caleb Polansky

My students genuinely do not understand why they shouldn’t use AI

University Lecturer

I’m a lecturer in Psychology at a large private college in Dublin. At a recent meeting (zoom—of course!) our Data Analytics and Reporting manager asked what we all thought about getting AI to mark our students’ work, the things that couldn’t already be turned into auto-marked online MCQs etc, like long form essays. I pointed out that we are paid to mark the work we assign (it’s actually my least favourite part of the job, but that’s not the point) and asked if we could expect a reduction in pay as a result. I was told “We aren’t going to talk about that.” I should train the AIs to replace me as a marker but should not even be so bold as to wonder what effect that will have on my pay.

Obviously, students are using AI to write the assignments anyway. The idea that we can catch this kind of plagiarism effectively is pure fantasy. Increasingly my students genuinely do not understand why they should not use AI anyway... what is the point of ‘wasting’ days researching and writing an essay when the AI version will be as good or even better?

My question now is if AI is writing the work and AI is reading the work, do we even need to be there at all?

This whole profession never really recovered from powerpoint, this is just the nail in the coffin.

—Anonymous

The majority of the students learn nothing

Private computer science tutor

When the pandemic hit in the Spring of 2020, it was a catastrophe for students suddenly forced into remote learning, as professors were blindsided and desperately improvising… Begrudging acceptance of online education was indisputably bad for learning, but it did create demand for online tutors, and that has been my job since Covid.

The majority of students are behind on an assignment, and just want answers that will get them a decent grade. As a good tutor, my job was to try to redirect that desire toward actual learning, not just do their work for them. Then came the large language models. I got to watch some lower-performing students use them, and it was deeply concerning. They would type (or copy-paste) a description of the code they wanted into a coding environment, then accept whatever completion emerged so long as it compiled. If it did not, they would try again, with no ability to understand or correct the generated code. No learning took place.

I have seen far fewer students, as the casual, lazy ones who just wanted a B-level answer can now often get one from a coding LLM.

This past year I have seen far fewer students, as the casual, lazy ones who just wanted a B-level answer can now often get one from a coding LLM, since homework problems are well-represented in the LLMs’ training sets. They don’t understand (or care) that the point of their assignments is not to create more solutions to homework problems, but to teach them fundamentals of programming and computer science, and there is no one there to gently correct them. The less mercenary students still look for tutoring, and they are more fun to work with, but they are a minority, and unfortunately there are not enough of them.

—Sean

My colleagues have abandoned their values to board the AI hype-train

Librarian at an R1 research university

It is pretty understood in my department that we are opposed to AI in the research process but as soon as we leave our office suite, we are met by many AI advocates. I actually signed up for this substack after I had a disagreement with a professor while I was in the middle of teaching a session for one of their classes and felt like I needed to learn more detailed information on the way AI works. I told students that ChatGPT is not a search engine, and while I was technically wrong in that it has web browsing capability, I was correct in that it is terrible. Unfortunately for me in that moment I have integrity and can say things like “I don’t know that for sure so I will cede that to you and look into it more.”

Pettiness aside:

I think the change is coming fast. Every day there are new conference presentations and papers on why librarians should be using AI and how they can do it. Academic librarians are guilty of always trying to “prove our worth” and get on board with every new trend regardless of whether or not we should. And in the case of AI we absolutely should not. It goes against the core values of the profession as stated by the American Library Association: “Access, Equity, Intellectual Freedom and Privacy, Public Good, and Sustainability.” It violates all of the tenets of the ACRL Frameworks for Information Literacy. It is shocking to me how fast some of my colleagues have abandoned their stated values to get on board the AI hype-train. I get a bitter taste in my mouth every time I think about the ones that were giving land acknowledgements (maybe still are) and now champion AI.

Academic librarians are guilty of always trying to “prove our worth” and get on board with every new trend regardless of whether or not we should. And in the case of AI we absolutely should not.

At my university AI is being pushed from the top down. Leadership has openly stated that workers who do not use AI will get left behind. If there is organized resistance on campus, I haven’t found it outside my department. I know that there are many in the profession who are opposed, however.

I am not sure if I can say specifically that it is changing the way that patrons are experiencing the library. I would not be surprised if it was, though. I do believe it is changing the way that patrons perform research and increasing the likelihood that they will be satisfied with “good enough” or even just “well, it’s something.” I have heard students defend the position that chatbots have access to 80% of the internet and are gaining more every day. I don’t know where this belief comes from other than well crafted propaganda?

But I do want my last point to be this: I don’t blame people, especially students, for using these tools when they don’t know better. We live in a hell world with increasingly limited time for ourselves. ChatGPT and LLMs like it claim to offer them some of that time back. The way that the bots “talk” to them is with a sense of sureness and like the bot is their friend. It doesn’t offer critique of what the user does, it doesn’t challenge them. And while those things might feel comforting, it is cheating them of real learning.

Teaching is a relational process. Student and teacher should both learn from one another and with that comes friction. LLMs will do anything possible to eliminate that relational friction to maintain the comfort of users. So, what’s more appealing? The librarian telling you no, or the chatbot giving you all the “answers?”

—Anonymous

AI training programs are failing student athletes

Assistant Strength and Conditioning coach at an NCAA Division-3 University

I currently work as a part-time, hourly wage, no benefits, assistant strength and conditioning (S&C) coach at an NCAA Division-3 university. My hiring as a part-time assistant already represents a reduction in staffing, at least partially due to AI use. I struggle to get even adequate part-time hours, which may result in the future elimination of the position or my inability to keep it. This is largely due to direct and indirect effects of AI use.

We use a virtual training platform to deliver strength training programs to the university’s hundreds of student-athletes. This offers several ostensible positives. Ideally, we write training programs in the app that students can access on their smartphones, which includes short gif exercise demonstrations and allows them to enter training data. Streamlining training program delivery can allow us to spend more time coaching and interacting more meaningfully with athletes versus writing the program in Excel, emailing/printing it, and spending most of our contact time reminding athletes how to read it and what the exercises are. The app guidance can help students achieve the training on their own when away from our coaching for various unavoidable reasons.

The app also facilitates a lazier, labor-cutting approach with its range of semi-responsive AI training programs. Provide some demographic details, such as sport, primary competitive season, gender of athlete, beginner/advanced, and any specific equipment exclusions, and the AI will generate as mediocre a training program as one would expect from such broad inputs and limited knowledge of the actual humans and environment. In the typical AI use case, this is better than literally nothing, and it can receive human-intelligence tweaks to go from mediocre to adequate. There is no source information available about how the AI designs the program: what are the purported differences between sports, genders, beginner/advanced levels, etc. that it uses to program? A capable human coach would be responsible for answering these questions. I’ve made some comparisons between close options, say male/female, baseball/softball, or women’s/men’s lacrosse, and found either no differences or arbitrary changes that I would be unable to explain.

In reality, we do not use time saved by AI programming to spend more time coaching and interacting with athletes and colleagues. We do not actually see all athletes or teams. Several teams do not participate in S&C at all. These sports often have team accounts set up on the AI program, but don’t know about them or use them. Even if they did, the programs are so clearly inadequate that I can’t in good faith recommend that they use them without modification. Some athletes do S&C on their own individually, while some sport coaches handle it themselves without us. We’ve ceded this ground as a staff rather than use the time saved from programming to develop relationships with coaches and athletes who don’t inherently engage with us. This especially includes athletes and teams who aren’t traditionally enthusiastic about S&C: more women’s teams, endurance or more “niche” sports, and sports with chronically poor win-loss records.

I build my own training programs in the app and use the app only as the delivery platform. This improves the quality of my training programs, because I’m writing them for the actual humans in front of me, in our actual shared environment, considering their actual sport and academic schedules, instead of the AI’s estimations of those key factors. I feel that my relationship with the athletes is better: we talk about the training, I take their feedback and we work with it together, and we see each other’s invested effort. I try to communicate with sports coaches on our shared teams, to mixed results. Some appreciate the collaboration, and it has improved my work and deepened my relationship with the team. Others seem to wonder why I’m bothering them. A human-intelligence approach also increases my working hours so that I can actually get close to a full 20-hour week. If I followed the head coach’s example and only used prompt-and-tweak AI programming, I would have more like 5-10 hours of “floor time” (i.e., in the gym with athletes).

I have seen numerous instances of poor quality training due to our use of the AI programs. Here are a few significant examples:

  • The AI programs are automatically set up to change exercises every 4 weeks. One team changed exercises during the week of their conference championship semi-final game. Sore legs were had by all, as changing exercises is known to increase muscle soreness and the new exercises were more intense. They played the semi-final that weekend to a highly fatiguing overtime win and then lost in the final on the following weekend.

  • The AI calendar follows pre-established program pathways from one physical focus quality to the next (eg. muscle size, strength, power, etc.). This resulted in one team doing a maximum strength phase (heavy weights, slow speed, high fatigue) during the final weeks of their competitive season, unadjusted for their game schedule. Many athletes simply did not follow the program.

  • The AI only sets a single competitive season, so it’s immediately inappropriate to use for athletes who have two competitive seasons over a year. Some sports have a split season of both fall and spring competition, either equally weighted or with one slightly prioritized over the other. Athletes on at least one team did high-fatigue hypertrophy (muscle-gaining) training during their spring in-season phase of faster pace, lighter bodyweight, more readiness-dependent performance demands.

Athletes often no-show to sessions with the provided reason that they can do it on their own with the app. Coaches often cancel sessions for the same reason. Even if following an app was a direct substitute for in-person participation with qualified staff, this reduces our role to “programmers” instead of teachers. Of course, as a staff we aren’t even “programming” because the AI program is.

Cutting in-person time eliminates our ability to provide actual instruction in physical movement, develop relationships, and create a quality team training environment. We know that these factors are what actually improve training outcomes, create a rewarding athlete experience, and benefit life beyond immediate sport performance, not toiling away in isolation guided by an app. Session cancellations, reduced attendance, and low communication also reduce my enjoyment of the job, my working hours, and my income.

—Anonymous


AI is causing the most damage in student learning and skill development

Academic

I’m an academic. I work in the UK but I spent a decade in higher education in the US and was tenured before moving across the Atlantic. I didn’t have a lot of AI to deal with before my current role, obviously ChatGPT kind of started this trend.

Firstly, Meta AI has stolen every last thing I’ve ever published. Academic publishing doesn’t bring in a lot of money. My book royalties are 2-5% and my articles bring in nothing other than prestige (whatever that is). Academic journals, I should add, are very expensive to access, as you may already know. I see nothing from that, of course; it’s essentially free labor that my institution sort of expects me to complete, but only in vague terms. Certainly if I don’t publish, I may “perish,” to borrow an academic turn of phrase.

In the classroom, student attendance is sporadic at best. In many cases this is because students no longer “have to” show up to class to pick up the material. It’s required to be posted online (again, this could be made available for AI training; the posted material technically belongs to the institution), and they don’t really need to learn anything to do their final papers. Here is where I think AI is causing the most damage: student learning and skill development.

I begin every new class with a whole spiel about learning how to do research and how to communicate research findings. I try to reason with them that they need these skills; simply using AI means they fail themselves even if they manage to pass the class. Their future boss is not going to pay them a salary to input prompts into an AI and email or print off the output. It never fully gets through. Now we see university leaders, clueless as to how to fight back against the deskilling that has further undermined the concept of higher learning, setting up degree programs in “such and such with AI.” Buzzword-loaded plans are shared institutionally without anyone ever asking why. Ironically, one area where AI might do a sufficiently mediocre job is university management, perhaps turning those meetings that could have been an email into actual emails.

It’s aggravating. I can’t even imagine how bad this is going to get before it gets better. If it does.

—Andrew

Bosses are rushing to use AI to implement “unstaffed opening hours” at public libraries and to deskill school librarians

I’m a library worker and union organizer, working at a public library service in Melbourne Australia. My job at the library is running tech help workshops, but on the side I organize my workplace and organize with other union activists across public libraries in my state.

I thought you might be interested in one of the specific applications of AI in libraries. While this hasn’t led to any significant job loss yet, I think it points to the future of the sector. There are twin technological threats currently facing public libraries in Australia and around the world, and both seek to replace (unionised) library workers. This is on top of a culture war on libraries and library workers, with fascist transphobes attacking public libraries for running drag storytime events or even just having queer books in the collection.

In Melbourne there is a rush by bosses to implement unstaffed opening hours at public libraries. While this hasn’t led to a reduction in staffed opening hours yet, once the technology is introduced it can and will be used to replace staff hours as funding gets cut. In addition to the threat of unstaffed libraries, the introduction of an ‘AI’ chatbot to school libraries is directly threatening the jobs of skilled librarians. Called “Book Bot” by Huey, this chatbot housed in an iPad with cutesy trimmings replaces the job of a librarian in helping kids find appropriate books to read. The company is advertising it as a solution to underfunding and understaffing in schools.

Ironically this private company has received government funding to do this. While Huey’s Book Bot hasn’t been introduced to public libraries yet to my knowledge, taken with the technology of unstaffed library access there is a clear threat to public libraries and all of us who work in the sector.

—Taichen

[Editor’s note: Taichen also shared two briefing documents they’d put together with coworkers; one on AI chatbots in libraries, and another on unstaffed libraries, aimed at educating other library workers. I’m sharing them here.]

Teaching has become a bullshit job

Programming instructor at a community college

I now teach programming online at a community college where I spend most of my time trying to detect cheaters and fake accounts. There’s a whole racket around state and federal scholarships paying nonexistent students. AI facilitates the process of creating profiles and pretending to take classes.

Almost every faculty meeting is about training us teachers how to teach students to use AI rather than helping us teach students how to think and learn and write. Even teaching has become a bullshit job.

I may be able to turn it around for some of my students by showing them how they can build text adventure games and then automate the playing of those games with AI. But only the most disciplined students (and those without the time constraints of working bullshit jobs to pay off their education debt) will get much out of my course. An English teacher colleague was able to buck the trend and shame his students into writing thoughtful essays. But that takes a lot of effort and skill that isn’t being taught to overworked teachers.

Universities are worse. Much of the funding for social media and AI impact research in psychology, sociology, computer science, etc. comes from big tech, and most papers do not even critically question whether AI is reasoning at all, or the ethics and safety of teaching and using it. Lost in the noise are the authentic voices of Timnit Gebru, Melanie Mitchell, even Gary Marcus, and the few impactful researchers questioning the inevitability of AI as a tool for hyper-capitalism, fascism, and genocide. It’s like Hitler discovered the nuclear bomb before the US did. And now fascism is mainstream, almost unquestioned, inevitable. It’s not the power of AI so much as the power of technology to shape minds: the capture of all sources of media and information and art. It’s just 1984, exactly as Reagan dreamed and Orwell feared. Fiction and art and news are no longer consumed as warnings coming from authentic, smart human voices; they are just entertainment, brainwashing tools. And artists and teachers and workers have no alternative but to participate in the Ponzi scheme or starve.

—Anonymous

AI killed my job grading student essays

Grader

One of the first jobs I got out of college (2008, recession era) was grading student essays for standardized tests. Cool job, lots of retired teachers did it. We sat in a huge room for a few months, maybe 45-60 people, and scored every essay written by fifth graders in a state a few states away from where we were, based on pretty specific criteria. Just a temp job, to be clear.

Years later, this was a job I did during COVID, something that could be done remotely. But now the training, done online, only had 8-12 people in it, with some people flunking out of that training, and the work itself was scheduled only to last a few weeks. I learned that most of this essay grading was done by AI, and we were only getting the papers AI couldn’t quite handle.

—Brian Nicholson

The AI evangelists are tough to fight with

“Tech guy” working in a school system

I’m not a teacher, but I am a “tech guy” in a school system. We’ve taken a very slow and steady approach with AI, banning its use in all but very specific cases and requiring teachers to define how the tools will be used. But there are students who use it without any regard for the right or wrong of it.

But what has frustrated me most about it is the teachers pushing for more AI access. They want to use it 1) as an AI detector, and no amount of “that doesn’t work” has convinced them otherwise, or 2) to grade and summarize papers.

And it drives me crazy that when I ask, “does the student’s work differ significantly from their previous work?” they look at me like I have six heads. Like thinking about whether the student’s most recent paper reads like a “written-by-HR” ass document is a huge ask.

And as for point 2, there comes a time when I have to look at them and say “if the students are writing papers with AI, and you’re summarizing the papers with AI, then why are any of us in this building.”

To be clear, most of these folks are decent and only want the very minimal AI intrusion in their classrooms. But the few that are loudly in favor are driving me up a wall.

Like I said, most of the teachers (and students!) either want little to do with AI stuff, or just the barest minimum of streamlining a process they have difficulty with. And, for instance, our Special Ed folks are looking for uses of the tech that will help their students fill the needs they have. It’s noble for the most part, and I appreciate that they’re willing to listen to feedback and really talk through these things.

But the evangelists are tough to fight with. No amount of “these companies are hoovering up data” and “we have both legal and moral obligations to protect our students’ privacy” convinces them, even when we point out that they are not FERPA or COPPA compliant.

—B-rad

My team of education workers has been cut in half

Team manager at an edtech company

My current job is as a team manager at a small, private, education-adjacent tech company where we’ve seen traffic steadily decline because LLMs can do what we do (even if they are more expensive, less reliable, and wrong more often than not).

To try to find our place in this new world, over the last year, we’ve seen a transition away from our normal creative and fulfilling development work towards generating massive datasets that we’ve sold off as “training data” for LLMs to many of the big tech companies, under the guise of getting them better at math and reasoning.

This work is often mind numbing and demoralizing and especially demeaning to have creative programmers turn out rote, repetitive work like this. Worse than the generation is the QA sweep (that devs got looped in on) of manually looking over these massive datasets to do a quality pass.

Moreover, this project was sold to me as a deal so lucrative that it would set us up for the next few years and let us hire new developers, but once we finally got paid, it only just brought the budget back to zero. So it increasingly seems like this is the future of my work and this company.

We had one round of layoffs last year when these projects started that my team luckily avoided. However, I was just told that more layoffs are coming. I know several good developers who are looking for work elsewhere, but the tech industry as a whole seems to be on a bit of a hiring freeze, and many of us have health insurance needs or families to support that makes simply walking away a terrifying choice.

[We followed up with the contributor to see what’s changed since they wrote to us six months ago. They had this to say:]

My team has literally been cut in half since my last message. Some were let go, some were reassigned to different departments, and some quit, seeing the writing on the wall. One person was let go via email because they were having trouble with their PC; the company decided to end their contract on the spot rather than help them debug the technical issue.

My company, at least my corner of it, creates k-12 and collegiate math and science education material. My team helped create dynamic and visual aids, and homework-helping walkthroughs to solve problems in math and science. These are used some in classrooms, but mostly by students doing homework. A lot of my team are former teachers and educators who find this work engaging and satisfying - they feel they are still contributing to the education of the next generation, just in a more indirect way than teaching. My favorite feedback I hear from people in the wild when they find out where I work is some variation of “Thanks for getting me through high school math!”

Gen AI has not replaced this work, but because sites like ChatGPT can do most of what we can do without manual development work, our internal priorities have shifted away from creating these programs and towards other efforts. The few of us who remain are instead tasked with figuring out how best to integrate LLM technology into our existing tools and functions. This involves building the runway as the plane is taking off: creating the tools we need to use as we are using them.

Multiple people have had their future contracts tied to specific LLM-related projects, and I’ve been told, in not so many words, that if the project fails, these people are gone. But without any clear roadmap or direction, let alone documentation for how to do such integration, I feel like they are being set up to fail. The deadlines are fast approaching, and hopefully what we’ve cobbled together will be acceptable, but even if it is, these will still be LLM-powered features, with the same inaccuracy and inconsistency problems that plague all LLM projects. It would be embarrassing to release something that can be so incorrect at times when this company is known for mathematical accuracy. It’d be like if your pocket calculator occasionally returned 2+2=5.

—Anonymous

Gen AI Edtech platforms “lovebombed” my client and then my contract was up

Edtech contractor

The last two years have been hell because working in tech education, you are fighting to make [clients] understand the risks and harms, and they all think they can just use a gen ai LMS to make the trainings that you make. We are increasingly seen as disposable, especially as women. I notice men in tech aren’t losing money making training, but we are. But that’s another story.

I had a contract up until this month to make IT trainings for an educational setting. I lost the role because they got love-bombed, basically, by two large gen AI edtech platforms: all these promises of productivity, ease of content creation, etc. It won’t work out for them long term, but they don’t see that. Anyway, part of the sales pitch was how the gen AI can make training on policy and other tech areas, which was my job. They are so sold on this that they ended my six-month probation with “we don’t see the trial as working.” I made incredible material for them, and worked in small groups and individually with staff on how to use the IT. Some of them had never used Word or Outlook before. They will now have terrible mandatory training too, full of errors and stolen work, but it will be generated in minutes. No one will proofread it or think, like I do, about the language and the accessibility. They just want easy, quick work. And the saddest thing is they won’t even save money, as they are paying the Twitter and TikTok influencers who work with these platforms TWICE what they paid me in one month to come and “train staff.”

—Michelle

The hiring committee required applicants who would incorporate Copilot into their workflows. I didn’t get the job.

Teaching Fellow

I applied for a role as an “Ethics and Regulatory Coordinator” at the University of Auckland a few weeks back. The role seemed to require the applicant to act as a go-between for researchers making applications, committee members deciding what kind of research they’ll allow, and the university bureaucracy itself. The detailed job description includes a point about applicants being familiar with Microsoft Office, including Copilot. As a final piece of background, ethics applications at UoA have been taking a while, and some researchers have been frustrated with long wait times and inconsistent feedback from committees, while the committee is apparently sick of dealing with poor-quality research applications that require a lot of remedial work.

At the job interview, I was asked about my familiarity with using Copilot to create efficiency solutions in Office. I gave a measured answer: I noted the usefulness of AI tools for summarizing spreadsheets, creating templates, etc., but said I didn’t trust Microsoft’s claims about data being kept separate, and that I didn’t think we should use LLMs in decision-making or communications (email summaries and responses, etc.) for research ethics.

It took an unusually long time to hear back about their hiring decision; I had to email the relevant HR person to ask if something had happened. When I did get a call, I was told that they appreciated my experience with research design and ethics, but they needed someone who was comfortable incorporating Copilot into the ethics process and experienced with doing so.

—Benjamin Richardson

