
Cheating at Tetris


Let’s imagine you have a friend who’s very good at Tetris. You’ve challenged them to a game with a twist: you get to pick which pieces they have to play. You agree that if your friend can survive 100,000 blocks then you’ll declare them the winner; but if they get game over before this, you win the challenge. So what do you do?

Tetris is a puzzle game of falling blocks that you can rotate and move as they fall. Forming complete horizontal lines clears all the squares in that line, and the aim is to prevent the blocks from stacking all the way to the top of the playfield, which results in game over.

The playfield is 10 cells wide and 20 cells tall and the game is played with seven different blocks, called one-sided tetrominoes, that are each made up of four squares. They are labelled I, J, L, O, S, T and Z because they look a bit like these letters.

There are many different versions of Tetris but, for the challenge against your friend, you will be playing a very simple version where you can pick any block you like, regardless of what has been picked previously, and the speed of the blocks falling stays the same through the whole game.

How to lose

First let’s look at the worst strategy you could go with: choosing the same block 100,000 times.

It’s easy to see that the I, J, L and O blocks each fit very neatly with copies of themselves and can always be arranged to completely clear the playfield. The playfield will be cleared after only ten I blocks, ten J blocks, ten L blocks or five O blocks have been placed.

Ten $J$ blocks…

…or ten $L$ blocks fit together nicely then all disappear.

Ten $I$ blocks also fit together nicely then all disappear.

For the S, T and Z blocks it’s a little less obvious. All three follow the same idea, so let’s look at how it works for the S block:

You can see that after a number of block placements it loops back to a previous step, though never back to the start. The stack can always be kept between two and four cells tall, so S blocks can be played indefinitely without resulting in game over, despite never clearing the playfield.

So, regardless of the block you choose, it can always be played as many times as you like without resulting in game over. Since your friend is the best Tetris player you know, this strategy will pretty certainly result in you losing the challenge.

How to win

Let’s instead consider a combination of S and Z blocks. S blocks fit nicely with other S blocks, and Z blocks fit nicely with other Z blocks; but S and Z blocks don’t fit well together.

Two $S$ blocks or two $Z$ blocks fit together nicely…

…but an $S$ block and a $Z$ block don’t.

Because of this, if you choose an alternating sequence of S and Z blocks, the best way for your friend to avoid game over is to construct columns of only S blocks and columns of only Z blocks. The playfield is 10 cells wide and each column will be two cells wide, so there will be five columns, which is odd, resulting in either three S columns and two Z columns, or two S columns and three Z columns.

Since the S and Z blocks are simply reflections of each other, consider (without loss of generality) the scenario with three S columns and two Z columns. This will result in the Z columns growing faster than the S columns and eventually both Z columns will reach the top of the playfield. At this point, to avoid game over your friend must place a Z block in an S column, creating a one-cell-wide, two-cell-high ‘hole’.

Eventually, your friend is forced to play a $Z$ block in an $S$ column

Filling these holes causes issues later on in the game since there are a finite number of times they can be filled without creating new holes elsewhere. So new holes will continue to form and your friend can only create so many gaps before they reach game over.

But will this happen within 100,000 blocks?

How many blocks?

A more in-depth explanation can be found in Heidi Burgiel’s paper How to lose at Tetris (1997), but the most important parts are the following facts that were proved:

  • The maximum number of alternating S and Z blocks that can be placed before a hole must appear is 240 blocks.
  • An S or Z block can be used to fill existing holes at most 120 times, each time removing two holes.
  • The maximum number of holes the board can contain without resulting in game over is 50 holes, which is 10 holes per column.

By calculating the total number of holes that can possibly appear on the board before game over, including the ones that get filled in, and then multiplying by the number of block placements required for each hole to appear, we get an upper bound for how many blocks will force game over. The number of holes that can appear is $(120\times2)+50=290$, so game over must occur within $290\times240=69600$ alternating S and Z blocks.
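As a sanity check, here is that upper-bound arithmetic written out as a tiny Python snippet; the constants are simply the three facts quoted above from Burgiel’s paper.

# Upper bound on how long a game of alternating S and Z blocks can last,
# using the three facts from Burgiel (1997) quoted above.
blocks_per_hole = 240        # max placements before a new hole must appear
filled_holes = 120 * 2       # each of the 120 fills removes two holes
holes_on_board = 50          # 10 holes per column across the 5 columns
total_holes = filled_holes + holes_on_board  # 290
upper_bound = total_holes * blocks_per_hole  # 69,600

print(total_holes, upper_bound)  # prints: 290 69600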

Since this is well within 100,000 blocks, you now have your winning strategy. Choose only alternating S and Z blocks and even if your friend is the Tetris world champion you’ll be sure to beat them!

Infinite Tetris?

What would happen if you were to pick blocks at random instead?

In the finite game that you and your friend are playing, the probability of a sequence of 69,600 alternating S and Z blocks occurring at any point within 100,000 blocks is incredibly small, but not zero.

Despite being very unlikely to happen during your challenge, the game-ending sequence of S and Z blocks is guaranteed to occur at some point in a long enough game of Tetris. Because of this, every game of Tetris must eventually end.

Another way to think about this is in terms of the infinite monkey theorem, which states that a monkey randomly hitting keys on a typewriter for an infinite amount of time will eventually end up typing the entire works of Shakespeare. Instead, imagine that the typewriter only has the letters I, J, L, O, S, T and Z, and the monkey hitting the keys chooses the next block in the Tetris game. Given an infinite amount of time, the monkey will eventually end up typing the sequence of 69,600 alternating Ss and Zs, ending the Tetris game it was choosing blocks for.

Back to reality

In regular Tetris games, the speed at which the blocks fall steadily increases. This causes players to make more and more mistakes and, just like the holes in the S and Z columns, these mistakes will stack up until game over is inevitable. Ultimately, small mistakes due to the time constraints will likely be what causes your friend to lose the challenge, so you probably won’t have to wait 69,600 blocks to beat your friend.

But maybe next time pick someone who isn’t as good at Tetris!

The post Cheating at Tetris appeared first on Chalkdust.


Infinite Patience Is Not Good for Education


Subscriptions are very much appreciated as they are necessary for me to continue this enterprise.

One of the favorite selling points of tech industry figures when it comes to the intersection of AI and education is the capacity for the creation of an on-demand tutor that is “infinitely patient.”

This is Sal Khan’s promise in Brave New Words: How AI Will Revolutionize Education (and Why That’s a Good Thing), where he takes inspiration from the novel Ender’s Game (a famously positive story about the fates of children) and describes a world where students have “a personalized tutor in every subject.”

In his manifesto, “Why AI Will Save the World”, tech investor Marc Andreessen puts it this way:

Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful. The AI tutor will be by each child’s side every step of their development, helping them maximize their potential with the machine version of infinite love.

In my review of Khan’s book I called his views “unserious” because he does not seem to understand much of anything about teaching and learning, skipping several steps on his way to his AI-Aristotle future. This idea that somehow infinite patience is a positive quality in a teacher is perhaps the most telling sign that these people simply do not know how learning works.

Some of my most important formative educational experiences involved some teacher or authority figure losing patience with me. I will never forget the moment that my 8th grade language arts teacher Mrs. Thompson told me to “cut the shit” when I had quarter-assed a writing task. This is the same teacher who also read a short story of mine and declared that “she’d never read anything like it.” I think this was praise, or at least I took it that way, so when Mrs. Thompson said the obvious about my lackluster effort, I felt bad. I’d disappointed her.

Humans learn in communion with other humans. Maybe this was not true for Sal Khan who is quite obviously smarter and more driven than most other humans, but it is true for the vast majority of us.

In a recent article at Chalkbeat, Sal Khan admits, with some mix of surprise and regret, that the very technology he said was going to revolutionize education was a “non-event” for most students. “They just didn’t use it much.”

No kidding. Back when Sal Khan was publishing Brave New Words, lots of people were saying this was going to be the case: me, Audrey Watters, and many many others. As Watters says, “‘AI tutors’ are not the future; they’re the past.”

Please enjoy this mashup of a BBC report from 1965 on “learning machines changing education” and Khan’s demonstration of what it’s like to work with the Khanmigo chatbot.

In the Chalkbeat post-mortem, Khan Academy’s chief learning officer Kristen DiCerbo suggests that the Khanmigo failure is a user problem: “Students aren’t great at asking questions well.”

My experience is that students are great at asking questions, which is not the same thing as asking questions of a tutor bot.

Back in 2024 I “debated” some dude from the American Enterprise Institute on the potential of chatbot tutors like Khanmigo. I took the skeptic’s side and argued that the number one reason this technology would not revolutionize education is because it did nothing in terms of the chief challenge of education: engagement.

I was right, AEI dude was wrong. Sal Khan was wrong. Bill Gates, Laurene Powell Jobs, Arne Duncan, Angela Duckworth, Tony Blair, and Francis Ford Coppola (seriously) - all of whom blurbed Brave New Words declaring it very serious indeed - were wrong.

They have always been wrong. They were wrong before they got started and yet hundreds of millions of dollars have gone towards a project that was doomed from the outset, dollars that could have - at least in theory - gone to, I don’t know, human beings who teach.

Over the years I have tried to extend good faith toward these projects even as I was skeptical. I have spoken at length to people inside these projects and the granting organizations that fund them, and it is clear they are well-meaning, not “grifters” out to profit at the expense of others.

But, like Mrs. Thompson telling me to cut the shit, I am out of patience. It’s time to stop people like Sal Khan and projects like Khanmigo from sucking up so much money and oxygen when it comes to our systems of education.

Sadly, this is not happening. Not unless we make it happen.

We’re up against billionaires, world leaders, thought leaders, politicians, and one of the greatest film directors of all time. That’s a tough set of opponents, but at least we have the benefit of being correct.


What do you do when your AI-schools revolution fizzles? Slink off to lick your wounds and figure out something else to do with your life? Go back to the drawing board at a root level to better understand where you went wrong and return with something more deeply considered?

Not Sal Khan. You pivot to proposing a “disruption” of higher education.

Welcome to the Khan TED Institute, a joint project between Khan Academy, TED (as in talks) and ETS (Educational Testing Service). For $10,000 they are proposing to create a bachelor’s degree in “Applied AI” delivered online in two years.

I aired my doubts and grievances over this project at Inside Higher Ed, in a column where I call the project what it is: bullshit.

But it is bullshit backed not just by these three wealthy and powerful nonprofits, but also the institute’s “corporate thought partners” including Google, Microsoft, McKinsey, and Bain (among others).

Lest my feelings about this project be unclear, this is how I put them in the column:

These people are my enemies. I have only ill will for this project and wish them failure, because this vision for a future of postsecondary education is a recipe for mass immiseration and public disempowerment. Imagine a world where Microsoft, Google, McKinsey, et al … get to determine what and how you learn from cradle to retirement.

Anyone involved in higher education, particularly public higher education or private higher ed where your institution is not insulated by wealth and privilege, should also view this project as a direct assault on their continued existence. The higher education sector and those who have historically been responsible for it (government, voters, etc. …) should pause and reflect on how what’s happened to the sector has made it potentially vulnerable to this sort of program, but we also must set recriminations aside and deal with the threat directly.

Again, I don’t think Sal Khan is a bad person. He is not evil. I wish him nothing but health and happiness in his day-to-day existence. But we should wish him nothing but ill will on this project, because its success will mean a society where individuals are essentially indentured to these corporate thought leaders. It means distorting education into a shape that pleases tech industry giants and consultants. That our society already operates this way to some extent is not to our credit.

That a group of people think this is a positive direction for our collective future is mind boggling.

The good news is that Sal Khan has failed in every one of his revolutions. He gives good TED Talk, but he doesn’t seem to understand or care for anything about pedagogy. As one writer says in his obituary for chatbot tutors, “Given that Sal Khan has tried unsuccessfully for nearly two decades to abstract humans away from human systems—first with human explanation, then with human evaluation, and most recently with human tutoring—it seems unlikely that he is the right person now to pivot edtech towards humanity.”

Still, I wouldn’t mind seeing the Khan TED Institute ethered from the get-go rather than having to engage in the same I-told-you-sos a couple of years from now.

Khan made it to 60 Minutes twice, 12 years apart, each time talking about a revolution. He was wrong both times.

How many times does someone get to be wrong before we stop listening to them?

Why do we have infinite patience for this man?


Links

At the Chicago Tribune, I tried the LitRPG genre and while I think I get what others get, I don’t get it.

In another piece, I contrasted the end of Hampshire College with the birth of the Khan TED Institute and what that says about the world today.

At the New York Times, Colson Whitehead entertainingly puts the boot to using AI to write for you. (Gift link)

A doctor who was an early adopter of AI scribes stopped using them when he realized how they were distorting his practice.

The 2026 Guggenheim Fellows were announced, including lots of writers. I always get a little envious of these things and then I remember that you actually have to apply, which in my case is never going to happen.

Via my friends, “In Our Glorious AI Future, There Will Be No Such Thing as Money (For You)” by Andrew Singleton.


Recommendations

1. Fintech Dystopia by Hillary J. Allen
2. The Road: Stories, Journalism, Essays by Vasily Grossman
3. Day of the Oprichnik by Vladimir Sorokin
4. The Mountain in the Sea by Ray Nayler
5. Carthage: A New History by Eve MacDonald

Sean H. - New York City

A bit of a tough one for me since I don’t know these books, so I’m falling back on my biblioracling gift and letting the spirits guide me: The Thousand Autumns of Jacob de Zoet by David Mitchell.

I’ve got a bit of a backlog, but I also have a combo work/pleasure trip next week, so I may do an all-recommendations newsletter to catch up; don’t hesitate to ask.

Request a reading recommendation.

I’ll be at a conference of language arts teachers for the province of Alberta next week in Banff, which is very exciting on multiple fronts.

If anyone has Banff-related travel tips, please share them in the comments.


See you, in some form, next week.

JW
The Biblioracle






Defending Our Consciousness Against the Algorithms


Why it’s good to be bored

The post Defending Our Consciousness Against the Algorithms appeared first on Nautilus.




Taking down my site on purpose:


If you have multiple computers, you'll quickly run into the problem of having data on one but needing it on the other. Because of this, people have been connecting them together since the beginning.

However, this created a classic problem: Each network used its own addressing scheme, wire protocol, headers, etc, etc... If you wanted to get a file between two networks, you had to find a machine that was connected to both and manually forward it.

To automatically route data between networks, we had to agree on a universal numbering scheme for computers. During the 1980s, people settled on the 32-bit "IPv4" address.

Here's my server's address (split into bytes):

65.109.172.162

Back then, computers were massive and extremely expensive, so 32 bits was plenty: After all, there's no way there would ever be billions of computers in the world...

There was enough margin to assign addresses roughly geographically and in power-of-2-sized blocks. This allowed the internet to be scalable because each router doesn't have to know the exact details of every computer:

When your ISP's network sees my address, it doesn't have to know what specific computer 65.109.172.162 is, just that everything starting with 65.109 should be sent to Finland.
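To make the routing idea concrete, here is a minimal sketch of prefix matching using Python's ipaddress module. The two-entry routing table is invented for illustration; real routers carry far larger tables and match on many prefix lengths.

import ipaddress

# Toy routing table: prefix -> where to forward. These entries are
# illustrative assumptions, not anyone's real configuration.
routes = {
    ipaddress.ip_network("65.109.0.0/16"): "hand off towards Finland",
    ipaddress.ip_network("0.0.0.0/0"): "default route (send upstream)",
}

def next_hop(addr):
    ip = ipaddress.ip_address(addr)
    # Longest-prefix match: the most specific prefix containing the address wins.
    matching = [net for net in routes if ip in net]
    best = max(matching, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("65.109.172.162"))  # hand off towards Finland
print(next_hop("8.8.8.8"))         # default route (send upstream)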

Allocating addresses in big blocks does mean, though, that we can't even use the full 32 bits of address space.

We ran out around 2011.

To keep the internet working, people started hiding multiple computers behind a single address. Odds are, every single machine on your home network has to share a single IPv4 address using a rather complex "NAT" setup.
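Here is a deliberately simplified sketch of the bookkeeping a NAT box does, just to show how many internal machines can share one public address; real NAT also tracks protocols, connection state and timeouts, and the addresses below are documentation/example ranges rather than real ones.

# Toy NAT table: (internal ip, internal port) -> public port.
nat_table = {}
next_public_port = 40000
PUBLIC_ADDRESS = "203.0.113.7"   # the one address the whole household shares

def outbound(internal_ip, internal_port):
    # Rewrite an outgoing connection so it appears to come from the shared address.
    global next_public_port
    key = (internal_ip, internal_port)
    if key not in nat_table:
        nat_table[key] = next_public_port
        next_public_port += 1
    return PUBLIC_ADDRESS, nat_table[key]

def inbound(public_port):
    # Work out which internal machine a reply on this public port belongs to.
    for key, port in nat_table.items():
        if port == public_port:
            return key
    return None   # no mapping: unsolicited traffic gets dropped

print(outbound("192.168.1.10", 51515))  # ('203.0.113.7', 40000)
print(outbound("192.168.1.23", 51515))  # ('203.0.113.7', 40001)
print(inbound(40001))                   # ('192.168.1.23', 51515)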

... but even this is not enough.

Recently, ISPs have started putting multiple customers behind a single address. This obviously creates problems if you want to host services from home (a website, multiplayer games, etc), but is also a problem for normie activities: It's common to get punished by a website for something you didn't do.

If you've ever seen a "You've been blocked" message, gotten a Captcha every time on a specific site, or simply had it mysteriously refuse to load... there's a good chance this is what's happened.

Someone who had the address before you was either doing something bad — or more likely — got hacked and was used as an unwitting proxy for a criminal's traffic.

Blocking by IP address is the one effective way to deal with bad actors on the internet: It's the only way to block a particular person without requiring everyone to make an account.

The solution is quite simple:

If we've run out of addresses that fit in 32-bits... just use longer ones. This was first standardized all the way back in 1995 with IPv6 and 128-bit addresses: four times as long as IPv4. Here's how many unique addresses that allows:

340,282,366,920,938,463,463,374,607,431,768,211,456

That's quite a bit. Here's mine:

2a01:04f9:c011:841a:0000:0000:0000:0001

These larger addresses also have a lot of other benefits: There's less need for virtual hosting, the address hierarchy can be cleaner, and it's possible to put MAC addresses in a /64 block for stateless autoconfiguration.
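For the curious, here is a small sketch of the classic EUI-64 construction behind stateless autoconfiguration: the 48-bit MAC address is split in two, ff:fe is inserted in the middle, and the universal/local bit is flipped to form a 64-bit interface identifier that is appended to the router-advertised /64 prefix. The MAC address and prefix below are made-up examples, and modern systems often use randomized "privacy" interface IDs instead, but the /64 split works the same way.

import ipaddress

def eui64_interface_id(mac):
    # Derive the 64-bit interface identifier from a 48-bit MAC address.
    octets = [int(part, 16) for part in mac.split(":")]
    octets[0] ^= 0x02                      # flip the universal/local bit
    eui = octets[:3] + [0xFF, 0xFE] + octets[3:]
    return int.from_bytes(bytes(eui), "big")

def slaac_address(prefix, mac):
    # Combine a router-advertised /64 prefix with the EUI-64 interface ID.
    net = ipaddress.ip_network(prefix)
    assert net.prefixlen == 64, "stateless autoconfiguration expects a /64"
    return ipaddress.IPv6Address(int(net.network_address) | eui64_interface_id(mac))

print(slaac_address("2a01:4f9:c011:841a::/64", "52:54:00:12:34:56"))
# -> 2a01:4f9:c011:841a:5054:ff:fe12:3456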

Problem solved, right?

No.

Despite being around for 30 years (almost as old as gopher!), most people still do not have access to an IPv6 capable network:

10 Users don't have IPv6 support
20 ... so websites are forced to support (ancient) IPv4.
30 ... and users don't notice they are missing anything.
40 ... and don't complain to their ISP
50 GOTO 10

Because of this, even though the solution has existed ~forever, bad decisions from the 1980s continue to make your internet connection worse and more expensive.

To help break out of this cycle, I've decided to remove IPv4 support on my site. Cutting off most of my readers is a bit harsh, so it'll only be disabled for one day each month:

The 6th will now be IPv6 day.

Any attempts to access my site over IPv4 will yield a message telling you that your network still doesn't support a 30-year-old standard. If you really want to access my site on the sixth, use your phone. All major cell carriers have long since caught up with the times (because giving each device its own address improves performance).

Obviously, one website going down is just a site going down. For this to work, a lot of people have to do it. If you have a website where 97% uptime is tolerable, please consider doing this.

If downtime is too much for you, how about a banner that warns about IPv4?
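If the banner route appeals, here is one possible sketch using Flask; it is purely illustrative, since the post doesn't say how the author's site is actually served, and behind a reverse proxy you would need the forwarded client address rather than the peer address.

import ipaddress
from flask import Flask, request

app = Flask(__name__)

IPV4_WARNING = (
    "<p><strong>Heads up:</strong> you are reading this over IPv4. "
    "Ask your ISP why they still don't support a standard from 1995.</p>"
)

@app.route("/")
def index():
    # request.remote_addr is the peer address; behind a proxy, use the
    # forwarded headers (carefully) instead.
    client = ipaddress.ip_address(request.remote_addr)
    banner = IPV4_WARNING if client.version == 4 else ""
    return banner + "<p>Regular page content goes here.</p>"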



Many anti-AI arguments are conservative arguments


Most anti-AI rhetoric is left-wing coded. Popular criticisms of AI describe it as a tool of techno-fascism, or appeal to predominantly left-wing concerns like carbon emissions, democracy, or police brutality. Anti-AI sentiment is surprisingly bipartisan, but the big anti-AI institutions are labor unions and the progressive wing of the Democrats.

This has always seemed weird to me, because the contents of most anti-AI arguments are actually right-wing coded. They’re not necessarily intrinsically right-wing, but they’re the kind of arguments that historically have been made by conservatives, not liberals or leftists. Here are some examples:

  • Many AI critics complain that AI steals copyrighted content, but prior to 2023, leftists were largely anti-intellectual-property on principle (either because they’re anti-property, or because they characterize copyright as benefiting huge media corporations and patent trolls).
  • A popular anti-AI-art sentiment is that it’s corrosive to the human spirit to consume AI slop: in other words, art just inherently ought to be generated by humans, and using AI thus damages some part of our intangible human soul. Whether you like this argument or not, it’s structurally similar to a whole slate of classic arguments-from-intuition for conservative positions like anti-abortion or anti-homosexuality.
  • Weird new technological art has traditionally been championed by the left-wing and dismissed by the right-wing (as inhuman, cheap, or degenerate). But when it comes to AI art, it’s the left-wing making these arguments, and others (not necessarily right-wingers) arguing that AI art can also be a medium of human artistic expression.
  • One main worry about AI is that it’s going to take over a lot of jobs. This is a compelling argument! But the left-wing has recently been famously unsympathetic to this same argument around fossil-fuel energy jobs like coal mining, to the point where Biden infamously advised a group of miners in New Hampshire to learn to code.[1] Halting technological progress to preserve jobs is quite literally a “conservative” position.

On top of all that,[2] frontier AI models themselves are quite left-wing. Notwithstanding some real cases of data bias (most infamously Google’s image model miscategorizing dark-skinned humans as “gorillas”), the models reliably espouse left-wing positions. Even Elon Musk’s deliberate attempt to create a right-wing AI in Grok has had mixed success. In 2006, Stephen Colbert coined the phrase “reality has a well-known liberal bias”. If the left-wing were more sympathetic to AI, I think they would be using this as a pro-left argument.[3]

So what happened? A year ago I wrote Is using AI wrong? A review of six popular anti-AI arguments. In that post I blame the hard right-wing turn many big tech CEOs made in 2024. That was around the same time that LLMs were emerging in the public consciousness with ChatGPT, so it made sense that AI got tagged as right-wing: after all, the billionaires on TV and Twitter talking about how AI was going to change the world were all the same people who’d just gone all-in on Donald Trump. I still think this is a pretty good explanation - just unfortunate timing - but there are definitely other factors at play.

One obvious factor is the hangover from the pro-crypto mania of 2021 and 2022, where many of the same tech-obsessed folks also posted ugly art and talked about how their technology would change the world forever. Few of these predictions came true (though cryptocurrency has indeed changed the world forever), and it’s understandable that many people viewed AI as a natural continuation of this movement.

On top of that, Donald Trump himself has come out strongly pro-AI, both in terms of policy and in terms of actually posting AI art himself. This naturally creates a backlash where anti-Trump people are primed to be even more anti-AI.[4] Here are some more reasons:

  • AI has real environmental impact (though this is often wildly overstated, as I say here), and the right-wing is politically committed to downplaying or denying anthropogenic environmental impacts in general.
  • When times are tough, it’s easy to blame the hot new thing that everyone is talking about. Because the right-wing is currently ascendant in the US, left-wingers are more inclined to talk about how tough times are.
  • The left-wing is over-represented in the kind of “computer jobs” that are under direct threat from AI.
  • Being pro-Europe has always been left-wing coded, and Europe has been noticeably slower and more sceptical about AI than the USA.

Let me finally put my cards on the table. I would describe myself as on the left wing, and I’m broadly agnostic about the impact of AI. Like the boring fence-sitter I am, I think it will have a mix of positive and negative effects. In general, I’m unconvinced by the pro-copyright and human-soul-related anti-AI arguments, or by the idea that AI is inherently right-wing, but I’m troubled by the environmental impact and the impact on jobs (which in my view are more classically left-wing positions).

Still, I’m curious what will happen when the left-wing flavor of anti-AI rhetoric disappears, which I think it will (as I said at the start, anti-AI sentiment is actually pretty bipartisan). When people start making explicitly right-wing anti-AI arguments, will that cause the left-wing to move a little bit towards supporting AI? Or will right-wing institutions continue to explicitly support AI, allowing anti-AI sentiment to become a wedge issue that the left-wing can exploit to pry away voters? In any case, I don’t think the current state of affairs is particularly stable. In many ways, the dominant anti-AI arguments would fit better in a conservative worldview than in the worldview of their liberal proponents.


  1. I don’t think any did, which is probably for the best - they would have only had a couple of years to break into the industry before hiring collapsed in 2023.

  2. Another point that isn’t quite mainstream enough but that I still want to mention: AI critics often argue that cavalier deployment of AI means that people might take dangerous medical advice instead of simply trusting their doctor. But anyone who’s been close to a person with chronic illness knows that “just trust your doctor” is kind of right-wing-coded itself, and that the left-wing position is very sympathetic to patients who don’t or can’t. In a parallel universe, I can imagine the left-wing arguing that patients need AI to avoid the mistakes of their doctors, not the other way around.

  3. Is it a good argument? I don’t know, actually. The easy counter is that the LLMs are just mirroring the biases in their training data. But you could argue in response that superintelligence is also latent in the training data, and that hill-climbing towards superintelligence also picks up the associated political positions (which just so happen to be left-wing).

  4. I am no fan of Donald Trump, but it doesn’t follow that everything he supports is bad (e.g. the First Step Act).


Stuck Character Service


"It's still early days," some guy splutters in anger, peeved that people like myself are cackling loudly at Sal Khan's admission that Khanmigo has been a big flop. "It's simply too soon to say" anything about "AI" in education, he insists, posting angrily at the sheer audacity that someone, anyone (but specifically Dan Meyer and also specifically me) would dare offer an "RIP" for Khanmigo and for those "edtech industry dreams of AI tutors."

"Someday soon" we will have robot tutors, this guy vows.

And perhaps someday, god help us, we will – not because the technology will ever be all that good but because technological autocrats will give us no choice, because we will have abandoned public education for some promise of instantaneous individualization – maybe for our kids but much more likely for those ones.

I don't believe that "AI tutors" are the future, to be clear. Most people don't, and increasingly they are questioning this weird vision of their kid being trained in some Ender's Game click-factory.

"AI tutors" are not the future; they're the past.

"It's early days" – people toss out that cliche all the time to defend the failures of "AI" to work well, to be embraced, to make money, and so on. But it's not early days. Not remotely. We've been trapped in Sam Altman's ChatGPT hustle for almost five years now. More broadly, the field of "AI" – in education or otherwise – can only be described as nascent if all technologies that emerged in the Cold War era – the dishwasher, the television, the intercontinental ballistic missile, the chatbot, for example – are also still in their infancy.

Several of the earliest "AI" researchers – Marvin Minsky, Seymour Papert, Roger Schank, Herbert Simon – were interested in human, not just machine learning. And they (and their grad students) then went and built computerized systems that tried to teach. The first intelligent tutoring system was developed in the late 1960s. "AI tutors" are older than Mark Zuckerberg; they're older than Marc Andreessen.

Alas, some people do love to ignore or dismiss history (ironically, many doing so while also embracing "AI," which is in its own way just an amassment of historical data algorithmically re-presented). I'd quip that these folks have the memory of a goldfish, but apparently goldfish can actually remember things for up to a couple of weeks; whereas some folks can read the admission one day that Khanmigo didn't really revolutionize education and then turn around, just a few days later, and not blink at all at the news that Khan Academy, along with ETS and TED, is going to revolutionize education.

The brain fog here isn't just this strange, impenetrable aura of awesomeness surrounding Khan Academy either. (Having very deep financial resources along with the backing of industry and media surely helps keep many people duly reverent.)

This new venture, the Khan TED Institute, will supposedly offer a competency-based bachelor’s degree in "AI" “for as little as $10,000.”

Sound familiar? Yeah, it's an initiative that sure sounds a lot like all the coding bootcamps that sprung up a decade or so ago when the "jobs of the future" were all purportedly going to be in software development. Remind me, what happened to that trend?

While some bootcamps did partner with colleges, the primary thrust of the initiative was to bypass the university – part of that larger Silicon Valley narrative (ugh, again with those technological autocrats) that traditional educational institutions and practices were too slow. Too human (too humanities). Too feminized. Too impractical. Too "woke." Silicon Valley has been more than happy to join forces with those sowing discord (plenty of it, frankly, well deserved) about the costs and content of higher education. As Jeppe Klitgaard Stricker observes, this new Khan TED Institute plans to "reimagine higher education without it." While partners include Google and Microsoft and Bain and McKinsey (LOL. Consultants), there is not a single university involved.

For this and for many obvious reasons, KhanTED is, in the words of John Warner, "bullshit." Bullshit yet again. He is right that Khan's "history of failure relative to his stated intentions is both instructive and encouraging"; but I still worry because these sorts of projects are precisely the exploitative, deceptive (and sometimes expressly fraudulent) visions of the future of education (and ed-tech) that actively serve to make things worse for students, and often the most vulnerable ones at that.


Ben Riley has published a very useful “illustrated guide to resisting 'AI is inevitable' in education,” which points to recent research and recent news. (Still more research and news from the past week alone: “AI Assistance Reduces Persistence and Hurts Independent Performance.” "Delegation to artificial intelligence can increase dishonest behaviour." “The Deepfake Nudes Crisis in Schools Is Much Worse Than You Thought.” “How AI Disrupts the Teen Mental Health Field.")

Do also please read The New York Times profile of Ben and his father: “He Warned About the Dangers of A.I. If Only His Father Had Listened.” It's incredibly heartbreaking and enraging, and just one of so many stories of how this technology is tearing us apart.

(I still feel like the "'AI' psychosis" discussion is missing the mark and failing to capture the larger social shifts around sociopathy that digital technologies encourage. I really will try to finish up some writing on that.)


On Tuesday, I published an essay (for paid subscribers, hint hint) on "The Productivity Software Way of Thinking" -- what was meant to be a 20-minute talk, but ugh. COVID-related cancellation.

I haven't stopped stewing about some of these ideas (as one does when one actually reads and writes and thinks rather than relying on the fancy autocomplete to perform a sad parody of these tasks. But also, as one does when one is feverish with COVID.)

As I noted in my remarks, I think it's significant that "AI" is being injected into the productivity suite of products. "AI" is, after all, at its core about squeezing more productivity out of workers. But it's also the site in which the artifacts of knowledge production have been, well, produced for the past few decades. And now, under the guise of the "unreasonable effectiveness of data," you get a product like Notebook LM, in which users (students and teachers and principals and professors alike) believe that "the answers" will magically emerge, without theory or thinking.

Why is it that the most vocal cheerleaders of generative A.I. are always the hackiest motherfreakers around? – Colson Whitehead

I'll be at the Humans First: Adolescent Education in the Age of AI conference in Atlanta this summer. This is going to be a stellar event – no vendors! no sponsors! – focused on "keeping human formation — not efficiency or automation — at the center of conversations around student development as we examine the social and ethical implications of AI."


Stuck Character Service
(Image credits)

Today's bird is the common kestrel (Falco tinnunculus), a.k.a. the European kestrel, the Eurasian kestrel, the Old World kestrel, or in countries like the UK where there are no other related birds, simply the kestrel. The species name "tinnunculus" comes from the Latin "tinnulus," or "shrill." While the kestrel is notably smaller than other birds of prey, it was once -- according to Wikipedia at least -- known as the "windfucker," because of its habit of hovering while hunting.

Thanks for subscribing to Second Breakfast.
