Nobody does pasta quite like the Italians, as anyone who has tasted an authentic "pasta alla cacio e pepe" can attest. It's a simple dish: just tonnarelli pasta, pecorino cheese, and pepper. But its simplicity is deceptive. Cacio e pepe ("cheese and pepper") is notoriously challenging to make because it's so easy for the sauce to form unappetizing clumps with a texture more akin to stringy mozzarella than to a smooth, creamy sauce.
A team of Italian physicists has come to the rescue with a foolproof recipe based on their many scientific experiments, according to a new paper published in the journal Physics of Fluids. The trick: using corn starch for the cheese and pepper sauce instead of relying on however much starch leaches into the boiling water as the pasta is cooked.
"A true Italian grandmother or a skilled home chef from Rome would never need a scientific recipe for cacio e pepe, relying instead on instinct and years of experience," the authors wrote. "For everyone else, this guide offers a practical way to master the dish. Preparing cacio e pepe successfully depends on getting the balance just right, particularly the ratio of starch to cheese. The concentration of starch plays a crucial role in keeping the sauce creamy and smooth, without clumps or separation."
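The ratio the authors emphasize is small but specific: press coverage of the paper reports an optimal starch content of roughly 2-3% of the cheese's weight (e.g. about 4 g of starch for 160 g of pecorino). Treating that range as an assumption from the coverage rather than a quotation of the paper itself, the arithmetic can be sketched as:

```python
def starch_for_cheese(cheese_g: float, starch_fraction: float = 0.025) -> float:
    """Grams of starch to dissolve for a given mass of pecorino.

    The 2-3% starch-to-cheese range is the figure reported in coverage
    of the Physics of Fluids paper; the 2.5% default is a midpoint
    chosen here for illustration, not a value from the paper's text.
    """
    if not 0.02 <= starch_fraction <= 0.03:
        raise ValueError("reported stable range is 2-3% of cheese weight")
    return cheese_g * starch_fraction

# 160 g of pecorino calls for about 4 g of starch at the midpoint ratio
print(starch_for_cheese(160))  # → 4.0
```

Below about 1% starch, the paper's experiments found the sauce prone to the dreaded clumping as it heats; the point of measuring the starch directly is to take that variable out of the pasta water's hands.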
Perhaps one of the most pervasive longstanding technology conspiracy theories is that your smartphone is constantly listening in on your private conversations. Almost everyone at some point has felt the eerie synchronicity of seeing an ad served up on a social media platform that exactly corresponds to a recent conversation. It's certainly unnerving, and the simplest explanation is one of direct surveillance. Of course Facebook, Google, and Apple are all listening in on your private conversations with friends, catching key words, and then serving you tailored advertisements. And of course they would deny this is happening.
Look at the cereal aisle. A hundred kinds of sugar and grain and color. Look at your Netflix queue. Thousands of hours of people falling in love, falling apart, saving the galaxy. Look at your inbox, your resume, your city. More options every day. New apps. New jobs. New courses. New gurus. New upgrades.
And yet?
And yet somehow, we're more stuck than ever.
Freedom isn't just having options. Freedom is knowing what to do with them.
Options without clarity create paralysis. Options without purpose create emptiness. Options without commitment create regret.
We have confused "the ability to choose" with "the ability to live." It's one thing to be allowed to do anything. It's another thing to know which thing matters, to say no to the rest, and to actually do it. That's the heavy lifting.
We obsess over keeping our options open, as if every decision has an expiration date and commitment is a trap. We flirt with possibilities but marry none. We download and abandon. We swipe and swipe and swipe. We change majors, change jobs, change towns, chasing some invisible finish line where it all feels perfectly right.
Freedom doesn't show up when you have every door open. Freedom shows up when you walk through a door, close it behind you, and get to work.
You don’t get more alive by having more cereal to pick from. You get more alive by choosing breakfast and moving forward.
Freedom is scary because it demands ownership. It demands risk. It demands letting a thousand possibilities die so one can live. It asks you to stop auditioning your life and actually start performing it.
The world will keep handing you more options. It’s a business model now. It’s a feature, not a bug. Infinite scroll. Unlimited plans. Just-in-time everything.
The hard work is not finding new choices. The hard work is finding your choice.
And standing by it long enough to make it real.
Westenberg explores the intersection of technology, systems thinking, and philosophy that shapes our future—without the fluff.
I made a website. It's called One Million Chessboards. It has a million chessboards on it. Moving a piece moves it for everyone, instantly. There are no turns.
Tressie McMillan Cottom recently offered a neat little summary of how universities have responded to generative AI: "Academics initially lost our minds over the obvious threats to academic integrity. Then a mysterious thing happened. The typical higher education line on A.I. pivoted from alarm to augmentation. We need to get on with the future, figure out how to cheat-proof our teaching and, while we are at it, use A.I. to do some of our own work, people said."
Although others might have "got on with the future," I’m still stuck on those “obvious threats,” I confess – and not just to academic integrity but to integrity in general.
What happened, I wonder, that prompted this narrative shift, that made AI – a tool of deception (see: the very substance of the Turing Test), if not one for cheating – no longer an educational or even a social concern?
I mean, we know the answer, at least in part: what happened is money – economic precarity, and the fear of economic precarity, that has everyone chasing the rapidly disappearing research dollars, investment dollars, and employment prospects that are not yet utterly beholden to AI hype.
But maybe something else has shifted too, quite generally, with how we feel about honor and integrity.
Even if we just look at AI in school, I think it's still worth asking, what happened to those concerns about widespread cheating? Have they grown? Or have they been resolved? Or have they been dismissed? Perhaps it's just a reflection of a louder refrain that those promoting AI in education repeat: that cheating isn't actually a problem. Or at least, no more a problem than it's ever been. We can’t blame generative AI for cheating, because students have always cheated.
If and when students do cheat, advocates of AI go on to argue, it's not the fault of the technology; the blame lies mostly with teachers and with their antiquated teaching practices. Cheating is the result of not making assignments relevant, meaningful, and/or "cheat proof," which educators should have done well before now anyway. Shame on them, serves them right, or something like that.
Of course, this distinction presumes that technology and culture are distinct, when they surely are not. Technology and culture are inseparable, shaping and shaped by one another.
A new AI startup called Cluely released an ad last week that, as Garbage Day’s Ryan Broderick put it, marks “a true low point for the human race.” It certainly underscores how central deception and dishonesty are to the culture of AI and of computing technology and Silicon Valley more broadly.
Earlier this year, Cluely co-founder Roy Lee posted a video of himself using the tool, initially called Interview Coder, to cheat during the coding portion of an internship interview with Amazon. After the video went viral (Amazon had it taken down with a copyright claim), Lee and his co-founder Neel Shanmugam faced disciplinary action from Columbia University, where they were students – a one-year suspension for Lee. They've both now dropped out to become entrepreneurs, apparently – something that speaks volumes, I'd say, about entrepreneurship.
“This tool cannot be used to cheat in class, except in some really artificially contrived scenario that has never existed before in a Columbia class. And that’s kind of what they’re trying to get at in a very artificial way,” Shanmugam said. “So it felt like they’re trying to discipline something they had zero right to discipline. But they’re basically reciting academic honesty, and [Interview Coder] does not let you do that.”
One way to read this is that cheating and "academic integrity" are contrivances, made-up scenarios of meaningless gotchas. Honesty, integrity – these seem to be irrelevant here, with or without the appellation of "academic." And those who demand integrity – culturally or technologically, academic or otherwise – are overreaching.
While Lee initially claimed that the Amazon stunt was meant to draw attention to the problems with corporations using LeetCode-style interviews for programmers, he has now fully embraced the "cheating" angle as the key selling point for his new tool. He's posted a manifesto on the company website that positions Cluely as part of the legacy of the calculator, spellcheck, and Google, which are all described as technologies of cheating. Indeed, all technological innovation is, at first, a kind of trick or deception. "Every time technology makes us smarter, the world panics. Then it adapts. Then it forgets. And suddenly, it's normal." The work of the future, the Cluely manifesto declares, will not involve "effort," but shortcuts.
“So, start cheating. Because when everyone does, no one is.”