
“Publishers aren’t evil, but they are desperate.”


A meandering and messy, but absolutely worthwhile, essay from Shubham Bose about the bloat and hostile behaviours of news sites:

I went to the New York Times to glimpse at four headlines and was greeted with 422 network requests and 49 megabytes of data. […]

Almost all modern news websites are guilty of some variation of anti-user patterns. As a reminder, the NNgroup defines interaction cost as the sum of mental and physical efforts a user must exert to reach their goal. In the physical world, hostile architecture refers to a park bench with spikes that prevent people from sleeping. In the digital world, we can call it a system carefully engineered to extract metrics at the expense of human cognitive load. Let’s also cover some popular user-hostile design choices that have gone mainstream.

Bose has a knack for naming these hostile patterns: the Pre-Read Ambush distracts you before you even start reading, Z-Index Warfare sets multiple pop-ups competing with one another, and Viewport Suffocation covers so much of the screen with crap that you can barely see the content. You can almost see those names flying by on the massive screens in the final scenes of WarGames.

By the way, I didn’t know that ad bidding actually happens on my computer, using my CPU and clobbering the responsiveness of my interface:

Before the user finishes reading the headline, the browser is forced to process dozens of concurrent bidding requests to exchanges like Rubicon Project […] and Amazon Ad Systems. While these requests are asynchronous over the network, their payloads are incredibly hostile to the browser’s main thread. To facilitate this, the browser must download, parse and compile megabytes of JS. As a publisher, you shouldn’t run compute cycles to calculate ad yields before rendering the actual journalism.
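
To make that concrete, here is a rough sketch of what this client-side (“header”) bidding looks like. The endpoints and the Bid shape below are hypothetical, and real pages usually delegate the auction to a library such as Prebid.js, but the kind of work being pushed onto the reader’s browser is the same:

```typescript
// Sketch of client-side header bidding (hypothetical endpoints).
// The browser, not the publisher's server, fans out bid requests
// to ad exchanges and picks a winner before rendering any ads.
interface Bid {
  exchange: string;
  cpm: number; // bid price per thousand impressions
}

const EXCHANGES = [
  "https://exchange-a.example/bid",
  "https://exchange-b.example/bid",
  "https://exchange-c.example/bid",
];

async function requestBid(url: string): Promise<Bid | null> {
  try {
    const res = await fetch(url, { method: "POST", body: "{}" });
    return (await res.json()) as Bid;
  } catch {
    return null; // exchange errored or timed out
  }
}

async function runAuction(): Promise<Bid | undefined> {
  // The requests are asynchronous, but every response still has to
  // be parsed on the main thread, alongside the megabytes of
  // vendor JavaScript that arrived to orchestrate all of this.
  const bids = await Promise.all(EXCHANGES.map((url) => requestBid(url)));
  return bids
    .filter((b): b is Bid => b !== null)
    .sort((a, b) => b.cpm - a.cpm)[0]; // highest CPM wins
}

runAuction().then((winner) => {
  if (winner) console.log(`rendering creative from ${winner.exchange}`);
});
```

Multiply this by dozens of ad slots and vendor scripts, and the main-thread cost Bose measured stops being mysterious.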

The essay ends on a call to action:

No individual engineer at the Times decided to make reading miserable. This architecture emerged from a thousand small incentive decisions, each locally rational yet collectively catastrophic.

They built a system that treats your attention as an extractable resource. The most radical thing you can do is refuse to be extracted. Close the tab. Use RSS. Let the bounce rate speak for itself.

Funny you should say that. There is another user-hostile pattern, not mentioned in the article because it happens on the way out: the swipe-back gesture on a mobile phone is hijacked to insert a frustrating “Keep on reading” page rather than taking you back where you came from.

It’s there on many sites, from Slate to Ars Technica.

It usually shows cheap, attention-grabbing headlines (in the case of Ars Technica, the Linus Torvalds article it offered was over a decade old!). I originally thought this was just a last-ditch attempt to keep me on the site, but when I asked on social media, a reader suggested there is another reason:

It’s an SEO play. If you land on a site because of a Google search and swipe back to Google, it sends a signal to Google that it wasn’t the result you were looking for. So by forcing users to click a link on the page to read more than two paragraphs, it means the user is unable to swipe back to Google and send that negative SEO signal.

Even the bounce rate is not allowed to speak for itself.
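
Mechanically, the hijack is trivial. Here is a minimal sketch of the trick using the History API; the markup and URL are invented for illustration, not lifted from Slate or Ars Technica:

```typescript
// Minimal sketch of the back-gesture hijack (illustrative only).
// On load, the page pushes a duplicate history entry, so the
// user's first swipe back pops that entry instead of leaving.
window.addEventListener("load", () => {
  history.pushState({ interstitial: true }, "", location.href);
});

window.addEventListener("popstate", () => {
  // The swipe back lands here while the page is still alive, so
  // the site swaps in a "Keep on reading" screen. A second swipe
  // is needed to actually get back to the search results.
  document.body.innerHTML = `
    <h1>Keep on reading</h1>
    <a href="/ancient-torvalds-story">You won’t believe…</a>`;
});
```

One pushState call is all it takes to make “back” mean “stay”.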


The refreshing power of disagreement


One of the most famous experiments in social psychology took place in the early 1950s. Solomon Asch, a professor at Swarthmore College, gathered together groups of young men for what he told them was an experiment in “visual judgment”. It was no such thing.

What happened is often known as the “conformity experiment”, but that is a misleading label for an oft-misunderstood study. Asch ran many variations on his experiment, and the most surprising and powerful lesson is not about the power of conformity, but about the power of disagreement.

Asch’s basic approach was to show two cards to a group of about eight people. One card had a single line on it: the reference line. The other card displayed three lines of different length. The task was a straightforward multiple choice, picking the line that was the same length as the reference line. This wasn’t hard; when people were asked to do this task on their own, they almost never made a mistake.

However, Asch was not asking people to do this task in isolation, but as a member of a group. Participants would be asked, one by one, to tell the rest of the group their answer. This made space for the possibility that experimental subjects would be guided not by their own eyes, but by the opinions of others.

The groups were asked to do this 18 times, but Solomon Asch had a trick to play. Everyone in each group was a confederate working for Asch, except a single unsuspecting experimental subject. This poor dupe would be sitting near the end of the line. The confederates had instructions to get the first two questions right and then unanimously agree on the wrong answer for most of the rest.

Imagine the jolt of surprise and anxiety as the experimental subject saw one person after another contradict the evidence of his own eyes. People felt real pressure to conform, with more than one-third of the answers matching the group’s delusion rather than the obvious truth.

Why? When debriefed, some people said they had changed their minds, figuring the group must be right. Others said they didn’t change their minds, but did change their answers, not wanting to spoil the experiment. Still others were staunchly independent, saying that they presumed the group was right and they were wrong, but felt a duty to call them as they saw them.

What fascinates me about Asch’s experiment is what happened when one of the confederates had been instructed to disagree with the group and give the correct answer instead. The answer: the spell of conformity was broken. People made only a quarter as many errors, with the error rate falling below 10 per cent. The pressure from the group had lost much of its power.

Even more brilliant was another variation in which Asch again instructed a confederate to disagree with the group. This time, however, the confederate was an “extremist dissenter”, giving an answer that was even more wrong than the majority consensus. The result? The experimental subjects generally gave the correct answer; their error rate was still below 10 per cent.

Asch had demonstrated three things. First, people will go against the evidence of their own eyes if contradicted by a unanimous group. Second, group pressure is much weaker if even a single person dares to disagree with the group. Third, and most remarkable: it does not matter if the dissenter is mistaken; dissent punctures group pressure either way. People are liberated to say what they believe, not because the dissenter speaks the truth but because the dissenter demonstrates that disagreement is possible.

I thought of Solomon Asch when I heard about a cookbook by Julia Child and Jacques Pepin, Julia and Jacques Cooking at Home. It’s full of the classics, but there are two very different recipes for each dish — one by Julia and one by Jacques. In the margins, each offers a jovial explanation of what the other cook has done wrong, why they made different decisions and what effect those decisions have on the final meal. It is, writes philosopher C Thi Nguyen, “the record of an argument — a rowdy conversation between friends”.

This matters because, as with Solomon Asch’s duplicitous experiment, it shows us that disagreement is possible. The two cases seem very different, not least because while there is only one correct answer to Asch’s visual perception test, there is more than one way to sauté a fish. Yet the disagreement is valuable either way, because it gives us permission to think for ourselves.

Many years ago I was involved in scenario planning for the oil company Shell. It was always a fascinating exercise, but I now realise that one of the most important strengths of the process was rarely discussed: there were always at least two scenarios, and all the scenarios were given equal status. This was Cooking at Home meets corporate strategy: the fundamental assumption was that there was more than one plausible future, and a rowdy conversation about the different possibilities unlocked a treasure chest of fresh thinking.

Charlan Nemeth is a psychologist and the author of No! The Power of Disagreement in a World that Wants to Get Along. She cautions against “contrived” dissent — for example, the Catholic tradition of having a “devil’s advocate” to argue against the canonisation of a putative saint. This sort of thing sounds good in principle, she argues, but in practice there is a limited benefit in a rote play-acting of disagreement. For one thing, everyone knows the devil’s advocate is just pretending, so nobody feels much pressure to persuade them to change their mind. “Role-playing,” writes Nemeth, “does not have the stimulating effects of authentic dissent.”

Yet some contrivances are better than others. Nemeth writes approvingly of an investment firm only making decisions after considering serious arguments both for and against a position. What makes this different from playing devil’s advocate? Perhaps the sense that the contrary arguments are not a game, but made in all seriousness.

Another contrivance is the idea of “red teaming” an idea — giving a group the task of trying to rip a new idea apart before that idea is adopted. Is this an empty ritual, or a serious practice? Depending on people’s intent, it could be either.

Contrived dissent is better than nothing, especially if the contrivance itself is taken seriously. But the most valuable form of dissent is authentic, even stubborn and brave. There is no substitute for finding one of those people who feel a duty to call things as they see them.

Written for and first published in the Financial Times on 25 Feb 2026.

I’m running the London Marathon in April in support of a very good cause. If you felt able to contribute something, I’d be extremely grateful.


In Math, Rigor Is Vital. But Are Digitized Proofs Taking It Too Far?


In ancient Greece, Euclid showed that if you agree on a small list of preliminary principles, or axioms, you can use deductive reasoning to reveal all sorts of new mathematical truths. But although these early proofs, as mathematicians call them, were derived using the laws of logic, they sometimes also contained hidden, unstated assumptions or relied on misleading intuitions. In these cases…
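
The piece concerns digitized, machine-checked proofs; for anyone who has never seen one, here is a minimal illustration in Lean 4 (my example, not Quanta’s). Every step has to be justified from axioms or previously proven lemmas, which is precisely what rules out the hidden assumptions and misleading intuitions of those early proofs:

```lean
-- A machine-checked proof: the kernel accepts nothing that does
-- not follow from the axioms and previously proven results.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Even an "obvious" fact needs explicit justification; here both
-- sides reduce to the same numeral by computation.
example : 2 + 2 = 4 := rfl
```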


Why I (Still) Boycott AI

San Francisco, California. (Photo by Justin Sullivan/Getty Images.)

A couple of years ago, when AI had come onto the market and was clearly reshaping society, I decided that I would have nothing to do with it—I would boycott. This decision was based partly on morals and partly on practicality—I suspected that, underneath the really impressive technological breakthrough, AI was still, essentially, a one-trick pony. And now that AI has evidently gone through another revolution, vastly improving in recent models and opening up a whole world of coding to the public at large, it seems I should re-examine my decision to boycott—which, actually, is one of the easiest decisions I’ve ever made. I am more adamant about it than ever.

I might be a lot more interested in developments with AI if I hadn’t already seen this movie. It played out across a great swath of my life in the form of social media. It featured all the same actors making all the same promises early on—and all of them following the same playbook to extract our attention, as if it were a mineable resource, and then selling that for ads, as well as whatever personal data we were unwise enough to leave unguarded on our own devices. I remember showing up on a college campus looking forward to young people connecting with each other; instead—Facebook had just come out—everybody seemed to be sealed up in their rooms carrying out a facsimile of social exchange. All this time later, has anything really improved? Every time I go into my bank account I seem to find some recurring subscription that I signed up for in the flush of tech optimism and long ago stopped using—but which still clings to my wallet like a barnacle to a ship’s hull.


So what’s different this time around? Well, if in the previous installment of this series, the tech companies targeted social relationships, and hacked at the envy and FOMO and anxiety that animates social relationships as much as the substance itself, now they’ve made their way into a different kind of space—the deep privacy of people’s innermost lives. In the AI regime, people confess their darkest secrets to their LLM therapist, which spits back out what they want to hear—although without the legal confidentiality of actual therapy sessions, with everything forming a digital record, and with the tech companies, on a whim, sometimes posting the raw text of chats onto the open internet. People earnestly ask AI about the existence of God or the meaning of life in the way they might once have sunk down to their knees and searched the silence of their own soul. And people freely turn their most creative projects, sometimes their life’s work, over to AI under the bashful premise that they themselves are not that good at the thing they most love to do and it’s better handled by the bot.

Now that we’re a couple of years into this whole set of developments, the essentials of it—and the battle lines—are a bit more clear. It’s not really about what AI can do or what it can’t do, whether it will take all the jobs or won’t, whether it will destroy the world or not. The question is about agency—do you choose to exert agency in your own life, in the way that humans always have and were doing just fine with until, like, three years ago? Or do you prefer to turn it over to a machine, which really means turning it over to the data miners and the advertising innovators in the world’s largest tech corporations?


Maintaining an AI boycott, it has to be acknowledged, is getting progressively harder to do—and I am not always the strictest Luddite about it. I was hooked into various forms of AI before they started being called AI—Google Translate, Apple’s text predict feature, and so on. If I think I have cancer for some ridiculous reason, I now no longer need to go to all the trouble of clicking on WebMD to be told I don’t have cancer; the AI tools can do it with just a glance at the screen.

But... it’s really not so difficult either. Every so often—in a moment of weakness—I’ll download an AI app onto my phone, and the process begins all over again: I rack my brains to come up with some way the technology might actually benefit my life. So far, I haven’t been able to come up with anything. I like structuring and writing my own essays; I have no idea why I would outsource that to AI. The advice it gives always starts out kind of interesting—and there’s always a moment of being awed by the technology—before it reveals itself to be gleaned from a couple of random pieces of information on the web. The actual terrain of my life seems to lie somewhere AI can’t touch or help—unless I proactively turn myself over to the bot.



And meanwhile, in that time, I can feel the whole world around me doing exactly that. It’s very common to have a conversation where someone shares their great idea—only to reveal that it came from AI. Teaching (I teach journalism and public relations at an international university) has largely become a matter of trying to suss out what is AI and what isn’t—and, since AI became prevalent (and all my students use it), they have become distinctly lazier. They clearly feel that the AI is just better than they are and there’s no real point in trying.

The general impression is of an invasion of the body snatchers taking place all around me—anyone I deal with tends to feel like they’re becoming a kind of frontman for AI-generated work, without the work itself becoming in any way obviously better.

The lecture I tend to give to my students isn’t moral at all. It’s just very practical. They may fool their teachers sometimes with AI work, but their teachers are starting to catch on—all it takes is running their work through an AI detector, or insisting on giving them paper-and-pen exams, to starkly reveal how much they’ve been relying on the technology. And they certainly won’t be able to fool their employers. If they show up in the workforce using AI for everything, their employers will of course take them at their word and simply replace their jobs with AI. If they use AI for their products, then they will be competing with rivals using exactly the same sets of AI tools and generating the exact same kinds of work—there will be no way to distinguish themselves or to get ahead.

For adults, the question is a little more existential. It isn’t so much how to adapt to the world as what kind of world you really want to build and transmit onwards. The value that AI presupposes is optimization—doing things really, really well. AI text is always very clean; there is never a hint of, say, a spelling mistake. And AI really can do just about any task to a high level (it is obviously a work in progress, but it is getting better all the time).


But whoever said that life is about optimization? If you simply shift your focus and presuppose that life is about enriching human experience, and finding meaning, then the value of AI almost instantly dwindles away. I’ve heard of people using AI to write their diaries—but the whole point of a diary isn’t to produce some sort of masterwork, it’s to record a particular day in the highly subjective, idiosyncratic way that lends meaning to you. A friend who works for a travel writing company told me that the boss had a meeting in which it was announced that they should “welcome in AI”—with the result that, only a few months later, virtually everyone at the company lost their jobs and the ones who stayed were basically there to check over AI-generated posts for hallucinations. But travel isn’t actually just a bundle of information—the whole point of travel is the relationship between you, the traveler, and the place visited—and the result of the travel industry’s turn to AI is that it now wouldn’t even occur to me to read one of their posts.


The assumption at the moment is that AI “is the future”—a phrase like that is the underpinning of just about any conversation on AI. I’m not sure actually. I could sort of imagine this spinning the other way, that the pace of LLM improvement slows down, that more and more of what they’re generating is obviously “slop,” and that the enthusiastic early adopters are the ones who will be caught out—who will be remembered in a few years for having downsized their workforce for the sake of a bot; or for having skipped out on their education, on a crucial period of investing in themselves, in order to generate work that turns out to be indistinguishable from the machine-generated work that everybody is producing.

But the ship really seems to be sailing on that fantasy of mine. AI has already moved into a “too big to fail” category—the economy isn’t producing much, but it is producing AI and that’s the horse that we’re collectively hitching our wagons to; the tech companies haven’t quite been able to figure out a clear use for AI, but they’ve been very good at generating stickiness—at persuading people to download the free samples of AI and to use it for one thing or another so that it already becomes unimaginable to write a student paper or to prepare a recipe without it.



AI may well push through the slight bottleneck it’s been in for the past couple of years and prevail. The dissenting voices are starting to get a little lonelier; when I make this point with people, the glazed look I get back tends to mean that my interlocutor has already turned AI into a habit. But even should AI win out in the culture wars, that actually doesn’t prove the larger point. AI isn’t actually some guaranteed future that we have to succumb to whether we like it or not. The adoption of AI rests on choices—on the individual choices of billions, and there is agency, actually, in deciding whether to use it or not, an agency that may be all the greater since we were fooled once by the tech companies’ marketing gimmicks in the high social media era of the 2010s, and should at least be wiser to the tricks they are pulling with AI in the 2020s.

At the moment, the conversation about AI is basically a lot of noise. Most of it turns on AI’s capabilities. The tendencies to hallucinate and to generate confident slop were embarrassing for quite a while; the fact that AI seems to be pushing past its worst hallucinatory habits is supposed to quiet the doubters. But the conversation about capabilities is, I am convinced, basically a distraction. The technology is already impressive and will continue to improve; that’s not in dispute. But cloning and nuclear technology are also impressive, and they have strict guardrails around them.

What’s at stake isn’t really technology—it’s about taking a good hard look at what our fundamental values are and questioning whether AI aligns with them. The question isn’t whether AI is a stochastic parrot or not; the question is whether you are.

Sam Kahn is associate editor at Persuasion, writes the Substack Castalia, and edits The Republic of Letters.




Escaping the Ogallala trap


There is a closing window to stop driverless cars from creating omnigridlock.


Study: Sycophantic AI can undermine human judgment


We all need a little validation now and then from friends or family, but sometimes too much validation can backfire—and the same is true of AI chatbots. There have been several recent cases of overly sycophantic AI tools leading to negative outcomes, including users harming themselves and/or others. But the harm might not be limited to these extreme cases, according to a new paper published in the journal Science. As more people rely on AI tools for everyday advice and guidance, their tendency to overly flatter and agree with users can have harmful effects on those users' judgment, particularly in the social sphere.

The study showed that such tools can reinforce maladaptive beliefs, discourage users from accepting responsibility for a situation, or discourage them from repairing damaged relationships. That said, the authors were quick to emphasize during a media briefing that their findings were not intended to feed into "doomsday sentiments" about such AI models. Rather, the objective is to further our understanding of how such AI models work and their impact on human users, in hopes of making them better while the models are still in the early-ish development stages.

Co-author Myra Cheng, a graduate student at Stanford University, said she and her co-authors were inspired to study this issue after they began noticing a pronounced increase in the number of people around them who had started relying on AI chatbots for relationship advice—and often ended up receiving bad advice because the AI would take their side no matter what. Their interest was bolstered by recent surveys showing nearly half of Americans under 30 have asked an AI tool for personal advice. "Given how common this is becoming, we wanted to understand how overly affirming AI advice might impact people's real-world relationships," said Cheng.
