
Taking down my site on purpose:


If you have multiple computers, you'll quickly run into the problem of having data on one but needing it on the other. Because of this, people have been connecting them together since the beginning.

However, this created a classic problem: each network used its own addressing scheme, wire protocol, headers, etc., etc... If you wanted to get a file between two networks, you had to find a machine connected to both and manually forward it.

To automatically route data between networks, we had to agree on a universal numbering scheme for computers. During the 1980s, people settled on the 32-bit "IPv4" address.

Here's my server's address (split into bytes):

65.109.172.162

Back then, computers were massive and extremely expensive, so 32-bits was plenty: After all, there's no way there would ever be billions of computers in the world...

There was enough margin to assign addresses roughly geographically and in power-of-two-sized blocks. This made the internet scalable, because no individual router has to know the exact details of every computer:

When your ISP's network sees my address, it doesn't have to know what specific computer 65.109.172.162 is, just that everything starting with 65.109 should be sent to Finland.
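The prefix-based routing described above can be sketched with Python's standard `ipaddress` module. Note this is a toy illustration: the /16 prefix and next-hop labels are invented for the example, and real routing tables hold hundreds of thousands of prefixes of varying lengths.

```python
import ipaddress

# A drastically simplified routing table: prefixes mapped to next hops.
# The prefix lengths and hop names here are made up for illustration.
routes = {
    ipaddress.ip_network("65.109.0.0/16"): "toward Finland",
    ipaddress.ip_network("0.0.0.0/0"): "default upstream",
}

def next_hop(addr: str) -> str:
    """Longest-prefix match: the most specific matching route wins."""
    ip = ipaddress.ip_address(addr)
    matches = [net for net in routes if ip in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(next_hop("65.109.172.162"))  # toward Finland
print(next_hop("8.8.8.8"))         # default upstream
```

The key point is that the router only compares prefixes; it never needs a per-host entry for 65.109.172.162.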

... but it does mean that we can't even use the full 32 bits of address space.

We ran out around 2011.

To keep the internet working, people started hiding multiple computers behind a single address. Odds are, every single machine on your home network has to share a single IPv4 address using a rather complex "NAT" setup.
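Conceptually, that "rather complex NAT setup" works by rewriting each outgoing packet's source address at the router and remembering the mapping so replies can be sent back. A toy sketch of the translation table (all addresses and ports here are invented; 203.0.113.7 is a documentation address standing in for a real public one):

```python
# Toy NAT: the router swaps each internal (address, port) pair for its
# single public address plus a fresh public port, and reverses the
# mapping for replies.
PUBLIC_IP = "203.0.113.7"

nat_table = {}      # (private_ip, private_port) -> public_port
next_port = 40000

def outbound(private_ip: str, private_port: int):
    """Rewrite an outgoing packet's source; remember the mapping."""
    global next_port
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next_port
        next_port += 1
    return PUBLIC_IP, nat_table[key]

def inbound(public_port: int):
    """Find which internal machine a reply belongs to."""
    for (ip, port), pub in nat_table.items():
        if pub == public_port:
            return ip, port
    return None  # unsolicited inbound traffic is dropped

src = outbound("192.168.1.10", 51000)
print(src)              # ('203.0.113.7', 40000)
print(inbound(src[1]))  # ('192.168.1.10', 51000)
```

That final `return None` is exactly why hosting services behind NAT is painful: with no existing table entry, the router has no idea which internal machine an incoming connection is for.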

... but even this is not enough.

Recently, ISPs have started putting multiple customers behind a single address. This obviously creates problems if you want to host services from home (a website, multiplayer games, etc.), but it also affects normie activities: it's common to get punished by a website for something you didn't do.

If you've ever seen a "You've been blocked" message, gotten a Captcha every time on a specific site, or simply had it mysteriously refuse to load... there's a good chance this is what's happened.

Someone who had the address before you was either doing something bad — or more likely — got hacked and was used as an unwitting proxy for a criminal's traffic.

Blocking by IP address is one of the few effective ways to deal with bad actors on the internet: it's the only way to block a particular person without requiring everyone to make an account.

The solution is quite simple:

If we've run out of addresses that fit in 32-bits... just use longer ones. This was first standardized all the way back in 1995 with IPv6 and 128-bit addresses: four times as long as IPv4. Here's how many unique addresses that allows:

340,282,366,920,938,463,463,374,607,431,768,211,456

That's quite a bit. Here's mine:

2a01:04f9:c011:841a:0000:0000:0000:0001
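The long form above can be written much more compactly; Python's `ipaddress` module applies the standard compression rules (drop leading zeros in each group, collapse the longest run of all-zero groups to `::`). It also makes the address-count arithmetic trivial to check:

```python
import ipaddress

# The IPv6 address from the article, in full (exploded) form.
addr = ipaddress.ip_address("2a01:04f9:c011:841a:0000:0000:0000:0001")

print(addr)           # 2a01:4f9:c011:841a::1  (compressed form)
print(addr.exploded)  # the full form back again

# 128 bits of address space:
print(2**128)         # 340282366920938463463374607431768211456
```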

These larger addresses also have a lot of other benefits: there's less need for virtual hosting, the address hierarchy can be cleaner, and a host can embed its MAC address in the lower 64 bits of a /64 block for stateless autoconfiguration.
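That MAC-based autoconfiguration is the "modified EUI-64" scheme: split the 48-bit MAC in half, insert `ff:fe` in the middle, and flip the universal/local bit, yielding the 64-bit interface identifier that fills the lower half of the address. A sketch, using a made-up MAC and an example /64 prefix (note that modern systems mostly prefer randomized privacy addresses instead):

```python
import ipaddress

def eui64_interface_id(mac: str) -> bytes:
    """Modified EUI-64: derive a 64-bit interface ID from a 48-bit MAC."""
    b = bytearray(bytes.fromhex(mac.replace(":", "")))
    b[0] ^= 0x02  # flip the universal/local bit
    return bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])

mac = "00:1a:2b:3c:4d:5e"  # made-up example MAC
prefix = ipaddress.ip_network("2a01:4f9:c011:841a::/64")  # example /64

iid = eui64_interface_id(mac)
addr = ipaddress.ip_address(
    int(prefix.network_address) + int.from_bytes(iid, "big")
)
print(addr)  # 2a01:4f9:c011:841a:21a:2bff:fe3c:4d5e
```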

Problem solved, right?

No.

Despite being around for 30 years (almost as old as Gopher!), most people still do not have access to an IPv6-capable network:

10 Users don't have IPv6 support
20 ... so websites are forced to support (ancient) IPv4.
30 ... and users don't notice they are missing anything.
40 ... and don't complain to their ISP
50 GOTO 10

Because of this, even though the solution has existed ~forever, bad decisions from the 1980s continue to make your internet connection worse and more expensive.

To help break out of this cycle, I've decided to remove IPv4 support on my site. Cutting off most of my readers is a bit harsh, so it'll only be disabled for one day each month:

The 6th will now be IPv6 day.

Any attempts to access my site over IPv4 will yield a message telling you that your network still doesn't support a 30-year-old standard. If you really want to access my site on the sixth, use your phone. All major cell carriers have long since caught up with the times (because giving each device its own address improves performance).
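The blocking rule itself is simple to express. A minimal sketch of the policy (not the site's actual server code; the function name, response strings, and status codes are invented for illustration):

```python
import ipaddress

def ipv6_day_response(client_addr: str, day_of_month: int) -> str:
    """On the 6th of the month, refuse clients connecting over IPv4."""
    ip = ipaddress.ip_address(client_addr)
    if day_of_month == 6 and ip.version == 4:
        return "403: your network still doesn't support a 30-year-old standard"
    return "200: welcome"

print(ipv6_day_response("65.109.172.162", 6))         # refused: IPv4 on the 6th
print(ipv6_day_response("2a01:4f9:c011:841a::1", 6))  # served: IPv6
print(ipv6_day_response("65.109.172.162", 7))         # served: any other day
```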

Obviously, one website going dark makes no difference on its own. For this to work, a lot of people have to do it. If you have a website where 97% uptime is tolerable, please consider doing this.

If downtime is too much for you, how about a banner that warns about IPv4?


Read the whole story
mrmarchant
1 hour ago
reply
Share this story
Delete

Many anti-AI arguments are conservative arguments


Most anti-AI rhetoric is left-wing coded. Popular criticisms of AI describe it as a tool of techno-fascism, or appeal to predominantly left-wing concerns like carbon emissions, democracy, or police brutality. Anti-AI sentiment is surprisingly bipartisan, but the big anti-AI institutions are labor unions and the progressive wing of the Democrats.

This has always seemed weird to me, because the contents of most anti-AI arguments are actually right-wing coded. They’re not necessarily intrinsically right-wing, but they’re the kind of arguments that historically have been made by conservatives, not liberals or leftists. Here are some examples:

  • Many AI critics complain that AI steals copyrighted content, but prior to 2023, leftists were largely anti-intellectual-property on principle (either because they're anti-property, or because they characterize copyright as benefiting huge media corporations and patent trolls).
  • A popular anti-AI-art sentiment is that it’s corrosive to the human spirit to consume AI slop: in other words, art just inherently ought to be generated by humans, and using AI thus damages some part of our intangible human soul. Whether you like this argument or not, it’s structurally similar to a whole slate of classic arguments-from-intuition for conservative positions like anti-abortion or anti-homosexuality.
  • Weird new technological art has traditionally been championed by the left-wing and dismissed by the right-wing (as inhuman, cheap, or degenerate). But when it comes to AI art, it’s the left-wing making these arguments, and others (not necessarily right-wingers) arguing that AI art can also be a medium of human artistic expression.
  • One main worry about AI is that it’s going to take over a lot of jobs. This is a compelling argument! But the left-wing has recently been famously unsympathetic to this same argument around fossil-fuel energy jobs like coal mining, to the point where Biden infamously advised a group of miners in New Hampshire to learn to code1. Halting technological progress to preserve jobs is quite literally a “conservative” position.

On top of all that2, frontier AI models themselves are quite left-wing. Notwithstanding some real cases of data bias (most infamously, Google's image model miscategorizing dark-skinned humans as "gorillas"), the models reliably espouse left-wing positions. Even Elon Musk's deliberate attempt to create a right-wing AI in Grok has had mixed success. In 2006, Stephen Colbert quipped that "reality has a well-known liberal bias." If the left-wing were more sympathetic to AI, I think they would be using this as a pro-left argument3.

So what happened? A year ago I wrote Is using AI wrong? A review of six popular anti-AI arguments. In that post I blame the hard right-wing turn many big tech CEOs made in 2024. That was around the same time that LLMs were emerging in the public consciousness with ChatGPT, so it made sense that AI got tagged as right-wing: after all, the billionaires on TV and Twitter talking about how AI was going to change the world were all the same people who'd just gone all-in on Donald Trump. I still think this is a pretty good explanation - just unfortunate timing - but there are definitely other factors at play.

One obvious factor is the hangover from the pro-crypto mania of 2021 and 2022, where many of the same tech-obsessed folks also posted ugly art and talked about how their technology would change the world forever. Few of these predictions came true (though cryptocurrency has indeed changed the world forever), and it’s understandable that many people viewed AI as a natural continuation of this movement.

On top of that, Donald Trump himself has come out strongly pro-AI, both in terms of policy and in terms of actually posting AI art himself. This naturally creates a backlash where anti-Trump people are primed to be even more anti-AI4. Here are some more reasons:

  • AI has real environmental impact (though this is often wildly overstated, as I say here), and the right-wing is politically committed to downplaying or denying anthropogenic environmental impacts in general.
  • When times are tough, it’s easy to blame the hot new thing that everyone is talking about. Because the right-wing is currently ascendant in the US, left-wingers are more inclined to talk about how tough times are.
  • The left-wing is over-represented in the kind of “computer jobs” that are under direct threat from AI.
  • Being pro-Europe has always been left-wing coded, and Europe has been noticeably slower and more sceptical about AI than the USA.

Let me finally put my cards on the table. I would describe myself as on the left wing, and I’m broadly agnostic about the impact of AI. Like the boring fence-sitter I am, I think it will have a mix of positive and negative effects. In general, I’m unconvinced by the pro-copyright and human-soul-related anti-AI arguments, or by the idea that AI is inherently right-wing, but I’m troubled by the environmental impact and the impact on jobs (which in my view are more classically left-wing positions).

Still, I’m curious what will happen when the left-wing flavor of anti-AI rhetoric disappears, which I think it will (as I said at the start, anti-AI sentiment is actually pretty bipartisan). When people start making explicitly right-wing anti-AI arguments, will that cause the left-wing to move a little bit towards supporting AI? Or will right-wing institutions continue to explicitly support AI, allowing anti-AI sentiment to become a wedge issue that the left-wing can exploit to pry away voters? In any case, I don’t think the current state of affairs is particularly stable. In many ways, the dominant anti-AI arguments would fit better in a conservative worldview than in the worldview of their liberal proponents.


  1. I don’t think any did, which is probably for the best - they would have only had a couple of years to break into the industry before hiring collapsed in 2023.

  2. Another point that isn’t quite mainstream enough but that I still want to mention: AI critics often argue that cavalier deployment of AI means that people might take dangerous medical advice instead of simply trusting their doctor. But anyone who’s been close to a person with chronic illness knows that “just trust your doctor” is kind of right-wing-coded itself, and that the left-wing position is very sympathetic to patients who don’t or can’t. In a parallel universe, I can imagine the left-wing arguing that patients need AI to avoid the mistakes of their doctors, not the other way around.

  3. Is it a good argument? I don’t know, actually. The easy counter is that the LLMs are just mirroring the biases in their training data. But you could argue in response that superintelligence is also latent in the training data, and that hill-climbing towards superintelligence also picks up the associated political positions (which just so happen to be left-wing).

  4. I am no fan of Donald Trump, but it doesn’t follow that everything he supports is bad (e.g. the First Step Act).


Stuck Character Service


"It's still early days," some guy splutters in anger, peeved that people like myself are cackling loudly at Sal Khan's admission that Khanmigo has been a big flop. "It's simply too soon to say" anything about "AI" in education, he insists, posting angrily at the sheer audacity that someone, anyone (but specifically Dan Meyer and also specifically me) would dare offer an "RIP" for Khanmigo and for those "edtech industry dreams of AI tutors."

"Someday soon" we will have robot tutors, this guy vows.

And perhaps someday, god help us, we will – not because the technology will ever be all that good but because technological autocrats will give us no choice, because we will have abandoned public education for some promise of instantaneous individualization – maybe for our kids but much more likely for those ones.

I don't believe that "AI tutors" are the future, to be clear. Most people don't, and increasingly they are questioning this weird vision of their kid being trained in some Ender's Game click-factory.

"AI tutors" are not the future; they're the past.

"It's early days" – people toss out that cliche all the time to defend the failures of "AI" to work well, to be embraced, to make money, and so on. But it's not early days. Not remotely. We've been trapped in Sam Altman's ChatGPT hustle for almost five years now. More broadly, the field of "AI" – in education or otherwise – can only be described as nascent if all technologies that emerged in the Cold War era – the dishwasher, the television, the intercontinental ballistic missile, the chatbot, for example – are also still in their infancy.

Several of the earliest "AI" researchers – Marvin Minsky, Seymour Papert, Roger Schank, Herbert Simon – were interested in human learning, not just machine learning. And they (and their grad students) then went and built computerized systems that tried to teach. The first intelligent tutoring systems were developed in the late 1960s. "AI tutors" are older than Mark Zuckerberg; they're older than Marc Andreessen.

Alas, some people do love to ignore or dismiss history (ironically, many doing so while also embracing "AI," which is in its own way just an amassment of historical data algorithmically re-presented). I'd quip that these folks have the memory of a goldfish, but apparently goldfish can actually remember things for up to a couple of weeks; whereas some folks can read the admissions on one day that Khanmigo didn't really revolutionize education and then turn around, just a few days later, and not blink at all at the news that Khan Academy, along with ETS and TED, is going to revolutionize education.

The brain fog here isn't just this strange, impenetrable aura of awesomeness surrounding Khan Academy either. (Having very deep financial resources along with the backing of industry and media surely helps keep many people duly reverent.)

This new venture, the Khan TED Institute, will supposedly offer a competency-based bachelor’s degree in "AI" “for as little as $10,000.”

Sound familiar? Yeah, it's an initiative that sure sounds a lot like all the coding bootcamps that sprang up a decade or so ago, when the "jobs of the future" were all purportedly going to be in software development. Remind me, what happened to that trend?

While some bootcamps did partner with colleges, the primary thrust of the initiative was to bypass the university – part of that larger Silicon Valley narrative (ugh, again with those technological autocrats) that traditional educational institutions and practices were too slow. Too human (too humanities). Too feminized. Too impractical. Too "woke." Silicon Valley has been more than happy to join forces with those sowing discord (plenty of it, frankly, well deserved) about the costs and content of higher education. As Jeppe Klitgaard Stricker observes, this new Khan TED Institute plans to "reimagine higher education without it." While partners include Google and Microsoft and Bain and McKinsey (LOL. Consultants), there is not a single university involved.

For this and for many obvious reasons, KhanTED is, in the words of John Warner, "bullshit." Bullshit yet again. He is right that Khan's "history of failure relative to his stated intentions is both instructive and encouraging"; but I still worry, because these sorts of projects are precisely the exploitative, deceptive (and sometimes expressly fraudulent) visions of the future of education (and ed-tech) that actively serve to make things worse for students, and often the most vulnerable ones at that.


Ben Riley has published a very useful “illustrated guide to resisting 'AI is inevitable' in education,” which points to recent research and recent news. (Still more research and news from the past week alone: “AI Assistance Reduces Persistence and Hurts Independent Performance.” "Delegation to artificial intelligence can increase dishonest behaviour." “The Deepfake Nudes Crisis in Schools Is Much Worse Than You Thought.” “How AI Disrupts the Teen Mental Health Field.")

Do also please read The New York Times profile of Ben and his father: “He Warned About the Dangers of A.I. If Only His Father Had Listened.” It's incredibly heartbreaking and enraging, and just one of so many stories of how this technology is tearing us apart.

(I still feel like the "'AI' psychosis" discussion is missing the mark and failing to capture the larger social shifts around sociopathy that digital technologies encourage. I really will try to finish up some writing on that.)


On Tuesday, I published an essay (for paid subscribers, hint hint) on "The Productivity Software Way of Thinking" -- what was meant to be a 20-minute talk, but ugh. COVID-related cancelation.

I haven't stopped stewing about some of these ideas (as one does when one actually reads and writes and thinks rather than relying on the fancy autocomplete to perform a sad parody of these tasks; but also, as one does when one is feverish with COVID).

As I noted in my remarks, I think it's significant that "AI" is being injected into the productivity suite of products. "AI" is, after all, at its core about squeezing more productivity out of workers. But it's also the site in which the artifacts of knowledge production have been, well, produced for the past few decades. And now, under the guise of the "unreasonable effectiveness of data," you get a product like Notebook LM, in which users (students and teachers and principals and professors alike) believe that "the answers" will magically emerge, without theory or thinking.

Why is it that the most vocal cheerleaders of generative A.I. are always the hackiest motherfreakers around? – Colson Whitehead

I'll be at the Humans First: Adolescent Education in the Age of AI conference in Atlanta this summer. This is going to be a stellar event – no vendors! no sponsors! – focused on "keeping human formation — not efficiency or automation — at the center of conversations around student development as we examine the social and ethical implications of AI."



Today's bird is the common kestrel (Falco tinnunculus), a.k.a. the European kestrel, the Eurasian kestrel, the Old World kestrel, or, in countries like the UK where there are no other related birds, simply the kestrel. The species name tinnunculus comes from the Latin tinnulus, or "shrill." While the kestrel is notably smaller than other birds of prey, it was once – according to Wikipedia at least – known as the "windfucker," because of its habit of hovering while hunting.

Thanks for subscribing to Second Breakfast.


“Use links, don’t talk about them.”


The classic – but still important – rule of web design says to avoid labeling links “click here.”

It’s one of the oldest web design principles. Tim Berners-Lee wrote about it in 1992; if you visit this link right now, it might be the oldest page you will have ever visited.

The gist of it is simple: the mechanics of following a link are not important, and should be replaced by something that can make the link stand on its own. This is important for screen readers, but also for basic scannability: a “click here” label has a lousy scent and requires you to take in the surroundings to understand what it really does. The rule is, in effect, a variant of “show, don’t tell.”

(In modern days, you can also add another transgression: on touch devices one cannot click, but only tap.)

There is a similar rule about button copy. Button labels, too, should stand on their own. Below is a good example (just reading the button lets me understand what I’ll achieve by clicking it), juxtaposed with a bad one (“OK” is so generic you have to read the rest of the window).

Earlier this week, I was passing some train cars on my coffee walk, and saw this bit of UI:

Why are these okay, and “click here” is not? Here’s why, I think: Yes, the ultimate goal is to move a train car, or empty it, or send it on its way. But here, the mechanics matter, too. They’re dangerous. They require preparations. No one says “I’m going to open my laptop and start clicking on links,” but I imagine people say “we have to jack this car” or “we need to lift it.” Even “here” has depth: these are specific tool mounting points. Choosing the wrong “here” will have consequences.

But, going back to the web, avoiding “click here” in strings isn’t always easy. Imagine trying to put a link in the sentence “To change your avatar, visit the profile page.” I’m personally never sure how to linkify it well:

To change your avatar, visit the profile page. [link on “change your avatar”]
To change your avatar, visit the profile page. [link on “visit the profile page”]
To change your avatar, visit the profile page. [link on the entire sentence]

Linking “change your avatar” seems correct since it points to the eventual outcome, but then it leaves the actual destination dangling and unlinked – like putting an accent on the wrong syllable. “Visit the profile page” is better than “click here,” but it’s still not scannable. Linking the entire sentence seems strange and complicated to me, and I also disagree with Tim Berners-Lee, who on the page I linked to above seems to suggest this should be…

To change your avatar, visit the profile page. [separate links on “change your avatar” and “the profile page”]

…just because this might make a user think there are two separate destinations and actions, and contribute to a wrong mental model.

You could, of course, simplify this to “Change your avatar,” but while that would work in a UI string, it wouldn’t within a larger paragraph of text, or a blog post.

#details #flow #web


Anarchic Cats Are Ensnared in Chaos in Léo Forest’s Dynamic Drawings


Feline antics are notoriously chaotic. “The cat is, above all things, a dramatist,” author and Egyptologist Margaret Benson is said to have remarked. Sacred to ancient Egyptians, domestic cats share more than 95% of their genetic makeup with tigers, and they can leap five times their height and turn into veritable spring mechanisms when startled. Also, would the Internet be the same without cat memes? For Léo Forest, these lovable, independent, wily, and territorial creatures provide an endless source of inspiration for dynamic pencil drawings.

The Paris-based artist’s playful works tap into the physical and emotional quirks of cats, from brawling pairs to individuals in the midst of grooming, scratching, or attacking. Flailing limbs and blurred motion evoke Italian Futurist painter Giacomo Balla’s seminal painting, “Dynamism of a Dog on a Leash” (1912), in which a Dachshund and its owner’s feet are fuzzily multiplied to imply very quick movement.


Forest is currently working toward a project with Moosey in London, where prints are available. Follow him on Instagram for updates.


Do stories and artists like this matter to you? Become a Colossal Member today and support independent arts publishing for as little as $7 per month.


What's Inside the World's Fastest Marathon Shoes

CT scans of the Nike Alphafly 3 and Adidas Pro Evo 1 reveal the carbon plate, air pod structure, and unexpected voids inside marathon super shoes.
