
Winners of the 2026 Kokuyo Design Awards

The Kokuyo Design Awards, arguably Japan’s most prestigious stationery design award, has been held for almost a quarter of a century now. Hosted by the 120-year-old stationery firm KOKUYO, the award receives close to 1,500 entries each year for new products that have yet to be commercialized, with winning concepts given the opportunity to become […]

Not Normal


A pair of broken-off statue legs, shod in Roman sandals, atop a cliff. Behind them, we see a futuristic city.

This week on my podcast, I read Not Normal, my latest Locus Magazine column, about the surreal and terrible world we’ve been eased into thanks to anti-circumvention laws.


If you were paying attention in 1998, you could see what was coming. Computers were getting much cheaper, and much smaller. From cars to toasters, from speakers to TVs, we were shoveling them into our devices, and it doesn’t take a lot of expense or engineering to add an “access control” to any of those computers.

That meant that DMCA 1201 was about to metastasize. Once you put a computer into a thermostat or a bassinet or a stovetop or a hearing aid, you can add an access control and make it a felony to use it in ways the manufacturer disprefers. You can make it illegal to use cheap batteries, or a different app store. You can add little chips to parts – everything from a fuel pump to a touchscreen – and make it illegal to manufacture a working generic part, because the generic part has to bypass the “access control” in the device that checks to see whether it’s the manufacturer’s own part.
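The part-check described above is typically a challenge-response handshake: the device sends a random challenge, and only a part holding the manufacturer’s secret key can compute the correct keyed MAC in reply, so a generic part fails even if it is functionally identical. This is a generic sketch of that pattern, not code from any actual product; every key and name here is invented for illustration:

```python
import hashlib
import hmac
import os

# Hypothetical secret burned into "official" replacement parts.
MANUFACTURER_KEY = b"secret-burned-into-official-parts"

def part_respond(challenge: bytes, key: bytes) -> bytes:
    """A part proves itself by MACing the device's random challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def device_accepts(challenge: bytes, response: bytes) -> bool:
    """The device recomputes the MAC and compares in constant time."""
    expected = hmac.new(MANUFACTURER_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)
# An official part passes; a generic part without the key is rejected.
assert device_accepts(challenge, part_respond(challenge, MANUFACTURER_KEY))
assert not device_accepts(challenge, part_respond(challenge, b"generic-part-key"))
```

Bypassing that check is the step anti-circumvention law reaches: the generic part works fine, but answering the handshake without the key means defeating an “access control.”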



CBP facility codes sure seem to have leaked via online flashcards


A user on Quizlet, an online learning platform, created a public flashcard set in February that appears to have exposed highly confidential information about security procedures in US Customs and Border Protection facilities around Kingsville, Texas.

The Quizlet set, titled “USBP Review,” was available to the public until March 20, when it was made private less than half an hour after WIRED messaged a phone number potentially linked to the Quizlet user. Though an individual with the user’s name was listed at an address of an apartment less than a mile from a Kingsville CBP facility, WIRED has not been able to verify that the flashcard set was created by an active CBP agent or contractor.

“This incident is being reviewed by CBP’s Office of Professional Responsibility,” a CBP spokesperson wrote in a statement to WIRED. “We will not be getting ahead of this review. A review should not be taken as an indication of wrongdoing.”


I Feel Like I’m Going Insane


For every action, there must be an equal and opposite reaction. Sir Isaac Newton said this, presumably after a long day of scrolling whatever the social media platform of the time was. To wit, the moment I finished reading an excellent essay by Marisa Kabas about refusing to accept an AI-poisoned future of journalism, I encountered the following headline: “Esquire AI-Generated A Fake Interview With Live-Action One Piece Actor Mackenyu Because He Was Busy.” Either I’m losing my mind, or the world is. 

This isn’t one of those cases where a publication quietly leaned on AI and only copped to it after getting called out. Esquire Singapore fully admitted what it was doing when it published the story and used it as a supposed selling point! From the piece:

We were stoked to have some face time with the Japanese-American actor, but his schedule prevented it. So, we opted for e-mail correspondence. A list of queries was sent his way, and we waited. The silence continued until it was quickly replaced by a ticking clock as deadlines loomed.
We had the photospread, but nothing directly uttered by the 29-year-old. With a driving need for a feature, we had to be inventive. Harnessing our creative license, we pulled his verbatim from previous interviews and fed them through an AI programme to formulate new responses.
Are these the words we expect from Mackenyu? Or are they just replies from an echo chamber of celebrity-hood that we want to believe is from him?

Clearly the latter, you fools! You hacks! You credulous dipshits! 

This groundbreaking new approach to lying produced riveting exchanges like:

ESQ: Any advice on how to deal with pressure and expectations?
(AI) M: I separate pressure from weight. Pressure is external; like people's expectations. That I can't control, but the weight of family legacy… the goal isn’t to match my father. It's to make him proud, and maybe inspire someone else to do the same. Pressure can crush you, but weight can ground you.

And:

ESQ: What has fatherhood taught you?
(AI) M: That you can't rehearse it. (laughs) Everything else in my life I can prepare. Fatherhood has no script. No second take. You're just there, and you figure it out in real time. It's humbling in a way nothing else is.

You were talking to a chatbot! It did not laugh! Shut the fuck up! Also, as Kotaku notes, the chatbot certainly never knew Mackenyu’s father, deceased action star Sonny Chiba, and I cannot think of a single person in their right mind who would ask a predictive text generator a question so probing and personal about the feelings of a human being who’s very much still alive. That is deranged behavior! To be clear, I believe, in no uncertain terms, that the person who wrote this is deranged!

I cannot believe I even have to say this, but if you’re trying to publish an interview feature, and you’re unable to procure the interview in question, then you scrap the story. There is no “driving need” for a piece that supersedes that. The world was not crying out for this essential dollop of PR fluff. You can find interviews with Mackenyu, specifically, on numerous websites and, of course, YouTube. If anything, all Esquire has demonstrated here is that this kind of journalism matters so little that it can be farmed out to a robot homunculus and still pass muster.

It is bonkers to me that anyone thought this was a good idea—let alone that multiple people (if we include editors) presumably did. They should all hang their heads in shame forever, quit their jobs immediately, and give them to a few of the thousands of vastly more deserving reporters who, in a twist of fate that borders on maniacal, are currently out of work. These people would be better served casting away their old lives and embarking on a journey to find the actual One Piece, a treasure I’m well aware is fictional. Despite that rather substantial stumbling block, they would still find more success in that arena than in this one. 

This is what happens when AI rots journalists’ brains to the point that they can’t discern the difference between a good idea and a terrible one—to the point that they can only conceive of angles that involve AI.

With that in mind, a salient section from Kabas’ piece:

If you want to work with machines, get a job that requires it. There are a whole lot more of those than there are writing jobs, so free up space for people who actually want to do the work. You’re not doing the world a favor by gifting it your human/AI hybrid. Journalism will not miss you if you leave. No one is making you be a journalist; it’s not one of those jobs parents force you to choose, like a doctor or a lawyer. Journalism, while romanticized in popular culture, is generally unglamorous and poorly paid, with progressively worse job opportunities (no thanks to AI.) I’m careful not to refer to it as a calling because that seems to excuse sacrificing mental health in service of craft, but I do believe that it’s a job that can’t be forced. It’s obvious to readers when your heart isn’t in it.

AI chatbot use can hinder students’ knowledge retention


Students who use AI tools extensively may struggle with knowledge retention, according to new research.

Brazilian social scientist Andre Barcaui looked at two groups of students, one using ChatGPT as a study aid and the other using more traditional methods, before giving them a surprise test after 45 days. He found that those who had depended on AI scored an average of 57.5 percent on a knowledge retention test, compared to an average of 68.5 percent for those who had studied traditionally.

This randomized controlled trial showed that unrestricted use of ChatGPT as a study aid can impair long-term knowledge retention, Barcaui said in the conclusion of his paper, “ChatGPT as a cognitive crutch: Evidence from a randomized controlled trial on knowledge retention.” “Students who learned without AI retained substantially more information after 45 days than those who used ChatGPT.” He pointed out that the 11-percentage-point performance gap was a significant differential.
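For scale: the gap between the two group means quoted above is 11 percentage points, which works out to roughly a 16 percent relative shortfall for the ChatGPT group. A quick check, using only the figures reported in the story:

```python
ai_score = 57.5           # mean retention score, ChatGPT group
traditional_score = 68.5  # mean retention score, traditional-study group

# Absolute difference between the group means, in percentage points.
gap_points = traditional_score - ai_score

# The same gap expressed relative to the traditional group's score.
relative_gap = 100 * gap_points / traditional_score  # ~16.1 percent lower
```

So “11 percent” here is really 11 percentage points; the relative decline is larger.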

A survey of around 10,000 teachers by the British National Education Union found other deleterious effects of AI usage: two-thirds of secondary-school teachers (66 percent) thought pupils’ critical thinking had declined due to AI use.

The research will dismay AI enthusiasts who have been promoting AI as an essential tool in education, although some observers have noted a need to be circumspect about how it is used.

Critics will point out that knowledge retention is just one part of education, and that in-depth knowledge of AI will be of more help in the workplace.


Claude Code codebase is leaked


Security researcher Chaofan Shou tweeted yesterday morning that the source code for Anthropic’s Claude Code agent had leaked: [Twitter, archive]

Claude code source code has been leaked via a map file in their npm registry!

Anthropic included a source map — a debugging file — in the Claude Code NPM package. This can turn the minified code in the package back into the original source code.

The leak was 1,900 files with 512,000 lines of code.
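This kind of recovery works because a source map is just a JSON file, and when its optional `sourcesContent` field is populated, the original pre-minification files are embedded in it verbatim. A minimal sketch of pulling them out, assuming a hypothetical `.map` file on disk (this is the general source-map mechanism, not the specific tooling anyone used on the Claude Code package):

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str = "recovered") -> list[str]:
    """Dump any original files embedded in a source map's sourcesContent."""
    sm = json.loads(Path(map_path).read_text())
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    written = []
    for name, content in zip(sm.get("sources", []), sm.get("sourcesContent") or []):
        if content is None:
            continue  # source referenced by the map but not embedded in it
        dest = out / Path(name).name  # flatten directory structure for the sketch
        dest.write_text(content)
        written.append(str(dest))
    return written
```

If `sourcesContent` had been omitted, the map would only let you de-minify names and positions; with it included, shipping the `.map` file is equivalent to shipping the source tree.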

Anthropic said: [Bloomberg, archive]

This was a release packaging issue caused by human error, not a security breach.

The AI cannot fail — it can only be failed. Human errors can also be security breaches. If your release system makes this sort of error possible at all, that’s on you.

But then, Anthropic vibe coded this system. What do we expect.

Everyone’s been looking through this code dump to see what it does. Duke of Germany on Mastodon said: [Mastodon]

After looking at the code, my understanding of how Claude works: “Throw insane amounts of compute at some developer fan fiction and hope for the best.” Did I get that right?

Yep, that’s about right. There’s bits of actual code in there. But most of it is prompts that plead with the bot not to screw it up this time.

Claude Code’s creator, Boris Cherny from Anthropic, tweeted last month that Claude Code is vibe coded: [Twitter, archive]

Can confirm Claude Code is 100% written by Claude Code.

That puts Claude Code’s copyright status in serious doubt. You cannot copyright AI output in the US.

Anthropic is sending DMCA notices to get copies of the repository taken down. Claiming copyright on uncopyrightable material is fraudulent, and it’s perjury if you do it in a DMCA notice. If you get one of these, you might want to counterclaim accordingly. [WSJ, archive]

Also, whatever code the chatbot originally stole from is likely under a variety of other licenses. So Anthropic may have violated those copyrights.

Of course, a pile of free vibe code is worth less than zero as code. The only use for this pile is working out what nonsense Anthropic thinks is production machinery.

  • There’s an instruction not to write any security holes. I’m sure that works great.
  • You can’t use Claude Code to write hacking tools! Unless you tell it you’re a security researcher. Then it’s happy to help.
  • There’s an “undercover” mode, which you use when you want to send slop to a public project without them realising you’re using a bot. This is specifically for use against public projects. Anthropic knows what they’re doing here. This is reason enough for projects that bar AI to also bar all Anthropic employees.

Claude Code sends all your stuff to Anthropic: [Register]

“I don’t think people realize that every single file Claude looks at gets saved and uploaded to Anthropic,” the researcher “Antlers” told us. “If it’s seen a file on your device, Anthropic has a copy.”

Can you take this code leak and run Claude Code locally, without paying Anthropic? Sure, just point it at a local model instead of the Claude API. It’ll be super-slow unless you spend enough money to match the performance of the Claude API. But I’m sure there are a lot of people who are trying just that thing right now.

In the past few months, we’ve seen a slew of formerly respected software engineers who try the bot, and it one-shots them, and they start posting 2000-word tweets about how awesome Claude Code is, it’s the future of coding, don’t be left behind! And they never show you testable numbers or anything. Trust me, bro.

People who’ve been forced to touch Claude Code at work tell me it’s noticeably more sycophantic than older models. Claude Code really wants to make you feel good about vibe coding.

But also, Claude Code is leaning hard into gambling addiction — the “Hooked” model. You reward the user with an intermittent, variable reward. This keeps them coming back in the hope of the big win. And it turns them into gambling addicts.
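The “Hooked” loop described here is what behavioral psychology calls a variable-ratio reinforcement schedule: the payoff arrives unpredictably, and on such a schedule the expected number of attempts per payoff is 1/p, yet the user keeps pulling because the next attempt might always be the big win. A toy simulation of the schedule, with the reward probability invented purely for illustration:

```python
import random

def pulls_until_reward(p: float, rng: random.Random) -> int:
    """Count attempts until a variable-ratio schedule finally pays off."""
    n = 1
    while rng.random() >= p:  # each attempt independently pays off with probability p
        n += 1
    return n

rng = random.Random(0)  # fixed seed so the sketch is reproducible
trials = [pulls_until_reward(0.25, rng) for _ in range(10_000)]
mean_pulls = sum(trials) / len(trials)  # clusters near 1/p = 4 attempts per payoff
```

The unpredictability is the point: a fixed every-fourth-time reward is easy to walk away from, while the same average payoff delivered at random is not.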

Jonny from Neuromatch describes how Claude Code works, looking at the codebase: [Mastodon]

This is an important feature of the gambling addiction formulation of these tools: only the margin matters, the last generation … The intermediate comments from the LLM where it discovers prior structure and boldly decides to forge ahead brand new are also part of the reward cycle: we are going up, forever. Cleaning up after ourselves is down there.

Jonny compares Claude Code to exploitative pay-to-win mobile games. Addiction loops. Anthropic’s gamified vibe coding.

Claude Code is expensive Candy Crush, but it tells you you’re being productive. As it teaches you to forget how to code. Just keep paying Anthropic.

Remember: every day is AI Fool’s Day.
