
Contra Machinam: An Appeal for an AI Resistance


I observe and interact with humans between the ages of eighteen and twenty-five almost every day of my life, as part of my job. Like many, when AI chatbots were maliciously (indiscriminately is too kind of a word) released into the wild several years ago, I watched in horror—the kind of horror one might feel watching a pod of orcas play with a baby seal before ripping it to shreds. Only in this case the seal believes itself to be the one having all the fun. That young people were attracted to an openly available technological wonder is not surprising, though it was disconcerting to witness. What was truly shocking to me in that first year—and still is—was the almost complete lack of resistance by those charged with educating these young souls. Half-hearted appeals for deliberate discernment in the adoption of AI into higher education quickly dissipated and joined the chorus of administrators recommending “responsible use,” a phrase that unintentionally treats AI as a harmful substance, yet one we can’t expect young people to abstain from using.

My own view from the beginning of this new era of machine domination has been that tools of such power require their users to have an extremely high degree of maturity and responsibility. But the whole growth model of corporate AI tools requires as much input and interaction as possible from as many users as possible. Thus, it came to be that perhaps the most powerful tool ever created by humanity fell into the untried hands of a generation known for, among other things, eating Tide pods. Not that all the blame for the irresponsible use of AI falls on younger generations. Aspirations to transgress the bounds of natural limitations have plagued humanity from the beginning. “Come, let us build for ourselves a city, and a tower with its top in the heavens, and let us make a name for ourselves,” Babel’s vision-casting team said. The builders and sellers of AI—futurists, pioneers, entrepreneurs, and engineers, along with their funders—will be the most culpable for harm caused, especially if products are designed to entice and addict.

In the many conversations I have had on this topic over the past three years, most of my interlocutors have taken either a neutral or a positive stance toward AI, though of late I have seen some who were previously unconcerned begin to grow wary. The majority of those I speak with on the topic, however, have given little thought to the potential downsides of AI. Few have made the effort to subject their use of AI to moral scrutiny, and fewer still have dared to ask whether any and all use of AI may implicate the user in some moral evil. Yet there are a multitude of reasons not merely to approach AI with caution but to engage in determined opposition to it. In what follows I offer a handful of what I take to be the weightier among these reasons, and I invite those who read this to consider joining the resistance.

The most common reactions I hear in response to my rejection of AI have to do with all the real or imagined goods AI might help bring about: medical advancement, all manner of scientific research, eliminating dull and time-consuming tasks in human work, and so on. Virtually everyone acknowledges the risks of AI and the ways it can be used for harm rather than good, but these are taken as the price that must be paid to attain the good. AI is simply a tool, they say, and like all tools it can be used for good or for ill. It would be ludicrous to get rid of the tool just because some bad actors abuse it. This line of thought fails in at least two ways.

First of all, it fails to recognize a distinction between moral evil and natural evil. Natural evils are things we deem bad—usually due to the loss of life—but for which we are not culpable: natural disasters, diseases, etc. Moral evils are things we deem bad and hold people responsible for. Moral evils are always more grave for the souls involved than natural evils, even if the scale seems smaller. AI may help attenuate natural evils by finding cures for diseases or drastically improving disaster prediction, for example. But the capability and availability of AI have already drastically increased moral evil. This is not a net gain for humanity. To put it bluntly, it is better to suffer natural evils and possess virtue than to eradicate natural evils and lack virtue. Ends never justify means.

Second, although AI can be thought of as a tool, it is unlike any other tool due to its ability to “self-improve” based on human input and interaction along with ever-expanding data sets. If a person uses a hammer to smash the skull of an innocent other, this is clearly a case of a tool being misused. But the hammer undergoes no change; through the act of someone using it to commit murder, the hammer itself does not become a better weapon. With AI this is not the case. As more and more people use AI tools, the tools themselves change and get better at doing what their users ask. Which means even supposedly harmless uses of AI actively contribute to the tools becoming better at bringing ill will to fruition. Millions of users, for example, creating relatively innocuous fake videos for work or for fun contribute to the tool becoming better at making realistic child pornography. Again, this is not a tradeoff we should be willing to tolerate.

In addition to these moral questions pertaining to the very existence and use of AI, there are moral concerns pertaining to the peripheries of the AI complex. A prime example of this is the investments being poured into creating an AI-integrated world. Hundreds of billions of dollars have already been spent on developing and deploying AI tools, and there are only signs that this will increase in the near future. If achieving an AI-integrated world were a matter of meeting critical needs, perhaps such expenditures would be justified. But there are no critical needs in the world that can only be addressed with AI. Poverty, violence, famine, loneliness, lack of healthcare, environmental collapse—these are needs which, in order to be met, require human and financial resources, not AI. Because these critical needs have not been met, it is unjust to direct such vast resources toward something which, though technologically revolutionary, is quite frivolous in relation to the realities of human existence. We do not need AI to flourish as humans. We do need peace, healthy communities, clean water, fertile soil, and a dependable food supply, all of which AI cannot supply.

The environmental cost of AI is something to consider as well, working on the assumption that keeping this planet perpetually fruitful and safe to inhabit is important for humanity. Massive data centers are required to keep AI running, along with cloud computing, streaming, and a host of other digital commodities, and each data center requires enormous supplies of electricity and water, not to mention actual land. Data centers collectively are becoming one of the leading sources of carbon emissions through their electrical consumption, and newer “hyperscale” centers are expected to require more water than current ones. On a planet that was already facing environmental crises before the rise of AI, surely AI’s energy, water, mined-material, and land demands are going to make it even more difficult to address these crises, especially since AI seems to have quickly been deemed a necessity by corporate and political powers. Care for our common home ought to at least cause us to question whether this is the best path forward for humanity.

There are a host of other reasons to reject the use of AI outright, such as the minor issue of a possible machine-assisted human extinction event, but I will offer one final thought. The use of AI degrades the value of human work and so of humans themselves. Those who tout the benefits of AI, like most advocates of machine technology, view natural human limitations as weaknesses to be transcended if possible. This attitude is of course a derivative of an economic order built on hyperconsumption. In both production and consumption, more and faster is always better. Since the introduction of machines greatly enhances production, the goodness of machines is not to be questioned. But the unmentioned corollary of this is that the productive value of a mere human without a machine is reduced to almost nothing. Being human is not enough. You and I, considered in ourselves, are of insufficient value in the minds of those pushing for an AI-integrated world, unless we amend our humanity with machines.

The effect of this in the agricultural realm is widely recognized. Even though the only things required to make land productive are human care, human or animal power, and water, a farm without machines is a near impossibility today. And the idea of farm work being done without machines is seen as something rightly relegated to some past peasantry. This same degradation of work done at an actual human scale by actual humans will come to every sector that adopts AI. Productivity will go up, value and diversity will go down, and the monoculture dullness that one sees flying over the machine-leveled fields of middle America, with its ecological and cultural costs, will characterize much of society and the economy. Yes, you will be able to escape from this ugliness into virtual worlds. But I, for one, would prefer to enjoy the beauty of the real world—the beauty that is the ordered harmony of great variety; the beauty that must be cultivated, preserved, and fought for; the beauty that can only exist when natural limits are respected; the beauty that came under threat after the industrial revolution; the beauty that could very well disappear after the AI revolution. Unless we resist.


Making RSS More Fun


I don't like RSS readers. I know, this is blasphemous, especially on a website where I'm actively encouraging you to subscribe through RSS. As someone writing stuff, RSS is great for me: I don't have to think about it, the requests are pretty lightweight, and I don't need to think about your personal data or what client you are using. So as a protocol, RSS is great, no notes.

However, as something I'm going to consume, it's frankly a giant chore. I feel pressured by RSS readers, where there is this endlessly growing backlog of things I haven't read. I rarely want to read all of a website's content from beginning to end; instead I like to jump between sites. I also don't really care if the content is chronological: an old post about something interesting isn't less compelling to me than a newer one.

What I want, as a user experience, is something akin to TikTok. The whole appeal of TikTok, for those who haven't wasted hours of their lives on it, is that you get served content based on an algorithm that determines what you might find useful or fun. What I would like is the same thing, but for content from random small websites. I want to sit somewhere and passively consume random small creators' content, then upvote some of that content, and the service should show it more often to other users. That's it. No advertising, no collecting tons of user data about me, just a very simple "I have 15 minutes to kill before the next meeting, show me some random stuff."

In this case the "algorithm" is pretty simple: if more people like a thing, more people see it. But with Google on its way to replacing search results with LLM-generated content, I just wanted something that lets me play around with the small web the way I used to.
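
That weighting really can be that simple. Here is a minimal sketch in Python of like-biased random selection, assuming (url, likes) rows pulled from the database; the names here are illustrative, not the actual code:

import random

def pick_page(rows):
    """Pick a random page, biased toward pages with more likes.

    rows is a list of (url, likes) tuples. Every page gets a base
    weight of 1 so brand-new, unvoted content still gets served.
    """
    weights = [1 + likes for _, likes in rows]
    return random.choices([url for url, _ in rows], weights=weights, k=1)[0]

# The well-liked page shows up more often, but not exclusively.
rows = [("https://example.com/a", 10), ("https://example.com/b", 0)]
print(pick_page(rows))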

There actually used to be a service like this called StumbleUpon which was more focused on pushing users towards popular sites. It has been taken down, presumably because there was no money in a browser plugin that sent users to other websites whose advertising you didn't control.

TL;DR

You can go download the Firefox extension now and try it out, skipping the rest of this post if you want: https://timewasterpro.xyz/ If you hate it or find problems, let me know on Mastodon: https://c.im/@matdevdug

Functionality

So I wanted to do something pretty basic. You hit a button, you get served a new website. If you like the website, upvote it; otherwise downvote it. If you think it has objectionable content, hit report. You have to make an account (because I couldn't think of another way to do it), and then if you submit links and other people like them, you climb a leaderboard.

On the backend I want to (very slowly, so I don't cost anyone a bunch of money) crawl a bunch of RSS feeds, stick the pages in a database, and then serve them up to users. Then I want to track which sites get upvotes and return those more often to other users, so that "high quality" content shows up more often. "High quality" would be defined by the community, or just by me if I'm the only user.

It's pretty basic stuff, most of it copied from tutorials scattered around the Internet. However, I really want to drive home to users that this is not a Serious Thing. I'm not a company, this isn't a new social media network, and there are no plans to "grow" this concept beyond the original idea unless people smarter than me ping me with ideas. So I found this amazing CSS library: https://sakofchit.github.io/system.css/

Apple's System OS design from the late '80s to the early '90s was one of my personal favorites, and I think it sends a strong signal to users that this is not a professional, modern service.


Great, the basic layout works. Let's move on!

Backend

So I ended up using FastAPI because it's very easy to write. I didn't want to spend a ton of time writing the API because I doubt I nailed the API design on the first round. I use SQLAlchemy for the database. The basic API layout is as follows:

  • admin - mostly just generating read-only reports, like "how many websites are there"
  • leaderboard - This is my first attempt at getting users involved. Submit a website that other people like? Get points, climb the leaderboard. (A rough sketch of what an endpoint in this style might look like follows below.)
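
To make the shape of this concrete, here is a minimal, hypothetical sketch of a vote endpoint and leaderboard in FastAPI. None of these routes or names come from the actual codebase; an in-memory dict stands in for the real SQLAlchemy storage:

from fastapi import FastAPI

app = FastAPI()

# In-memory stand-in for the real SQLAlchemy-backed storage.
votes: dict[str, int] = {}

@app.post("/vote/{page_id}")
def vote(page_id: str, up: bool = True):
    """Record an up/down vote for a page and return the new score."""
    votes[page_id] = votes.get(page_id, 0) + (1 if up else -1)
    return {"page_id": page_id, "score": votes[page_id]}

@app.get("/leaderboard")
def leaderboard(limit: int = 10):
    """Top pages by score, highest first."""
    top = sorted(votes.items(), key=lambda kv: kv[1], reverse=True)[:limit]
    return [{"page_id": p, "score": s} for p, s in top]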

The source for the RSS feeds came from the (very cool) Kagi small web GitHub repo: https://github.com/kagisearch/smallweb. Basically I assume that websites whose RSS feeds were submitted there are cool with me (very rarely) checking for new posts and adding them to my database. If you want the same thing this does, but as an iframe, that's the Kagi Small Web service.

The scraping work is straightforward. A background worker grabs 5 feeds, checks each one for new content, waits until 600 seconds have elapsed, and then grabs the next 5 from the smallweb list of RSS feeds. Since we have a lot of feeds, this ends up looking like we're checking each feed for new content less than once a day, which is the interval I want.
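
A minimal sketch of that polling loop, assuming the feedparser library; the de-duplication helper and all the names here are illustrative stand-ins for the real database code:

import time

import feedparser  # pip install feedparser

BATCH_SIZE = 5
INTERVAL = 600  # seconds between batches

seen: set[str] = set()

def save_if_new(link, title):
    """Stand-in for the real SQLite insert; de-duplicates by URL."""
    if link and link not in seen:
        seen.add(link)
        print("new:", title, link)

def poll_forever(feed_urls):
    """Cycle through all feeds, BATCH_SIZE at a time, one batch per INTERVAL."""
    i = 0
    while True:
        started = time.monotonic()
        batch = [feed_urls[(i + n) % len(feed_urls)] for n in range(BATCH_SIZE)]
        i = (i + BATCH_SIZE) % len(feed_urls)
        for url in batch:
            for entry in feedparser.parse(url).entries:
                save_if_new(entry.get("link"), entry.get("title"))
        # Sleep off whatever remains of the interval before the next batch.
        time.sleep(max(0.0, INTERVAL - (time.monotonic() - started)))

With a few thousand feeds in the list, 5 feeds every 600 seconds works out to each feed being revisited only every few days, comfortably under the once-a-day target.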

Then we write it all out to a SQLite database, tracking whether each URL has been reported (if so, it goes into a review queue) and how many times it has been liked or disliked. I considered a "real" database, but honestly SQLite is getting more scalable every day, and it's impossible to beat its immediate startup and functionality. Plus it's very easy to back up to encrypted object storage, which is super nice for a hobby project where you might wipe the prod database at any moment.
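
As a sketch, a table along those lines in modern SQLAlchemy might look like the following; the column names are my guesses at the shape described above, not the actual schema:

from sqlalchemy import Boolean, Integer, String, create_engine
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class Page(Base):
    __tablename__ = "pages"

    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    url: Mapped[str] = mapped_column(String, unique=True, index=True)
    likes: Mapped[int] = mapped_column(Integer, default=0)
    dislikes: Mapped[int] = mapped_column(Integer, default=0)
    reported: Mapped[bool] = mapped_column(Boolean, default=False)
    in_review_queue: Mapped[bool] = mapped_column(Boolean, default=False)

# A single SQLite file on disk; trivial to copy off to object storage.
engine = create_engine("sqlite:///smallweb.db")
Base.metadata.create_all(engine)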

In terms of user onboarding, I ended up doing the "make an account with an email, I send a link to verify the email" flow. I actually hate this flow, and I don't really want to know a user's email. I never need to contact you, and there's not a lot associated with your account, which makes this especially silly: I'm left holding a ton of email addresses with no real purpose. I'd switch to Sign in with Apple, which is great from a security perspective, but not everybody has an Apple ID.

I also did a passkey version, which worked fine, but the OSS passkey handling was still pretty rough, and most people seem to use a commercial service to handle the "do you have the passkey? Great; if not, fall back to email" flow. I don't really want to pull a big commercial login service into a hobby application.

Auth is a JWT, which was actually a pain, and I regret doing it. I don't know why I keep reaching for JWTs; they're a bad user experience and I should stop.
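
For anyone unfamiliar, the JWT flow amounts to the server signing a token at login and verifying that signature on every subsequent request. A minimal sketch with the PyJWT library; the secret and claims are placeholders, not what the service actually uses:

import time

import jwt  # pip install PyJWT

SECRET = "change-me"  # placeholder; keep the real secret out of source control

def issue_token(user_id):
    """Sign a token at login; the client sends it back on every request."""
    claims = {"sub": user_id, "exp": int(time.time()) + 3600}  # 1-hour expiry
    return jwt.encode(claims, SECRET, algorithm="HS256")

def verify_token(token):
    """Return the user id; raises jwt.InvalidTokenError if bad or expired."""
    return jwt.decode(token, SECRET, algorithms=["HS256"])["sub"]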

Can I just have the source code?

I'm more than happy to release the source code once I feel like the product is in a somewhat stable shape. I'm still ripping down and rewriting relatively large chunks of it as I find weird behavior I don't like or just decide to do things a different way.

In the end it does seem to do what's on the label. We have over 600,000 individual pages indexed.

So how is it to use?

Honestly I've been pretty pleased. But there are some problems.

First, I couldn't find a reliable way of making the keyboard shortcuts Mac/Windows specific. I found some options for querying the platform, but they didn't seem to work, so I ended up just hardcoding them to Alt, which is not great.

The other issue is that when you are making an extension, you spend a long time working with the manifest.json. The specific part I really wasn't sure about was:

"browser_specific_settings": {
    "gecko": {
      "id": "admin@timewasterpro.xyz",
      "strict_min_version": "80.0",
      "data_collection_permissions": {
        "required": ["authenticationInfo"]
      }
    }
  }

I'm not entirely sure whether that covers all the data collection I'm doing? I think so, from reading the docs.

Anyway, I built this mostly for me. I have no idea if anybody else will enjoy it, but if you are bored, I encourage you to give it a try. It should be pretty lightweight and straightforward if you crack open the extension and look at it. I'm not loading any analytics into the extension, so basically, until people complain about it, I don't really know if it's going well or not.

Future stuff

  • I need to sort stuff into categories so that you get more stuff in genres you like. I don't 100% know how to do that; maybe there is a way to scan a website with machine learning to determine the "types" of content on it? I'm still looking into it.
  • There's a lot of junk in there. I think if we reach a certain number of downvotes I might put it into a special "queue".
  • I want to ensure new users see the "best stuff" early on but there isn't enough data to determine "best vs worst".
  • I wish there were more independent photography and science websites. Also more crafts. That's not really a "future thing", just me putting a hope out into the universe. Non-technical beta testers get overwhelmed by technical content.

disable-javascript.org


With several posts on this website attracting significant views in the last few months, I have come across plenty of feedback on the tab gimmick implemented last quarter. While the replies I came across on platforms like the Fediverse and Bluesky were lighthearted and often humorous, the visitors coming from traditional link aggregators sadly weren’t as amused. Obviously, a large majority of the people disagreeing with the core message behind this prank appear to be web developers, whose very existence quite literally depends on JavaScript, and who didn’t hold back from expressing their anger in the comment sections as well as through direct emails. Unfortunately, most commenters are missing the point.

This website when you send it to the background with JS enabled

Missing the point

This email exchange is just one example of feedback that completely misses the point:

I just found it a bit hilarious that your site makes notes about ditching and disable Javascript, and yet Google explicitly requires it for the YouTube embeds.

Feels weird.

Regards.

The email contained the following attachment:

Given the lack of context I assume that the author was referring to the YouTube embeds on this website (e.g. on the keyboard page).

Here is my reply:

Hey there,

Simply click the link on the video box that says “Try watching this video on www.youtube.com” and you should be directed to YouTube (or a frontend of your choosing with LibRedirect [1]) where you can watch it.

Sadly, I don’t have the influence to convince YouTube to make their video embeds work without JavaScript enabled. ;-) However, if more people disabled JavaScript by default, maybe there would be a higher incentive for server-side rendering, and video embeds would at the very least show a thumbnail of the video (which YouTube could easily do, from a technical point of view).

Kind regards!

[1]: https://libredirect.github.io

Read before you write

It also appears that many of the people disliking the feature didn’t care to properly read the highlighted part of the popover that says “Turn JavaScript off, now, and only allow it on websites you trust!”:

Indeed - and the author goes on to show a screenshot of Google Trends which, I’m sure, won’t work without JavaScript turned on.

This comment perfectly encapsulates the flawed rhetoric. Google Trends (like YouTube in the previous example) is a website that is unlikely to exploit 0-days in your JavaScript engine, or at least that’s the general consensus.

However, when you click on a link that looks like someone typed it by mashing their head on the keyboard, and it leads you to a website you obviously didn’t know beforehand, it’s a different story.

What I’m advocating for is to have JavaScript disabled by default for everything unknown to you, and to only enable it for websites that you know and trust. Not only is this approach going to protect you from jump-scares, regardless of whether that’s a changing tab title, a popup, or an actual exploit, but it will hopefully pivot the thinking of web developers in particular back from “Let’s render the whole page using JavaScript and display nothing if it’s disabled” towards “Let’s make the page as functional as possible without the use of JavaScript and only sprinkle it on top as a way to make the experience better for anyone who chooses to enable it”.

It is mind-boggling how this simple take is perceived as militant techno-minimalism and can provoke such salty feedback. I keep wondering whether these are the same people that consider curl ... | sh to be a generally okay way to install software …?

The hacker ethos

One of the many commenters who did agree with the approach that I’m taking on this site put it fairly nicely:

About as annoying as your friend who bumped key’ed his way into your flat in 5 seconds waiting for you in the living room. Or the protest blocking the highway making you late for work.

Many people don’t realize that JavaScript means running arbitrary untrusted code on your machine. […]

Maybe the hacker ethos has changed, but I for one miss the days of small pranks and nudges to illustrate security flaws, instead of ransomware and exploits for cash. A gentle reminder that we can all do better, and the world isn’t always all that friendly.

As the author of this comment correctly hints, the hacker ethos has in fact changed. My guess is that only a tiny fraction of the people actively commenting on platforms like Hacker News or Reddit these days know about, let’s say, cDc’s Back Orifice, the BOFH stories, bash.org, and all the kickme.to/* links that would trigger a disconnect in AOL’s dialup desktop software. Hence, the understanding of how far pranks in the ’90s and early 2000s really went simply isn’t there. And with most things these days required to be politically correct, having the tab change to what looks like a Google image search for “sam bankman-fried nudes” is therefore frowned upon by many, even when the reason behind it is to inform.

Frankly, it seems that conformism has eaten not only the internet, but to an extent the whole world, when an opinion that goes ever so slightly against the status quo is labelled as some sort of extreme view. To feel even just a “tiny bit violated by” something as mundane as a changing text and icon in the browser’s tab bar seems absurd, especially when it is you that allowed my website to run arbitrary code on your computer!

Doubling down on it

Because I’m convinced that a principled stance against the insanity that is the modern web is necessary, I am doubling down on this effort by making it an actual initiative:

disable-javascript.org

disable-javascript.org is a website that informs the average user about some of the most severe issues affecting the JavaScript ecosystem and browsers/users all over the world, and explains in simple terms how to disable JavaScript in various browsers and only enable it for specific, trusted websites.

The site is linked on the JavaScript popover that appears on this website, so that visitors aren’t only pranked into hopefully disabling JavaScript, but can also easily find out how to do so. disable-javascript.org offers a JavaScript snippet that is almost identical to the one in use by this website, in case you would like to participate in the cause. Of course, you can also simply link to disable-javascript.org from anywhere on your website to show your support.

If you’d like to contribute to the initiative by extending the website with valuable info, you can do so through its Git repository. Feel free to open pull-requests with the updates that you would like to see on disable-javascript.org. :-)


Who Makes a Higher Salary – and the Jobs They Work


Is your salary high, low, or somewhere in the middle? What do people with higher salaries than you do for a living? Doctors, lawyers, and tech workers come to mind, but there is a wider range of occupations, depending on the income level you’re comparing against.



This Is the Math at the Heart of Reality


Mathematics is not only an esoteric vocation but also indispensably alive and deeply human


Give Us One Manual For Normies, Another For Hackers


We’ve all been there. You’ve found a beautiful piece of older hardware at the thrift store, and bought it for a song. You rush it home, eager to tinker, but you soon find it’s just not working. You open it up to attempt a repair, but you could really use some information on what you’re looking at and how to enter service mode. Only… a Google search turns up nothing but dodgy websites offering blurry PDFs for entirely the wrong model, and you’re out of luck.

These days, when you buy an appliance, the best documentation you can expect is a Quick Start guide and a warranty card you’ll never use. Manufacturers simply don’t want to give you real information, because they think the average consumer will get scared and confused. I think they can do better. I’m demanding a new two-tier documentation system—the basics for the normies, and real manuals for the tech heads out there.

Give Us The Goods

Once upon a time, appliances came with real manuals and real documentation. You could buy a radio that came with a full list of the valves used inside, while telephones used to come with printed circuit diagrams right inside the case. But then the world changed, and a new phrase became a common sight on consumer goods: “NO USER SERVICEABLE PARTS INSIDE.” No more was the end user considered qualified or able to peek within the case of the hardware they’d bought. They were fools who could barely be trusted to turn the thing on and work it properly, let alone intervene in the event something needed attention.

This attitude has only grown over the years. As our devices have become ever more complex, the documentation delivered with them has shrunk to almost non-existent proportions. Where a Sony television manual from the 1980s contained a complete schematic of the whole set, a modern smartphone might only include a QR code linking to basic setup instructions on a website. It’s all part of an effort by companies to protect the consumer from themselves, because they surely can’t be trusted with the arcane knowledge of what goes on inside a modern device.

“This Sony TV manual from 1985 contained the complete electrical schematics for the set.” (via u/a_seventh_knot on r/mildlyinteresting)

This sort of intensely technical documentation was the norm just a few decades ago.

Some vintage appliances used to actually have the schematic printed inside the case for easy servicing. Credit: British Post Office

It’s understandable, to a degree. When a non-technical person buys a television, they really just need to know how to plug it in and hook it up to an aerial. With the ongoing decline in literacy rates, it’s perhaps a smart move by companies to not include any further information than that. Long words and technical information would just make it harder for these customers to figure out how to use the TV in the first place, and they might instead choose a brand that offers simpler documentation.

This doesn’t feel fair for the power user set. There are many of us who want to know how to change our television’s color mode, how to tinker with the motion smoothing settings, and how to enter deeper service modes when something seems awry. And yet, that information is kept from us quite intentionally. Often, it’s accessible only in service manuals distributed through obscure channels to select people authorised by OEMs.

Two Tiers, Please

Finding old service manuals can be a crapshoot, but sometimes you get lucky with popular models. Credit: Google via screenshot

I don’t think it has to be this way. I think it’s perfectly fine for manufacturers to include simple, easy-to-follow instructions with consumer goods. However, I don’t think that should preclude them from also offering detailed technical manuals for those users that want and need them. I think, in fact, that these should be readily available as a matter of course.

Call it a “superuser manual,” and have it only available via a QR code in the back of the basic, regular documentation. Call it an “Advanced Technical Supplement” or a “Calibration And Maintenance Appendix.” Whatever jargon scares off the normies so they don’t accidentally come across it and then complain to tech support that they don’t know why their user interface is now only displaying garbled arcane runes. It can be a little hard to find, but at the end of the day, it should be a simple PDF that can be downloaded without a lot of hurdles or paywalls.

I’m not expecting manufacturers to go back to giving us full schematics for everything. It would be nice, but realistically it’s probably overkill. You can just imagine what that would look like for a modern smartphone, or even just a garden-variety automobile in 2025. However, I think it’s pretty reasonable to expect something better than the bare basics of how to interact with the software. The techier manuals should, at a minimum, explain how to execute a full reset, how to enter any service modes, and how the device is to be safely assembled and disassembled should one wish to carry out repairs.

Of course, this won’t help those of us repairing older gear from the ’90s and earlier. If you want to fix that old S-VHS camcorder from 1995, you’re still going to have to go to some weird website and risk your credit card details on a $30 charge for a service manual that might cover your problem. But it would be a great help for any new gear going forward. Forums died years ago, so we can no longer Google for a post from some old retired tech who remembers the secret key combination to enter the service menu. We need that stuff hosted on manufacturer websites so we can get it in five minutes instead of five hours of strenuous research.

Will any manufacturers actually listen to this demand? Probably not. This sort of change needs to happen at a higher level. Perhaps the right to repair movement and some boisterous EU legislation could make it happen. After all, there is an increasing clamour for users to have more rights over the hardware and appliances they pay for. If and when it happens, I will be cheering when the first manuals for techies become available. Heaven knows we deserve them!
