
When Technology Replaces Teaching


I was on the Teachers’ Aid podcast talking about mini whiteboards with some thoughtful folks. You can listen here.

I recently cut out student-facing technology from my teaching. Students do not use their Chromebooks at all in my class. I wrote about a bunch of the details a few weeks ago. Today I have a broader reflection on technology and teaching.

A Mental Model

Here’s a mental model implied by a lot of discourse I see about classroom technology.

The teacher makes decisions about what technology to use. The technology maybe helps students learn, maybe not, and maybe has some negative side effects on attention, etc. In this mental model, the focus is the arrow on the right: the technology’s effect on students. Education technology made all sorts of promises in the last ten years. Most haven’t panned out. Screens were supposed to help students learn but didn’t live up to that promise. I’ve written about some of that here, and also here.

In this model, the problems with technology are solvable. Maybe we need better education technology. Maybe we need to use it differently. That was my approach in my classroom before I went tech-free. We spent most of our time doing math with paper and pencil or whiteboard and marker. I used technology for a few specific purposes, and felt like I was getting some of the benefits without the drawbacks. I could fiddle with the technology I chose, gradually technology got better, and maybe eventually we’ll reach the promised land of tech-driven learning utopia.

Here’s a different mental model.

In this model, the technology has those same effects on students. But it also has an effect on the teacher. Technology isn’t neutral. When I have students pull out Chromebooks, technology changes my behavior.

When Technology Replaces Teaching

Never trust anything that can think for itself if you can’t see where it keeps its brain. - Arthur Weasley

Go talk to some regular students and regular parents about regular schools. You’ll often hear one theme: some of the teachers don’t teach.

Day after day, students show up to class, the teacher says, “open your Chromebooks and start lesson 3,” and the teacher watches the students work, or something along those lines.

Arthur Weasley warned us about this in Harry Potter and the Chamber of Secrets. He was talking about Ginny spilling all of her secrets to an enchanted diary, but it’s the same idea. When an object seems smart, humans tend to trust it to make decisions for us. We live in the era of “artificial intelligence.” We are constantly told that the singularity is near, that the computers are smart enough to take our jobs. So it makes perfect sense that teachers, when students open up those Chromebooks, kind of go on autopilot. The job shifts: rather than driving instruction, the teacher’s job is to assign the lesson through some software, and then to supervise the students and remind them to keep working.1

I think teachers stop teaching for a few different reasons. It’s in part a product of some vaguely progressive ideas that are in the water. Teachers should be a guide on the side, not a sage on the stage. Students should be doing the cognitive heavy lifting. With that mindset, it’s easy for the teacher to step back and let the computers do the work. This also happens because we can’t see where the computer or app or whatever keeps its brain, and we assume these things are smarter than they are.

And to be clear, there have always been some teachers who don’t teach. Before classroom technology it happened with packets.

My hypothesis is that classroom technology encourages this type of teaching (or, more accurately, not teaching). I’m not immune to this. I could feel the pull when I had students get out their Chromebooks. Teaching is tiring. I get students started, I go to my desk, and I watch little icons move through the lesson or percentages tick upward as students work. It seems like learning is happening. Maybe I feel like I need to monitor those dashboards, or make sure students are on the right website. There’s this gravity pulling me to stare at my screen as students stare at theirs. That gravity is powerful. That’s the biggest thing I learned from my tech-free experiment. I am not immune to the pull. Technology is my gateway drug to not teaching.

I surveyed my students after the tech-free experiment. One question was about how technology helps students learn. A student’s response:

Because it’s AI and it’s smart.

That type of thinking is everywhere right now. We assume technology is smart. We assume we should defer to the decision-making of the machines. I reject that. I am keeping technology out of my classroom in part for practical reasons: attention fragmentation, logistical headaches, and more. But I am also keeping technology out because of a moral conviction I’ve arrived at: technology is not a neutral tool. Technology changes teacher behavior. I want that out of my classroom as much as possible.

To Summarize

Here’s my summary of the education technology landscape right now:

  • There are already some interesting use cases for AI to reduce teacher workload. I’m sure more are coming. That’s cool!

  • On the student side, there has never been a better time to be a self-motivated learner. If you want to learn something and you can manage your own time and effort, the current technological resources are fantastic.

  • One reason we invented school is that not all young people are self-motivated. Many of those young people benefit from a regular routine, going somewhere with a consistent schedule and a bunch of peers of more or less the same age who learn the same things.

  • In that context, classroom technology is at best a supplement to the human-to-human interaction that drives learning.

  • In the classroom context, technology also affects teacher decisions and can convince teachers they don’t need to teach, or just make it easy to sit back and do a bit less. I’m writing this from personal experience — I’ve been that teacher!

Look, I read all the same headlines everyone else does. We’re on the cusp of a world-changing transformation driven by AI. Maybe! But edtech is not there. Call me when it happens. I will look at all the new technology that comes out. I’ll take it seriously. In the meantime, looking at what I have access to today, it’s not for me.

1

One practical note: reminding students to keep working is an unfortunate but inevitable side effect of using student-facing technology in the classroom. It’s much easier for students to “hide” behind a screen and look like they’re busy without doing much of anything, or to find any number of ways to distract themselves from doing the academic work in front of them. That means a lot of reminders, and a lot of teacher energy consumed with reminding students and not thinking about student learning.




The one science reform we can all agree on, but we're too cowardly to do

photo cred: my dad

If you ever want a good laugh, ask an academic to explain what they get paid to do, and who pays them to do it.

In STEM fields, it works like this: the university pays you to teach, but unless you’re at a liberal arts college, you don’t actually get promoted or recognized for your teaching. Instead, you get promoted and recognized for your research, which the university does not generally pay you for. You have to ask someone else to provide that part of your salary, and in the US, that someone else is usually the federal government. If you’re lucky—and these days, very lucky—you get a chunk of money to grow your bacteria or smash your electrons together or whatever, you write up your results for publication, and this is where the monkey business really begins.

In most disciplines, the next step is sending your paper to a peer-reviewed journal, where it gets evaluated by an editor and (if the editor sees some promise in it) a few reviewers. These people are academics just like you, and they generally do not get paid for their time. Editors maybe get a small stipend and a bit of professional cred, while reviewers get nothing but the warm fuzzies of doing “service to the field”, or the cold thrill of tanking other people’s papers.

If you’re lucky again, your paper gets accepted by the journal, which now owns the copyright to your work. They do not pay you for this! If anything, you pay them an “article processing charge” for the privilege of no longer owning the rights to your paper. This is considered a great honor.

The journals then paywall your work, sell the access back to you and your colleagues, and pocket the profit. Universities cover these subscriptions and fees by charging the government “indirect costs” on every grant—money that doesn’t go to the research itself, but to all the things that support the research, like keeping the lights on, cleaning the toilets, and accessing the journals that the researchers need to read.

Nothing about this system makes sense, which is why I think we should build a new one. In the meantime, though, we should also fix the old one. But that’s hard, for two reasons. First, many people are invested in things working exactly the way they do now, so every stupid idea has a constituency behind it. Second, our current administration seems to believe in policy by bloodletting: if something isn’t working, just slice it open at random. Thanks to these haphazard cuts and cancellations, we now have a system that is both dysfunctional and anemic.

I see a way to solve both problems at once. We can satisfy both the scientists and the scalpel-wielding politicians by ridding ourselves of the one constituency that should not exist. Of all the crazy parts of our crazy system, the craziest part is where taxpayers pay for the research, then pay private companies to publish it, and then pay again so scientists can read it. We may not agree on much, but we can all agree on this: it is time, finally and forever, to get rid of for-profit scientific publishers.

MOMMY, WHERE DO SCAMS COME FROM?

The writer G.K. Chesterton once said that before you knock anything down, you ought to know how it got there in the first place. So before we show for-profit publishers the pointy end of a pitchfork, we ought to know where they came from and why they persist.

It used to be a huge pain to produce a physical journal—someone had to operate the printing presses, lick the stamps, and mail the copies all over the world. Unsurprisingly, academics didn’t care much about doing those things. When government money started flowing into universities post-World War II and the number of articles exploded, private companies were like, “Hey, why don’t we take these journals off your hands—you keep doing the scientific stuff and we’ll handle all the boring stuff.” And the academics were like “Sounds good, we’re sure this won’t have any unforeseen consequences.”

Those companies knew they had a captive audience, so they bought up as many journals as they could. Journal articles aren’t interchangeable commodities like corn or soybeans—if your science supplier starts gouging you, you can’t just switch to a new one. Adding to this lock-in effect, publishing in “high-impact” journals became the key to success in science, which meant if you wanted to move up, your university had to pay up. So, even as the internet made it much cheaper to produce a journal, publishers made it much more expensive to subscribe to one.

Robert Maxwell, one of the architects of the for-profit scientific publishing scheme. When he later went into debt, he plundered hundreds of millions of pounds from his employees’ pension funds. You may be familiar with his daughter and lieutenant Ghislaine Maxwell, who went on to have a successful career in child trafficking. (source)

The people running this scam had no illusions about it, even if they hoped that other people did. Here’s how one CEO described it:

You have no idea how profitable these journals are once you stop doing anything. When you’re building a journal, you spend time getting good editorial boards, you treat them well, you give them dinners. [...] [and then] we stop doing all that stuff and then the cash just pours out and you wouldn’t believe how wonderful it is.

So here’s the report we can make to Mr. Chesterton: for-profit scientific publishers arose to solve the problem of producing physical journals. The internet mostly solved that problem. Now the publishers are the problem. These days, Springer Nature, Elsevier, Wiley, and the like are basically giant operations that proofread, format, and store PDFs. That’s not nothing, but it’s pretty close to nothing.

No one knows how much publishers make in return for providing these modest services, but we can guess. In 2017, the Association of Research Libraries surveyed its 123 member institutions and found they were paying a collective $1 billion in journal subscriptions every year. The ARL covers some of the biggest universities, but not nearly all of them, so let’s guess that number accounts for half of all university subscription spending. In 2023, the federal government estimated it paid nearly $380 million in article processing charges alone, and those are separate from subscriptions. So it wouldn’t be crazy if American universities were paying something like $2.5 billion to publishers every year, with the majority of that ultimately coming from taxpayers.

(By the way, the estimated profit margins for commercial scientific publishers are around 40%, which is higher than Microsoft.)

To put those costs in perspective: if the federal government cut out the publishers, it would probably save more money every year than it has “saved” in its recent attempts to cut off scientific funding to universities. It’s unclear how much money will ultimately be clawed back, as grants continue to get frozen, unfrozen, litigated, and negotiated. But right now, it seems like ~$1.4 billion in promised science funding is simply not going to be paid out. We could save more than that every year if we just stopped writing checks to John Wiley & Sons.
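
To make the back-of-envelope arithmetic explicit, here is a minimal sketch using the figures quoted above. Every input is one of the article's own loose assumptions (the ARL total, the guess that ARL covers half of university spending, the federal APC estimate, and the ~$1.4 billion in unpaid funding), not an audited number.

```python
# Rough sketch of the estimate above; every input is an assumption from the text.
arl_subscriptions = 1.0e9     # ARL members' combined journal subscriptions (2017 survey)
arl_share_of_total = 0.5      # guess: ARL covers about half of all university subscription spending
federal_apcs = 0.38e9         # federal article processing charges (2023 estimate)

subscription_total = arl_subscriptions / arl_share_of_total   # ~$2.0B across all universities
publisher_total = subscription_total + federal_apcs           # ~$2.4B, i.e. "something like $2.5B"

clawed_back = 1.4e9           # science funding that currently looks like it won't be paid out

print(f"Estimated annual payments to publishers: ${publisher_total / 1e9:.1f}B")
print(f"Funding apparently clawed back:          ${clawed_back / 1e9:.1f}B")
print(f"Cutting out publishers saves more per year: {publisher_total > clawed_back}")
```

On those assumptions, the yearly flow to publishers comes out larger than the recent cuts, which is the comparison the paragraph above is making.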

PUNK ROCK SCIENCE

How can such a scam continue to exist? In large part, it’s because of a computer hacker from Kazakhstan.

The political scientist James C. Scott once wrote that many systems only “work” because people disobey them. For instance, the Soviet Union attempted to impose agricultural regulations so strict that people would have starved if they followed the letter of the law. Instead, citizens grew and traded food in secret. This made it look like the regulations were successful, when in fact they were a sham.1

Something similar is happening right now in science, except Russia is on the opposite side of the story this time. In the early 2010s, a Kazakhstani computer programmer named Alexandra Elbakyan started downloading articles en masse and posting them publicly on a website called SciHub. The publishers sued her, so she’s hiding out in Russia, which protects her from extradition. As you can see in the map below, millions of people now use SciHub to access scientific articles, including lots of people who seem to work at universities:

This data is ten years old, so I would expect these numbers to be higher today. (source)

Why would researchers resort to piracy when they have legitimate access themselves? Maybe because journals’ interfaces are so clunky and annoying that it’s faster to go straight to SciHub. Or maybe it’s because those researchers don’t actually have access. Universities are always trying to save money by canceling journal subscriptions, so academics often have to rely on bootleg copies. Either way, SciHub seems to be our modern-day version of those Soviet secret gardens: for-profit publishing only “works” because people find ways to circumvent it.

Alexandra Elbakyan, “Pirate Queen of Science” (source)

In a punk rock kind of way, it’s kinda cool that so many American scientists can only do their work thanks to a database maintained by a Russia-backed fugitive. But it ought to be a huge embarrassment to the US government.2

Instead, for some reason, the government insists on siding with publishers against citizens. Sixteen years ago, the US had its own Elbakyan. His name was Aaron Swartz. He downloaded millions of paywalled journal articles using a connection at MIT, possibly intending to share them publicly. Government agents arrested him, charged him with wire fraud, and intended to fine him $1 million and imprison him for 35 years. Instead, he killed himself. He was 26.

Swartz in 2011, two years before his death (source)

THE FOREST FIRE IS OVERDUE

Scientists have tried to take on the middlemen themselves. They’ve founded open-access journals. They’ve published preprints. They’ve tried alternative ways of evaluating research. A few high-profile professors have publicly and dramatically sworn off all “luxury” outlets, and less-famous folks have followed suit: in 2012, over 10,000 researchers signed a pledge not to publish in any journals owned by Elsevier.

None of this has worked. The biggest for-profit publishers continue making more money year after year. “Diamond” open access journals—that is, publications that don’t charge authors or readers—only account for ~10% of all articles.3 Four years after that massive pledge, 38% of signers had broken their promise and published in an Elsevier journal.4

These efforts have fizzled because this isn’t a problem that can be solved by any individual, or even many individuals. Academia is so cutthroat that anyone who righteously gives up an advantage will be outcompeted by someone who has fewer scruples. What we have here is a collective action problem.

Fortunately, we have an organization that exists for the express purpose of solving collective action problems. It’s called the government. And as luck would have it, they’re also the one paying most of the bills!

So the solution here is straightforward: every government grant should stipulate that the research it supports can’t be published in a for-profit journal. That’s it! If the public paid for it, it shouldn’t be paywalled.

The Biden administration tried to do this, but they did it in a stupid way. They mandated that NIH-funded research papers have to be “open access”, which sounds like a solution, but it’s actually a psyop. By replacing subscription fees with “article processing charges”, publishers can simply make authors pay for writing instead of making readers pay for reading. The companies can keep skimming money off the system, and best of all, they get to call the result “open access”.

These fees can be wild. When my PhD advisor and I published one of our papers together, the journal charged us an “open access” fee of $12,000. This arrangement is a tiny bit better than the alternative, because at least everybody can read our paper now, including people who aren’t affiliated with a university. But those fees still have to come from somewhere, and whether you charge writers or readers, you’re ultimately charging the same account—namely, the US government.5

The Trump administration somehow found a way to make a stupid policy even stupider. They sped up the timeline while also firing a bunch of NIH staffers—exactly the people who would make sure that government-sponsored publications are, in fact, publicly accessible. And you need someone to check on that, because researchers are notoriously bad about this kind of stuff. They’re already required to upload the results of clinical trials to a public database, but more than half the time they just...don’t.

To do this right, you cannot allow the rent-seekers to rebrand. You have to cut them out entirely. I don’t think this will fix everything that’s wrong with science; it will merely fix the wrongest thing. Nonprofit journals still charge fees, but at least the money goes to organizations that ostensibly care about science, rather than going to CEOs who make $17 million a year. And almost every journal, for-profit or not, uses the same failed system of peer review. The biggest benefit of shaking things up, then, would be allowing different approaches to have a chance at life, the same way an occasional forest fire clears away the dead wood, opens up the pinecones, and gives seedlings a shot at the sunlight.

Science philanthropies should adopt the same policy, and some of them already have. The Navigation Fund, which oversees billions of dollars in scientific funding, no longer bankrolls journal publications at all. Its director reports that the experiment has been a great success:

Our researchers began designing experiments differently from the start. They became more creative and collaborative. The goal shifted from telling polished stories to uncovering useful truths. All results had value, such as failed attempts, abandoned inquiries, or untested ideas, which we frequently release through Arcadia’s Icebox. The bar for utility went up, as proxies like impact factors disappeared.

Sounds good to me!

CATCH THE TIGER

Fifteen years ago, the open science movement was all about abolishing for-profit journals—that’s what open science meant. It seemed like every speech would end with “ELSEVIER DELENDA EST”.

Now people barely bring it up at all.6 It’s like a tiger has escaped the zoo and it’s gulping down schoolchildren, but when people suggest zoo improvements, all the agenda items are like, “We should add another Dippin’ Dots kiosk”. If you bring up the loose tiger, everyone gets annoyed at you, like “Of course, no one likes the tiger”.

I think two things happened. First, we got cynical about cyberspace. In the 1990s and 2000s, we really thought the internet would solve most of our problems. When those problems persisted despite all of us getting broadband, we shifted to thinking that the internet was, in fact, causing the problems. And so it became cringe to think the internet could ever be a force for good. In 1995, for-profit publishers were going to be “the internet’s first victim”; in 2015, they were “the business the internet could not kill”.

Second, when the replication crisis hit in the early 2010s, the open science movement got a new villain—namely, naughty researchers. The fakers, the fraudsters, the over-claimers: those are the real bad boys of science. It’s no longer cool to hate international publishing conglomerates. Now it’s cool to hate your colleagues.

Both of these shifts were a shame. The internet utopians were right that the web would eliminate the need for journals, but they were wrong to think that would be enough. The replication police were right to call out scientific malfeasance, but they were wrong to forget our old foes. The for-profit publishers are just as bad as they ever were, and while the internet has made them more vulnerable than ever, now we know they won’t go unless they’re pushed.

If we want better science, we should catch the tiger. Not only because it’s bad for the tiger to be loose, but because it’s bad for us to look the other way. If you allow an outrageous scam to go unchecked, if you participate in it, normalize it—then what won’t you do? Why not also goose your stats a bit? Why not publish some junk research? Look around: no one cares!

There are so many problems with our current way of doing things, and most of those problems are complicated and difficult to solve. This one isn’t. Let’s heave this succubus off our scientific system and end this scam once and for all. After that, Dippin’ Dots all around.

Experimental History opposes the tiger and supports ice cream, in that order

1

Seeing Like a State, 203-204, 310

2

For anyone who is all-in on “America First”: may I also mention that three of the largest publishers—Springer Nature, Elsevier, and Taylor and Francis—are all British-owned. A curious choice of companies to subsidize!

3

Don’t get me started on this “diamond open access” designation. If it costs money to publish or to read, it’s not open access, period. “Oh, you’d like your car to come with a steering wheel and brakes? You’ll need our ‘diamond’ package.”

4

I assume this number is much higher now. At the time, Elsevier controlled 16% of the market, so most people could continue publishing in their usual journals without breaking their pledge. I started graduate school in 2016, and I never heard anyone mention avoiding Elsevier journals at all.

5

The NIH has announced vague plans to cap these charges, which is kind of like saying, “I’ll let you scam me, but just don’t go crazy about it.”

6

For example, the current strategic plan of the Center for Open Science doesn’t mention for-profit journals at all.


Is AI Already Killing People by Accident?


This post first appeared on Marcus on AI, and is reposted with the permission of the author.

The writer Tyler Austin Harper (of The Atlantic, etc.) sent me a thread this morning, asking whether a mistargeting yesterday (February 28) that killed nearly 150 school children in Iran could have been the result of AI.

I can give only two intellectually honest answers:

The first is: I have no idea what happened yesterday, and probably I will never know. Secretary of Defense Pete Hegseth has made a very heavy bet on AI in the military, and it’s doubtful that he will be entirely forthcoming about this or other incidents to come. Targeting errors aren’t new. We may get little detail about AI’s role or non-role in incidents in the future.

Then again, as Harper notes, maybe this particular one is not a coincidence.

The second is: We can certainly expect incidents of this type, and more of them. Generative AI continues to have serious problems with reasoning and with visual cognition, as, for example, a series of studies by Anh Totti Nguyen has shown.

And of course generative AI still has problems with common sense, as an endless stream of examples has shown, including one circulating on X today.

Meanwhile, unless and until the military does actual empirical studies on collateral damage, we won’t really know whether AI is helping or hurting. Mistargeting isn’t new, but using unreliable AI on vibes is fraught with peril.

(More broadly, the military use of AI needs to be a granular question. We might find, for example, that AI helps with logistics and planning but makes more errors in targeting. Mileage may vary according to the task, and may be worse in unfamiliar situations, given the inherent tendencies of generative AI.)

There is a second problem, a moral problem, that goes beyond the technical. The technical problem is that current AI simply isn’t reliable; mistakes will absolutely be made. Some will cost lives, some will cost many lives. Some may lead to further escalation (a mass killing of school children could well do that); in the worst case, a series of escalations triggered by AI mistakes could lead to a nuclear war. Given the current situation in the Middle East, this concern is not merely academic.

The moral problem is that militaries may well wish to use AI to cloak moral responsibility. One can, for example, use an AI tool to select targets and blame the AI. It is important to realize that real choices are made at the front end by those who use the AI. How many civilian casualties are acceptable? What error rate is permissible? AI can follow a set of criteria (with more or less precision depending on the quality of the algorithms and data), but humans set those criteria. In my own view, the biggest problem with the algorithms targeting Gaza was not necessarily the algorithms per se (about which not much may be public) but the decision to tolerate a large number of civilian casualties as part of the targeting.

By analogy, if one rolled dice (a physical instantiation of a very simple algorithm) to pick targets, one would not blame the dice for the deaths, but those who chose to leave life or death to chance in the first place.

We should absolutely want any AI that is used for war to be as precise and reliable as possible, minimizing casualties, but we should also never forget that those who use them are responsible for decisions about how many casualties are acceptable. And it should be incumbent on them to understand the limitations and inaccuracies of the algorithms they choose.

Whether or not current AI algorithms are precise (probably they aren’t), and whether or not humans are involved in the specific selection of targets, those who use military algorithms bear responsibility for the outcomes they produce.

The race to shove AI into everything is grossly premature, because the tech fundamentally lacks reliability.

Meanwhile, the chance that we will get straight answers is probably close to zero.

Altman, for his part, doesn’t seem to care, having signed a contract entirely full of holes, despite making noises about red lines.

I don’t want to say “this sucks,” but this situation really and truly sucks. Many people, perhaps thousands, maybe more, will die, needlessly.

Gary Marcus

Gary Marcus (@garymarcus), scientist, best-selling author, and entrepreneur, is deeply concerned about current AI, but really hoping we might do better. He spoke to the U.S. Senate on May 16, 2023 and is the co-author of the award-winning book Rebooting AI, as well as host of the new podcast Humans versus Machines.


The Solution to the Male Loneliness Epidemic Is for Men to Bust Science Myths with Each Other


Men, guys, dudes, rejoice! After much research and testing, we have found the cure to the cursed male loneliness epidemic that is sweeping our country and our op-ed sections. We know you feel isolated. We know you can’t talk about your emotions. We know you’re looking for male role models in all the wrong YouTube algorithms. But fear not. We have found the solution to all your problems: doing outlandish science projects to prove or disprove commonplace myths.

Men these days are reverting to masculine ideals from yesteryear. They think real men have to be strong, tough, and misogynistic. Listen, boys, you don’t need big muscles, you don’t need creatine powder, and you certainly don’t need to get surgery to gain an extra few inches of height because you’d rather have metal implants in your legs than be 5′4″. All you really need is a curious mind, a pure heart, and military-level access to high-powered explosives. And also a seemingly endless supply of crash-test dummies.

Where do men typically make friends? The gym. School. Work. These places can be great for building connections, but they can also reinforce harmful ideas about masculinity (just think of how much is shrugged off as “locker-room talk”). Doing bizarre and often comical science experiments with your friends is a way to avoid that toxic environment, and instead introduce men to a different kind of toxic environment, where, for example, they measure how long it would take a balloon filled with poison to spread its noxious air.

“But,” we hear you ask, “how do I know if any of my dude friends will even want to solve science mysteries with me?” Good news, they all do. We did multiple studies where we gathered men together and asked them things like, “So, how many times do you think you can fold a sheet of paper?” or “Do you think you’d be able to find a needle in a haystack?” and every time, each of the men wanted to try to do the thing immediately. Seriously. We had to pull some of the men away from the haystacks because they got so into it. Men are absolutely itching to solve little puzzles and then tell everyone about how they solved a little puzzle.

We hear your concerns about the manosphere. We hear your concerns about protein-powder consumption. We hear your concerns about men spending time doing mouth exercises to improve their jawlines. We cannot hear anything else because we have noise-canceling headphones on our ears while we try to see if we can light a match with a gun. This is one of the many experiments that male friends can do together instead of watching Andrew Tate videos or posting derogatory things on Sydney Sweeney’s Instagram stories.

Men, we know you feel lost. The world is full of unknowns, like Who am I? What is my purpose? How many Mentos would I need to drop into a bottle of Coke to bust open a door? Doing weird science experiments with your friends can answer at least one, if not more of these questions. And even when we finally get to the bottom of every science quandary there is, hope is still not lost. There’s always Jackass.


Harvard study finds AI actually makes work harder rather than easier

The study suggests generative AI tools may increase workloads rather than reduce them, challenging one of the technology’s promoted benefits.


Underrated reasons to dislike AI


The big arguments for and against AI have been endlessly discussed, and I don’t feel I have much to add. AGI and existential risk; human obsolescence; power use; cybersecurity; safety + censorship; slop; misinformation. Also, I’m kind of tired of everything being about AI, which is why I have specifically avoided writing about it.

But here is a list of petty grievances with AI that don’t make the cover of WIRED.

  • Basically none of it is actually open source. Lots of the tooling is, but “open weights” is fundamentally different from, and worse than, source-available, let alone open source. There can be targeted attacks, or blatant censorship, in open-weights models. They’re a black box: they aren’t safe. And none of the state-of-the-art models release their code + training data.

    • This is not just a case of big companies being bad: it’s partly because the training data is huge (and usually includes pirated copyrighted material, and is too poorly vetted for anyone to want to make it public).
  • AI is centralized even though that’s bad architecture. Check what server you’re connecting to when working with local LLMs: it’s invariably HuggingFace. But models are huge, and though there would be certain legal and technical hurdles, BitTorrent would be a far more efficient technology to distribute open weight models.

  • AI makes Nvidia rich, and I don’t like Nvidia because their Linux support sucks ♥

  • Because it’s so resource intensive, AI is even more of a Matthew effect than technology in general. In the local LLM world, it’s the people who can afford a late-model MacBook with 64–128 GB of RAM and a hefty GPU who get to use the actually good models while maintaining their sovereignty, so the people who are empowered become more empowered. This gets even worse the farther up the chain: the capital requirements mean there are only a handful of frontier AI companies worldwide.

  • AI is fundamentally non-deterministic. At societal and epistemic levels, this is, obviously, disastrous. AI has no conception of truth: only probability of seeing it in the training data. Non-determinism can’t be “aligned” away with RLHF: it’s baked in. Tesseract is vastly worse than AI OCR. But the damage a Tesseract error can cause is bounded. The damage of what AI can hallucinate is practically unbounded. This means some of the practical advantages of AI are offset by how carefully the transcript/code/OCR must be reviewed if it’s being used in a context where the truth/meaning matters.

  • AI’s mistakes are less obvious. Surface-level mistakes are a red flag in human output. AI makes fewer surface level mistakes, but more fundamental errors. Because our heuristics are trained on human outputs, this makes it seem more trustworthy than it is.

  • AI adds another layer between humans and the world, distancing them from the consequences of their choices. People spend more money when they use a credit card than when they use cash. Cash is an abstraction layer on work. Credit cards are an abstraction layer on an abstraction layer, making it even more convenient to spend money. I worry that this distancing will make (for example) waging war with semi-autonomous weapons, or just trading on the stock market, feel even more like a video game than it currently does, and buffer the operator from the real-world consequences of their operations. Anyone who has read Ender’s Game knows that this kind of gamification can end badly.

  • AI feels grievously inefficient. It took 29.29 minutes to OCR the 4-page handwritten draft of this essay with Qwen3-VL:8B. I didn’t get exact power draw, but it was likely at least 45 watts, and could have been up to the rated 110 watts TDP. A human brain is estimated to draw ~20 watts, and could do the task in a fraction of the time (a rough energy comparison follows after this list). This feels...wasteful. And this is just inference! (To be fair, humans require training, also.)

    • Probably, much of this will be optimized away. And part of it is inherent to general-purpose systems, which are, by definition, not optimized for a given task. After all, many operations that were once too inefficient for widespread use — full disk encryption, VPNs — are now widespread.
  • Without knowing what consciousness is, we won’t know whether AI has become conscious. We also don’t know whether it will matter if AI is conscious. Which is scarier: conscious AGI, or AGI without consciousness?

  • AI makes me feel dumb.
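
Here is the rough energy sketch promised in the inefficiency item above. The OCR duration and the 45–110 W range are the post's own figures; the 20 W brain estimate is widely cited; the human transcription time is an illustrative guess, not a measurement.

```python
# Energy back-of-envelope for the Qwen3-VL OCR anecdote; the human timing is an assumption.
ocr_minutes = 29.29
gpu_watts_low, gpu_watts_high = 45, 110      # estimated draw vs. rated TDP

llm_wh_low = gpu_watts_low * ocr_minutes / 60    # ~22 Wh
llm_wh_high = gpu_watts_high * ocr_minutes / 60  # ~54 Wh

brain_watts = 20          # commonly cited estimate for the human brain
human_minutes = 10        # assumed time to transcribe four handwritten pages
human_wh = brain_watts * human_minutes / 60      # ~3.3 Wh

print(f"LLM OCR energy:  {llm_wh_low:.0f}-{llm_wh_high:.0f} Wh")
print(f"Human (assumed): {human_wh:.1f} Wh")
```

Even with a generous allowance for the human, the gap is roughly an order of magnitude, which is the "wasteful" feeling the bullet describes.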

Note: this post is part of #100DaysToOffload, a challenge to publish 100 posts in 365 days. These posts are generally shorter and less polished than our normal posts; expect typos and unfiltered thoughts! View more posts in this series.
