
Is AI Already Killing People by Accident?


This post first appeared on Marcus on AI, and is reposted with the permission of the author.

The writer Tyler Austin Harper (of The Atlantic, etc.) sent me a thread this morning, asking whether a mistargeting yesterday (February 28) that killed nearly 150 school children in Iran could have been the result of AI.

I can give only two intellectually honest answers:

The first is: I have no idea what happened yesterday, and probably I will never know. Secretary of Defense Pete Hegseth has made a very heavy bet on AI in the military, and it’s doubtful that he will be entirely forthcoming about this or other incidents to come. Targeting errors aren’t new. We may get little detail about AI’s role or non-role in incidents in the future.

Then again, as Harper notes, maybe this particular one is not a coincidence.

The second is: We can certainly expect incidents of this type, and more of them. Generative AI continues to have serious problems with reasoning and with visual cognition, as a series of studies by Anh Totti Nguyen, for example, has shown in a string of papers.

And of course generative AI still has problems with common sense, as an endless stream of examples has shown, including one circulating on X today.

Meanwhile, unless and until the military does actual empirical studies on collateral damage, we won’t really know whether AI is helping or hurting. Mistargeting isn’t new, but using unreliable AI on vibes is fraught with peril.

(More broadly, the military use of AI needs to be a granular question. We might find, for example, that AI helps with logistics and planning but makes more errors in targeting. Mileage may vary according to the task, and may be worse in unfamiliar situations, given the inherent tendencies of generative AI.)

There is a second problem, a moral problem, that goes beyond the technical. The technical problem is that current AI simply isn’t reliable; mistakes will absolutely be made. Some will cost lives, some will cost many lives. Some may lead to further escalation (a mass killing of school children could well do that); in the worst case, a series of escalations triggered by AI mistakes could lead to a nuclear war. Given the current status in the Middle East, this concern is not merely academic.

The moral problem is that militaries may well wish to use AI to cloak moral responsibility. One can, for example, use an AI tool to select targets and blame the AI. It is important to realize that real choices are made at the front end by those who use the AI. How many civilian casualties are acceptable? What error rate is permissible? AI can follow a set of criteria (with more or less precision depending on the quality of the algorithms and data), but humans set those criteria. In my own view, the biggest problem with the algorithms targeting Gaza was not necessarily the algorithms per se (about which not much may be public) but the decision to tolerate a large number of civilian casualties as part of the targeting.

By analogy, if one rolled dice (a physical instantiation of a very simple algorithm) to pick targets, one would not blame the dice for the deaths, but those who chose to leave life or death to chance in the first place.

We should absolutely want any AI that is used for war to be as precise and reliable as possible, minimizing casualties, but we should also never forget that those who use such systems are responsible for decisions about how many casualties are acceptable. And it should be incumbent on them to understand the limitations and inaccuracies of the algorithms they choose.

Whether or not current AI algorithms are precise (probably they aren’t), and whether or not humans are involved in the specific selection of targets, those who use military algorithms bear responsibility for the outcomes they produce.

The race to shove AI into everything is grossly premature, because the tech fundamentally lacks reliability.

Meanwhile, the chance that we will get straight answers is probably close to zero.

Altman, for his part, doesn’t seem to care, having signed a contract entirely full of holes, despite making noises about red lines.

I don’t want to say “this sucks,” but this situation really and truly sucks. Many people, perhaps thousands, maybe more, will die, needlessly.

Gary Marcus

Gary Marcus (@garymarcus), scientist, best-selling author, and entrepreneur, is deeply concerned about current AI, but really hoping we might do better. He spoke to the U.S. Senate on May 16, 2023 and is the co-author of the award-winning book Rebooting AI, as well as host of the new podcast Humans versus Machines.


The Solution to the Male Loneliness Epidemic Is for Men to Bust Science Myths with Each Other


Men, guys, dudes, rejoice! After much research and testing, we have found the cure to the cursed male loneliness epidemic that is sweeping our country and our op-ed sections. We know you feel isolated. We know you can’t talk about your emotions. We know you’re looking for male role models in all the wrong YouTube algorithms. But fear not. We have found the solution to all your problems: doing outlandish science projects to prove or disprove commonplace myths.

Men these days are reverting to masculine ideals from yesteryear. They think real men have to be strong, tough, and misogynistic. Listen, boys, you don’t need big muscles, you don’t need creatine powder, and you certainly don’t need to get surgery to gain an extra few inches of height because you’d rather have metal implants in your legs than be 5′4″. All you really need is a curious mind, a pure heart, and military-level access to high-powered explosives. And also a seemingly endless supply of crash-test dummies.

Where do men typically make friends? The gym. School. Work. These places can be great for building connections, but they can also reinforce harmful ideas about masculinity (just think of how much is shrugged off as “locker-room talk”). Doing bizarre and often comical science experiments with your friends is a way to avoid that toxic environment, and instead introduce men to a different kind of toxic environment, where, for example, they measure how long it would take a balloon filled with poison to spread its noxious air.

“But,” we hear you ask, “how do I know if any of my dude friends will even want to solve science mysteries with me?” Good news, they all do. We did multiple studies where we gathered men together and asked them things like, “So, how many times do you think you can fold a sheet of paper?” or “Do you think you’d be able to find a needle in a haystack?” and every time, each of the men wanted to try to do the thing immediately. Seriously. We had to pull some of the men away from the haystacks because they got so into it. Men are absolutely itching to solve little puzzles and then tell everyone about how they solved a little puzzle.

We hear your concerns about the manosphere. We hear your concerns about protein-powder consumption. We hear your concerns about men spending time doing mouth exercises to improve their jawlines. We cannot hear anything else because we have noise-canceling headphones on our ears while we try to see if we can light a match with a gun. This is one of the many experiments that male friends can do together instead of watching Andrew Tate videos or posting derogatory things on Sydney Sweeney’s Instagram stories.

Men, we know you feel lost. The world is full of unknowns, like Who am I? What is my purpose? How many Mentos would I need to drop into a bottle of Coke to bust open a door? Doing weird science experiments with your friends can answer at least one, if not more, of these questions. And even when we finally get to the bottom of every science quandary there is, hope is still not lost. There’s always Jackass.


Harvard study finds AI actually makes work harder rather than easier

The study suggests generative AI tools may increase workloads rather than reduce them, challenging one of the technology’s promoted benefits.


Underrated reasons to dislike AI


The big arguments for and against AI have been endlessly discussed, and I don’t feel I have much to add. AGI and existential risk; human obsolescence; power use; cybersecurity; safety + censorship; slop; misinformation. Also, I’m kind of tired of everything being about AI, which is why I have specifically avoided writing about it.

But here is a list of petty grievances with AI, that don’t make the cover of WIRED.

  • Basically none of it is actually open source. Lots of the tooling is, but “open weights” is fundamentally different from, and worse than, source-available, let alone open source. There can be targeted attacks, or blatant censorship, hidden in open-weights models. They’re a black box: they aren’t safe. And none of the state-of-the-art models release their code and training data.

    • This is not just a matter of “big companies are bad”: it’s partly because the training data is huge (and usually pirated copyrighted material, too poorly vetted for anyone to want to make it public).
  • AI is centralized even though that’s bad architecture. Check what server you’re connecting to when working with local LLMs: it’s invariably HuggingFace. But models are huge, and though there would be certain legal and technical hurdles, BitTorrent would be a far more efficient technology to distribute open weight models.

  • AI makes Nvidia rich, and I don’t like Nvidia because their Linux support sucks ♥

  • Because it’s so resource intensive, AI is even more of a Matthew effect than technology in general. In the local LLM world, it’s the people who can afford a late-model MacBook with 64–128 GB of RAM and a hefty GPU who get to use the actually good models while maintaining their sovereignty, so the people who are empowered become more empowered. This gets even worse the farther up the chain: the capital requirements mean there are only a handful of frontier AI companies worldwide.

  • AI is fundamentally non-deterministic. At societal and epistemic levels, this is, obviously, disastrous. AI has no conception of truth: only the probability of having seen it in the training data. Non-determinism can’t be “aligned” away with RLHF: it’s baked in. Tesseract, for example, is vastly worse than AI OCR, but the damage a Tesseract error can cause is bounded; the damage from what AI can hallucinate is practically unbounded. This means some of the practical advantages of AI are offset by how carefully the transcript/code/OCR must be reviewed whenever it’s being used in a context where the truth or meaning matters.

  • AI’s mistakes are less obvious. Surface-level mistakes are a red flag in human output. AI makes fewer surface level mistakes, but more fundamental errors. Because our heuristics are trained on human outputs, this makes it seem more trustworthy than it is.

  • AI adds another layer between humans and the world, distancing them from the consequences of their choices. People spend more money when they use a credit card than when they use cash. Cash is an abstraction layer on work. Credit cards are an abstraction layer on an abstraction layer, making it even more convenient to spend money. I worry that this distancing will make (for example) waging war with semi-autonomous weapons, or just trading on the stock market, feel even more like a video game than it currently does, and buffer the operator from the real-world consequences of their operations. Anyone who has read Ender’s Game knows that this kind of gamification can end badly.

  • AI feels grievously inefficient. It took 29.29 minutes to OCR the 4-page handwritten draft of this essay with Qwen3-VL:8B. I didn’t measure exact power draw, but it was likely at least 45 watts, and could have been up to the rated 110-watt TDP. A human brain is estimated to draw ~20 watts, and could do the task in a fraction of the time. This feels... wasteful. And this is just inference! (To be fair, humans require training, also.)

    • Probably, much of this will be optimized away. And part of it is inherent to general-purpose systems, which are, by definition, not optimized for a given task. After all, many operations that were once too inefficient for widespread use — full disk encryption, VPNs — are now widespread.
  • Without knowing what consciousness is, we won’t know whether AI has become conscious. We also don’t know whether it will matter if AI is conscious. Which is scarier: conscious AGI, or AGI without consciousness?

  • AI makes me feel dumb.
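The inefficiency complaint in the list above can be put in rough numbers. A back-of-envelope sketch, using only the post’s own estimates (29.29 minutes of inference, a GPU drawing somewhere between 45 W and its 110 W TDP, a brain at ~20 W); these are assumed figures, not measurements:

```python
# Back-of-envelope energy comparison for the OCR anecdote.
# All wattage and timing figures are the post's own estimates.

def energy_wh(watts: float, minutes: float) -> float:
    """Energy in watt-hours for a given power draw and duration."""
    return watts * minutes / 60

OCR_MINUTES = 29.29  # time to OCR the 4-page draft with Qwen3-VL:8B

low = energy_wh(45, OCR_MINUTES)    # lower-bound GPU draw, ~22 Wh
high = energy_wh(110, OCR_MINUTES)  # rated TDP, ~54 Wh

# Even before accounting for a human being faster, the GPU draws
# 2.25x-5.5x the brain's estimated ~20 W while it works.
print(f"GPU energy for the job: {low:.1f}-{high:.1f} Wh")
print(f"Power ratio vs. brain: {45 / 20:.2f}x-{110 / 20:.2f}x")
```

Since the human would also finish in a fraction of the time, the total-energy gap is wider still than the raw power ratio suggests.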

Note: this post is part of #100DaysToOffload, a challenge to publish 100 posts in 365 days. These posts are generally shorter and less polished than our normal posts; expect typos and unfiltered thoughts! View more posts in this series.


The Top Hat Illusion


https://archive.org/details/B-001-014-611/page/n69/mode/2up

A striking oddity from Matthew Luckiesh’s Visual Illusions, 1922. The height of this silk hat appears much greater than its width, but the two are the same.

“A pole or a tree is generally appraised as of greater length when it is standing than when it lies on the ground. This illusion may be demonstrated by placing a black dot an inch or so above another on a white paper. Now, at right angles to the original dot place another at a horizontal distance which appears equal to the vertical distance of the first dot above the original. On turning the paper through ninety degrees or by actual measurement, the extent of the illusion will become apparent.”
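Luckiesh’s dot procedure describes the classic vertical–horizontal illusion, and it is easy to generate a stimulus to try it yourself. A minimal sketch using only the Python standard library (the inverted-T layout, canvas size, and filename are arbitrary choices, not from the source):

```python
# Draw an inverted-T stimulus: a horizontal and a vertical segment of
# exactly equal length. The vertical one tends to look longer.

def inverted_t_svg(length: float = 100.0) -> str:
    """Return an SVG image with two equal-length line segments
    arranged as an inverted T."""
    cx = 150      # horizontal center of the canvas
    base_y = 200  # y-coordinate of the shared baseline
    horizontal = (f'<line x1="{cx - length / 2}" y1="{base_y}" '
                  f'x2="{cx + length / 2}" y2="{base_y}" '
                  f'stroke="black" stroke-width="2"/>')
    vertical = (f'<line x1="{cx}" y1="{base_y}" '
                f'x2="{cx}" y2="{base_y - length}" '
                f'stroke="black" stroke-width="2"/>')
    return ('<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="300" height="250">{horizontal}{vertical}</svg>')

# Write the stimulus to disk; open it, then rotate it 90 degrees
# (or measure it) to see that the segments really are equal.
with open("illusion.svg", "w") as f:
    f.write(inverted_t_svg())
```

Rotating the image by ninety degrees, as Luckiesh suggests with the paper, makes the extent of the illusion apparent.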


TEACHER VOICE: We don’t have a math problem in Arkansas or in the United States. We have a culture problem


For 23 years, I’ve taught high school math. And for 23 years, I’ve been told by people that they either are a “math person” or they are not. 

I get it: Math isn’t easy. Movies and TV shows make it look effortless for a select few. But math is hard work. If you don’t do the work, and if you don’t have a teacher who can help you build the math skills you need, you may struggle with math. Then you might internalize these challenges into the idea that you’re not a “math person.”  

Research shows, however, that the idea of “math people” is a myth. In his book “How We Learn,” the neuroscientist Stanislas Dehaene refutes the notion that some brains are uniquely “wired” for math. He writes that all people have “the same initial brain structure, the same core knowledge, and the same learning algorithms” for reading, science and math. All people can learn to do math.  


Where people differ is their mindset. Some people have what Stanford professor Carol Dweck refers to as a “fixed mindset,” or a belief that intelligence or talent is set in stone. When they fail, they see it as proof they lack ability, so they often avoid challenges or give up easily. Other people have a “growth mindset,” or a belief that intelligence and ability can develop through effort, feedback and learning. People with this mindset view mistakes as part of the process. Challenges are chances to improve. The growth mindset is how most people approach a video game. You don’t know what you are getting into, you try your best and if you fail, you know more and try again.  

I teach geometry in Arkansas, and of all the tests the state administers, students perform most poorly on the geometry exam. My colleagues and I at Rogers High School — plus a bevy of research — are proving that this poor performance is not because some students cannot learn math.  

My four colleagues on the geometry team and I were able to support our students in exceeding their expected growth goals. We attained these results by believing that our students can do geometry and by getting them to believe the same.  

Stanford math professor Jo Boaler proved what’s possible with an innovative study that showed how an online course could change student ideas about learning mathematics and their own potential. 

More than 1,000 students from four schools took the course — and it shifted their ideas about whether intelligence is changeable. Boaler told Frontiers, a science news outlet, that targeting students’ beliefs about math “led to students feeling more positive about math, more engaged during math class, and scoring significantly higher in mathematics assessments.” 


While I work as hard as I can for all 178 days of the school year, helping students believe in their capability to do math, especially geometry, also requires support outside of the classroom.  

Parents, we need your help. This idea of some people having a “math brain” comes up often at parent-teacher conferences. Adults will say that they are “not good at math,” or are not a “math person,” which can have a negative effect on how their kids see their own capabilities.  

Parents, you can have a positive effect if you adjust how you talk about math, including your own struggles. Acknowledge challenges in school and what could have helped you view the challenges as opportunities. It is important for kids to hear their parents talk about working through problems instead of giving up. I was fortunate to have parents who owned a small business, because I got to witness them struggle through problems and find solutions. 

Encourage your kids to develop a growth mindset. Talk about and teach the behaviors that can support your kids’ learning and growth. These include investing time in the work and engaging with teachers during class or tutoring to learn how to better understand mathematical concepts. Problem-solving is a learned skill, so point out how math shows up in daily life and that your kids often solve problems without even recognizing it.  

It is imperative that we show dramatic math improvement across the country. Trouble is on the horizon: the American workforce is projected to face an unmet need of over a million employees for STEM-related jobs by 2030. Yet student performance is lower today than it was before the pandemic. The National Assessment of Educational Progress, known as the Nation’s Report Card, reported that the achievement gap in 8th grade math last year was the largest in the history of the exam.

But again, we don’t have a math problem in Arkansas or in the United States. We have a culture problem in that math is viewed negatively and stereotypes abound. The good news is that we can fix it by addressing mindsets.  

As I say to my students every day, thank you for your time. 

Mark Bauer teaches math at Rogers High School in northwest Arkansas. 

Contact the opinion editor at opinion@hechingerreport.org. 

This story about teaching math was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Sign up for Hechinger’s weekly newsletter.

