
Harvard study finds AI actually makes work harder rather than easier

The study suggests generative AI tools may increase workloads rather than reduce them, challenging one of the technology’s promoted benefits.

Underrated reasons to dislike AI

The big arguments for and against AI have been endlessly discussed, and I don’t feel I have much to add. AGI and existential risk; human obsolescence; power use; cybersecurity; safety + censorship; slop; misinformation. Also, I’m kind of tired of everything being about AI, which is why I have specifically avoided writing about it.

But here is a list of petty grievances with AI that don’t make the cover of WIRED.

  • Basically none of it is actually open source. Lots of the tooling is, but “open weights” is fundamentally different from, and worse than, source-available, let alone open source. There can be targeted attacks, or blatant censorship, in open-weights models. They’re a black box: they aren’t safe. And none of the state-of-the-art models release their code + training data.

    • This is not just a case of “big companies are bad”: it’s partly because the training data is huge (and usually includes pirated copyrighted material, too poorly vetted for anyone to want to make it public).
  • AI is centralized even though that’s bad architecture. Check what server you’re connecting to when working with local LLMs: it’s invariably HuggingFace. But models are huge, and though there would be certain legal and technical hurdles, BitTorrent would be a far more efficient technology for distributing open-weight models.

  • AI makes Nvidia rich, and I don’t like Nvidia because their Linux support sucks ♥

  • Because it’s so resource-intensive, AI is even more of a Matthew effect than technology in general. In the local-LLM world, it’s the people who can afford a late-model MacBook with 64–128 GB of RAM and a hefty GPU who get to use the actually good models while maintaining their sovereignty, so the people who are empowered become more empowered. This gets even worse the farther up the chain: the capital requirements mean there are only a handful of frontier AI companies worldwide.

  • AI is fundamentally non-deterministic. At societal and epistemic levels, this is, obviously, disastrous. AI has no conception of truth: only the probability of having seen it in the training data. Non-determinism can’t be “aligned” away with RLHF: it’s baked in (a small sketch of this sampling behavior follows this list). Tesseract is vastly worse than AI OCR, but the damage a Tesseract error can cause is bounded; the damage from what AI can hallucinate is practically unbounded. This means some of the practical advantages of AI are offset by how carefully the transcript/code/OCR must be reviewed if it’s being used in a context where truth or meaning matters.

  • AI’s mistakes are less obvious. Surface-level mistakes are a red flag in human output. AI makes fewer surface-level mistakes, but more fundamental errors. Because our heuristics are trained on human outputs, this makes AI seem more trustworthy than it is.

  • AI adds another layer between humans and the world, distancing them from the consequences of their choices. People spend more money when they use a credit card than when they use cash. Cash is an abstraction layer on work. Credit cards are an abstraction layer on an abstraction layer, making it even more convenient to spend money. I worry that this distancing will make (for example) waging war with semi-autonomous weapons, or just trading on the stock market, feel even more like a video game than it currently does, and buffer the operator from the real-world consequences of their operations. Anyone who has read Ender’s Game knows that this kind of gamification can end badly.

  • AI feels grievously inefficient. It took 29.29 minutes to OCR the 4-page handwritten draft of this essay with Qwen3-VL:8B. I didn’t get an exact power draw, but it was likely at least 45 watts, and could have been up to the rated 110-watt TDP (a back-of-the-envelope calculation follows this list). A human brain is estimated to draw ~20 watts, and could do the task in a fraction of the time. This feels... wasteful. And this is just inference! (To be fair, humans require training, also.)

    • Probably, much of this will be optimized away. And part of it is inherent to general-purpose systems, which are, by definition, not optimized for a given task. After all, many operations that were once too inefficient for widespread use — full disk encryption, VPNs — are now widespread.
  • Without knowing what consciousness is, we won’t know whether AI has become conscious. We also don’t know whether it will matter whether AI is conscious. Which is scarier: conscious AGI, or AGI without consciousness?

  • AI makes me feel dumb.
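On the non-determinism point, here is a minimal sketch of why identical prompts can yield different output: sampling-based decoding draws from a probability distribution over next tokens. The tokens and probabilities below are invented for illustration; real models produce far larger distributions.

```python
# Why the same prompt can produce different output: a model emits a
# probability distribution over next tokens, and sampling-based decoding
# draws from it. Tokens and probabilities are made up for illustration.
import random

next_token_probs = {"Paris": 0.86, "Lyon": 0.09, "Marseille": 0.05}

def sample_token(probs: dict[str, float]) -> str:
    # random.choices draws proportionally to the weights, so repeated
    # calls on identical input can return different tokens.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

print([sample_token(next_token_probs) for _ in range(5)])
# e.g. ['Paris', 'Paris', 'Lyon', 'Paris', 'Paris']: same input, different
# outputs, and the wrong answers carry no surface-level tell.
```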
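And here is the back-of-the-envelope arithmetic behind the inefficiency complaint, using the post’s own figures (29.29 minutes, 45–110 W of GPU draw, ~20 W brain); the 10-minute human transcription time is my assumption, not the author’s.

```python
# Energy for the 4-page OCR job, from the figures quoted above.
minutes = 29.29

for watts in (45, 110):            # low and high estimates of GPU draw
    wh = watts * minutes / 60      # watt-hours = watts * hours
    print(f"GPU at {watts:>3} W: {wh:5.1f} Wh")

# A ~20 W human brain doing the same transcription in an assumed 10 minutes:
print(f"Human at ~20 W: {20 * 10 / 60:5.1f} Wh")
# Roughly 22-54 Wh for the model vs. ~3 Wh for the brain: several-fold to
# more than an order of magnitude, on these assumptions.
```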

Note: this post is part of #100DaysToOffload, a challenge to publish 100 posts in 365 days. These posts are generally shorter and less polished than our normal posts; expect typos and unfiltered thoughts! View more posts in this series.

The Top Hat Illusion

https://archive.org/details/B-001-014-611/page/n69/mode/2up

A striking oddity from Matthew Luckiesh’s Visual Illusions, 1922. The height of this silk hat appears much greater than its width, but the two are the same.

“A pole or a tree is generally appraised as of greater length when it is standing than when it lies on the ground. This illusion may be demonstrated by placing a black dot an inch or so above another on a white paper. Now, at right angles to the original dot place another at a horizontal distance which appears equal to the vertical distance of the first dot above the original. On turning the paper through ninety degrees or by actual measurement, the extent of the illusion will become apparent.”

TEACHER VOICE: We don’t have a math problem in Arkansas or in the United States. We have a culture problem

For 23 years, I’ve taught high school math. And for 23 years, I’ve been told by people that they either are a “math person” or they are not. 

I get it: Math isn’t easy. Movies and TV shows make it look effortless for a select few. But math is hard work. If you don’t do the work, and if you don’t have a teacher who can help you build the math skills you need, you may struggle with math. Then you might internalize these challenges into the idea that you’re not a “math person.”  

Research shows, however, that the idea of “math people” is a myth. In his book “How We Learn,” the neuroscientist Stanislas Dehaene refutes the notion that some brains are uniquely “wired” for math. He writes that all people have “the same initial brain structure, the same core knowledge, and the same learning algorithms” for reading, science and math. All people can learn to do math.  

Where people differ is their mindset. Some people have what Stanford professor Carol Dweck refers to as a “fixed mindset,” or a belief that intelligence or talent is set in stone. When they fail, they see it as proof they lack ability, so they often avoid challenges or give up easily. Other people have a “growth mindset,” or a belief that intelligence and ability can develop through effort, feedback and learning. People with this mindset view mistakes as part of the process. Challenges are chances to improve. The growth mindset is how most people approach a video game: you don’t know what you are getting into, you try your best, and if you fail, you know more and try again.

I teach geometry in Arkansas, and of all the tests the state administers, students perform most poorly on the geometry exam. My colleagues and I at Rogers High School — plus a bevy of research — are proving that this poor performance is not because some students cannot learn math.  

My four colleagues on the geometry team and I were able to support our students in exceeding their expected growth goals. We attained these results by believing that our students can do geometry and by getting them to believe the same.  

Stanford math professor Jo Boaler proved what’s possible with an innovative study that showed how an online course could change student ideas about learning mathematics and their own potential. 

More than 1,000 students from four schools took the course — and it shifted their ideas about whether intelligence is changeable. Boaler told Frontiers, a science news outlet, that targeting students’ beliefs about math “led to students feeling more positive about math, more engaged during math class, and scoring significantly higher in mathematics assessments.” 

While I work as hard as I can for all 178 days of the school year, helping students believe in their capability to do math, especially geometry, also requires support outside of the classroom.  

Parents, we need your help. This idea of some people having a “math brain” comes up often at parent-teacher conferences. Adults will say that they are “not good at math,” or are not a “math person,” which can have a negative effect on how their kids see their own capabilities.  

Parents, you can have a positive effect if you adjust how you talk about math, including your own struggles. Acknowledge the challenges you faced in school and what could have helped you view those challenges as opportunities. It is important for kids to hear their parents talk about working through problems instead of giving up. I was fortunate to have parents who owned a small business, because I got to witness them struggle through problems and find solutions.

Encourage your kids to develop a growth mindset. Talk about and teach the behaviors that can support your kids’ learning and growth. These include investing time in the work and engaging with teachers during class or tutoring to learn how to better understand mathematical concepts. Problem-solving is a learned skill, so point out how math shows up in daily life and that your kids often solve problems without even recognizing it.  

It is imperative that we show dramatic math improvement across the country. Trouble is on the horizon: the American workforce is projected to have an unmet need for over a million employees to fill STEM-related jobs by 2030. Yet student performance is lower today than it was before the pandemic. The National Assessment of Educational Progress, known as the Nation’s Report Card, reported that the achievement gap in 8th grade math last year was the largest in the history of the exam.

But again, we don’t have a math problem in Arkansas or in the United States. We have a culture problem in that math is viewed negatively and stereotypes abound. The good news is that we can fix it by addressing mindsets.  

As I say to my students every day, thank you for your time. 

Mark Bauer teaches math at Rogers High School in northwest Arkansas. 

This story about teaching math was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.

A Number with a Shadow

If an AI persona told you something true, would you know it?

The people who create social media platforms say they want a good portion of the content to land on the entertaining-to-informative axis. To me, the result reads more like mining our collective attention by turning the short-form video format into a slot machine that relentlessly targets our dopamine and fear receptors until we’re all a bunch of strung-out content junkies. But for the sake of argument, let’s assume these platforms are interested in more than just stacking money so high it disrupts air traffic.

We live in an age where AI-generated slop pours into your feed from every angle—news anchors, influencers, fitness coaches, body cam footage that looks and sounds real but isn’t. At this point we’ve all wrung our hands over AI-generated misinformation (and disinformation!), and rightly so, but what about just plain-old AI-generated information? I think it also deserves a fair bit of hand-wringing.

One of the goals of Riddance is to help people reliably converge on true beliefs without being coerced. And boy howdy, AI personas are not gonna help you get there. The capacity for one human to impart knowledge onto another human is a very subtle one. It happens only infrequently on social media from what I can tell, but it does happen.

I don’t think that AI personas, even when they say words that are true, can transfer knowledge to you. And so insofar as a piece of content purports to be informative (a thing these companies say they care about), if it’s delivered by an AI, you don’t walk away with knowledge.

Two examples of getting knowledge by testimony

Imagine your friend calls to say they’ve run out of gas on the highway and need a pickup. You believe them. Suppose it’s true their car is out of gas, and you’re justified in believing them—you know this person, they’ve never lied to you about something like this before, and you can hear the frustration over the phone. Congratulations! You’ve just gained a piece of knowledge by testimony. Now get in your car and go help them out.

Some things to notice about this exchange while you drive over there. Your friend is your friend; you know who they are and have background reasons to trust them. You can ask them follow-up questions (“Which highway?” “Why are you so dumb?”). And most important of all, there are conversational stakes: if you drove all the way out there and it turned out they were pranking you for a TikTok video, there would be consequences—wrath and reputational harm are some of the things that come to mind.

Now consider a different example that’s lightly paraphrased from a popular one in the testimony literature. Suppose a high school biology teacher goes down a few too many YouTube rabbit-holes and stops believing in evolution by natural selection. Bummer. Unfortunately for him, his textbook disagrees, and he has to teach the section on evolution despite not believing in evolution. Suppose he covers the material accurately and thoroughly and assigns the right readings. Do the students gain knowledge?

Most people’s intuition is yes, the students do learn. But in these cases our (oddly?) stoic biology teacher is functioning more like a conduit for knowledge than a source of knowledge, whereas the friend who needed gas seemed to be a source. Despite these two cases differing substantially, they have a lot in common that makes them both good candidates for knowledge by testimony.

  • There are conversational stakes for lying. The friend risks the friendship and social status; the teacher risks his job and professional reputation. These stakes create accountability.

  • The speaker is identifiable. You know who your friend is. Students know who the teacher is.

  • The listener has background reasons to treat them as reliable. Past interactions, institutional roles, and social context all play a part in generating trust.

  • Follow-up questions are possible. You can ask your friend if the gas light was on. Students can raise their hand in class.

How many of these properties do you think AI personas on social media have? If you answered none, you could stop reading now! But please don’t. In order to understand why certain AIs cannot testify knowledge, it’s necessary to bring some machinery from epistemology and the testimonial knowledge literature to the fore.

Lightning round: Epistemology and testimonial knowledge

Philosophers disagree about almost everything, including what exactly constitutes knowledge. However, epistemologists, the philosophers who study knowledge, manage some agreement on the subject before views diverge. Most agree that knowledge that p, where p is some proposition (e.g., “The US and Israel just started a war in Iran”), requires at least the following to all hold:

  • Belief: in order for someone to know that p, they must believe that p.

  • Truth: in order for someone to know that p, p must be true.

  • Justification: in order for someone to know that p, they must be justified in their belief that p.

This is often referred to as the justified true belief (JTB) account, and while it sounds good, the three conditions are widely agreed to be jointly insufficient for knowledge. So long as one doesn’t require absolute certainty (infallibility) for a belief to count as justified, there is always the possibility that something you believe is true and you are justified in believing it, but the reason it is true is unrelated to your justification. In such cases, your (fallible) justification turns out to be unwarranted. Situations like these are known as Gettier problems, named after Edmund Gettier, who first identified them as a general phenomenon in his delightfully short paper on the subject.
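As a toy illustration only (the names and fields below are placeholders I’ve chosen, not machinery from the epistemology literature), the JTB conditions amount to a three-way conjunction, and a Gettier case is one where the conjunction holds but knowledge intuitively fails:

```python
# Toy model of the JTB analysis. The predicates are stand-ins for
# whatever account of belief, truth, and justification one prefers.
from dataclasses import dataclass

@dataclass
class EpistemicState:
    believes: bool    # the agent believes that p
    p_is_true: bool   # p is in fact true
    justified: bool   # the agent's belief that p is justified

def knows_jtb(s: EpistemicState) -> bool:
    # JTB treats these as individually necessary; Gettier cases show
    # they are not jointly sufficient.
    return s.believes and s.p_is_true and s.justified

# Gettier-style case: all three hold, but the justification is
# disconnected from what makes p true. JTB still (wrongly) says "knows".
gettier = EpistemicState(believes=True, p_is_true=True, justified=True)
print(knows_jtb(gettier))  # True, which is exactly the problem
```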

Epistemologists who study testimony, used here as a term of art that roughly means a speaker asserting something to a listener, can be grouped in many ways, but the most common division is on the reductionist/non-reductionist spectrum.

  • Reductionists don’t think that testimony can be a fundamental source of knowledge. The idea is that the speaker is a reliable indicator but basically just another component of a broader justification pipeline. Their testimony rides on top of or bundles the other mechanisms one has for generating justified beliefs.

  • Non-reductionists think the reductionist view is too demanding, and so they treat testimony as its own basic source of justification, akin to visual perception or taste. And while this status as a source of justification is defeasible, in the normal case, knowledge as the norm of assertion is sufficient for one person to testify knowledge to another.

  • Hybrid views try to combine the best of both approaches described above. Some argue that testimony can be a basic type of justification, but only if it’s situated within a broader normative practice with shared expectations, sensitivity to defeaters, and sensitivity to error correction. On these views, in which testimony is still special, it really matters that it happens in an environment where testifiers have social and conversational stakes that curl back on them.

The examples from earlier, your friend who needs gas and the pilled teacher, reflect some of the fault lines in the testimony literature. But more importantly, the four qualities they have in common snap into focus. Justification might be described as a type of relationship between your belief in something and the truth of that belief. The more we know about what matters in that relationship the better.

Why AI personas cannot testify knowledge

Even if LLMs can transmit knowledge to a user in the right circumstances, AI avatars or personas are in a completely different boat. Interactions with chatbots, while far from perfect, actually do have a lot of the properties described above. You can push back on something they say, make a decision not to use them for certain tasks (counting the number of letters in certain fruits), and follow up with questions. What LLMs lack is stakes and accountability, whereas AI personas are lacking in all four departments. These public-facing personas don’t have any of the features we identified as common to successful instances of testifying knowledge.

  • No conversational stakes for lying. Reputational harm is the primary mechanism by which public personas are incentivized to speak truthfully and responsibly. Not only do AI personas not suffer reputational harm, there’s an incentive for them to be irresponsible and inflammatory insofar as that generates engagement.

  • The speaker is not identifiable. When an AI avatar can change its appearance with a prompt, there is no stable identity for the user to pick out. An AI avatar of a news anchor could be generated by two different models with no shared context between two videos, and the viewer might have no way of knowing.

  • There are no background reasons to trust AI personas. There is no background to trust other than facts about the account, but this is justification from an entirely different source and has nothing to do with the AI.

  • Follow-ups are impossible. Users can comment on a video and sometimes get responses from the content creators. Follow-ups from AI personas, if they happen, will almost certainly not come from the AI itself.

While public-facing content in general is less amenable to the various checks and balances that listeners impose on speakers when deciding whether to update their beliefs on what is said, public-facing AI personas supercharge all of these problems. Given their chameleon status, and the ease with which they migrate to new accounts, reputational checks do little to deter AI personas from lying. Given this abject failure to meet basic and broad standards for testimonial knowledge acquisition, it follows that even if AI personas or avatars on social media tell you something true and you believe them, you don’t get knowledge, because your belief is not justified.¹

Implications for AI on social media

Even before AI, social media was relatively hostile to knowledge transfer via testimony. Many of the design features central to any short-form video platform seem custom-built to flout the requirements for testimonial knowledge outlined above. Short-form videos are hard to search over, hard to verify, have low stakes for lying, and present little follow-up opportunity for the user. Adding AI to the mix is just a six-fingered slap in the face to anyone hoping to reliably update their worldview from interacting with these content sources.

Even more unfortunate, the measures the platforms do have in place are basically optimized to prevent real people from pretending to be something or someone they aren’t. Many AI personas exist for the sole purpose of pretending to be someone or something they aren’t. Furthermore, the calculus for pushing AI personas on these platforms is so different that the moderation policies are much less effective against AI. If I’m making public-facing AI content for engagement (whether that’s an influencer, a reaction video, a fake arrest, or an AI anchor), I care much less that an account gets actioned than I do if I’m a real person putting in effort to post by hand. I can price the moderation actions into my operation (assume 60% get axed; how much AI slop do I have to make?) and go from there, as the sketch below works out.
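A minimal sketch of that pricing-in arithmetic, with the 60% removal rate taken from the paragraph above and the survival target invented for illustration:

```python
# Pricing moderation into a content operation: if a known fraction of
# posts gets actioned, scale production so enough survive in expectation.
import math

def posts_needed(target_surviving: int, removal_rate: float) -> int:
    # Produce target / (1 - removal_rate) items so that, in expectation,
    # `target_surviving` of them stay up after moderation.
    return math.ceil(target_surviving / (1 - removal_rate))

print(posts_needed(100, 0.60))  # 250: post 250, expect ~100 to survive
```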

Until social media platforms completely demonetize synthetic AI content and increase capacity for labelling synthetic content as such, it is unlikely this problem will go away. Insofar as social media platforms are places people go for knowledge and not just entertainment, AI content on these platforms is parasitic on that purpose.

¹ Interestingly, if the AI avatar is so good you don’t know that it’s AI, and if the account is well-managed to say things that are true, and you believe what it says, we get back into Gettier territory.
