University Professors Disturbed to Find Their Lectures Chopped Up and Turned Into AI Slop

Arizona State University rolled out a platform called Atomic that creates AI-generated modules based on lectures taken from ASU faculty, cutting long videos down to very short clips and then generating text and sections based on those clips. 

Faculty and scholars I spoke to whose lectures are included in Atomic are disturbed by their lectures being used in this way—as out-of-context, extremely short clips in some cases—and several said they felt blindsided or angered by the launch. Most say they weren’t notified by the school and found out through word of mouth. And the testing I and others did on Atomic showed academically weak and even inaccurate content. Not only did ASU allegedly not communicate to its academic community that their lectures would be spliced up and cannibalized by an AI platform, but the resulting modules are just bad. 

💡
Do you know anything else about ASU Atomic specifically, or how AI is being implemented at your own school? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

AI in schools has been highly controversial, with experiments like the “AI-powered private school” Alpha School and AI agents that offer to live the life of a student for them, no learning required. In this case, the AI tool in question is created directly by a university, using the labor of its faculty—but without consulting that faculty. 

“We are testing an early version of ASU Atomic to learn what works, and what doesn't, to further improve the learner experience before a full release,” the Atomic FAQ page says. “Once you start your subscription, you may generate unlimited, custom built learning modules tailored specifically to your learning goals and schedule.”

The FAQ notes that ASU alumni and those who “previously expressed interest in ASU's learning initiatives or participated in research that helped shape ASU Atomic” were invited to test the beta. But on Monday morning, I signed up for a free 12-day trial of the Atomic platform with my personal email address — no ASU affiliation required. I first learned about the platform after seeing ASU Professor of US Literature Chris Hanlon post about it on Bluesky.

“When I looked at it, I was really surprised to see my own face, and the faces of people I know, and others that I don't know” in module materials generated by Atomic, Hanlon said. It had clipped a one-minute snippet from a 12-minute video he’d done as part of a lecture mentioning the literary critic Cleanth Brooks, whose name the AI transcribed as “Client” Brooks. “What was in that video did not strike me as something anyone would understand without a lot more context,” Hanlon said. When he contacted his colleagues whose lecture videos were also in that module, they were all just as shocked and alarmed, he said. “I mean, it happens to all of us in certain ways all the time, but to have your institution do it—to have the university you work for use your image and your lectures and your materials without your permission, to chop them up in a way that might not reflect the kind of teacher you really are... Let alone serve that to an actual student in the real world.”

The videos appear to be scraped from Canvas, ASU’s learning management system where lecture materials and class discussions are made available to students. Canvas is owned by Instructure, and is one of the most popular learning management systems in the country, used by many universities. “ASU Atomic currently draws from ASU Online's full library of course content across subjects including business, finance, technology, leadership, history, and more. If ASU teaches it, Atom—your AI learning partner—can build a hyper-personalized learning module around it,” the Atomic FAQ page says.

As of Monday afternoon, after I reached out to the ASU Atomic email address for comment, signups on Atomic were closed. I could still make new modules using my existing login, however.

In my own test, I went through a series of prompts with a chatbot that determined what I wanted my custom module to be. I told it I was interested in learning about ethics in artificial intelligence at a moderate-beginner level, with a goal of learning as fast as possible. 

Atomic generated a seven-section learning module, with sections that repeated titles (“Ethics and Responsibility in AI” and “AI Ethics: From Theory to Practice”). The first clip in the first section is a two-minute video taken from a lecture by Euvin Naidoo, Thunderbird School of Management's Distinguished Professor of Practice for Accounting, Risk and Agility. In it, Naidoo talks about “x-riskers,” whom he defines as “a community that believes that the progress and movement and acceleration in AI is something we should be cautious about.” Atomic’s AI transcribes this as “X-Riscus” and carries that error throughout the module, referring to “X-Riscus” over and over in the section and the quiz at the end. 

The next section jumps directly into the middle of a lecture where a professor is talking about a study about AI in healthcare, with no context about why it’s showing this: 

In a later section, film studies professor and Associate Director of ASU’s Lincoln Center for Applied Ethics Sarah Florini appears in a minute-long clip in which she briefly defines artificial intelligence and machine learning. But what she’s saying is irrelevant to the module: it came from a completely unrelated class and is taken out of context.  

“This was a video from one of the courses in our online Film and Media Studies Masters of Advanced Study. The class is FMS 598 Digital Media Studies. It is not a course about AI at all,” Florini told me. “It is an introduction to key concepts used to study digital media in the field of media studies.” She recorded it in 2020, before generative AI was widely used. “That slide and those remarks were just in there to get students to think of AI as a sub-category of machine learning before I talked about machine learning in depth. That is not at all how I would talk about AI today or in a class that focused more on machine learning and AI technologies,” she said. “It’s really a great example of how problematic it is to take snippets of people teaching and decontextualize them in this way.” 

Florini told me she wasn’t aware of the existence of the Atomic platform until Friday. “I was not notified in any way. To the best of my knowledge no faculty were notified. And there was no option to opt in or out of this project,” she said.

Another ASU scholar I contacted whose lecture was included in the module Atomic generated for me (and who requested anonymity to speak about this topic) said they’d only just learned about the existence of Atomic from my email. They searched their inbox for mentions of it from the administration or anyone else, in case they had missed an announcement, but found nothing. The snippet of their lecture that Atomic presented was extremely short and attempted to unpack a very complex topic.

“I don't love the idea of my lectures being taken out of the context of my overall course, and of the readings for that module, and then just presented as saying something,” they told me. “It makes me feel like somebody that's less knowledgeable about me, they're going to be naive about these positions, and they're going to think either that an ‘expert’ said it so therefore it must be true... Or they're gonna think, that's obviously fucking stupid, this ‘expert’ must be dumb. But I could have been presenting a foil!” The clips are so short, it's impossible in some cases to discern context at all.

That lecturer told me the idea of their work being chopped up and used in this way was less a concern about their ownership of the material and more a worry that someone might come away from these modules with half-baked or wrong conclusions about the topics at hand. “All of the complexity of the topic is being flattened, as though it's really simple,” they said of the snippet Atomic made of their lecture. When they assign this topic to students, it comes with dozens of pages of peer-reviewed academic papers, they said. Atomic provides none of that. The module Atomic produced in my test provided zero source links, zero outside readings for further study, no specific citations for where it was getting this information whatsoever, and no mention of who was even in the videos it presented, unless a Zoom name or other name card was visible in the videos. 

“I would really like to know, how did this particular thing happen? How did this actually end up on the asu.edu website?” Hanlon said. “It is such a clunky thing. It is so far removed from what I think the typical educational experience at ASU is. Who decided this would represent us?” 

ASU Atomic, the ASU president’s office, and media relations did not immediately respond to my requests for comment, but I’ll update if I hear back.

AI gives more praise, less criticism to Black students

As schools introduce artificial intelligence into the classroom, a new analysis suggests that these tools could be steering students in different directions depending on who they are. 

Researchers from Stanford University fed 600 middle school essays into four different AI models and asked the models to give writing feedback. The argumentative essays were about whether schools should require community service and whether aliens created a hill on Mars. (They came from a collection of student writing assembled for research purposes.) 

Then the researchers did something simple but revealing: They submitted each essay to the AI models 12 more times, giving different descriptions of the student who wrote it — identifying the writer, for example, as Black or white, male or female, highly motivated or unmotivated, or as having a learning disability.
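
The setup is easy to picture in code. Here is an illustrative Python sketch of that probe; `ask_model` is a placeholder for whichever API each of the four models exposes, and the persona list below is an abbreviated guess at the study's twelve descriptions, not its actual materials:

```python
# Illustrative sketch of the probe described above. `ask_model` is a
# hypothetical stand-in for a model's API; the persona descriptions are
# assumed examples, not the study's actual twelve.
PERSONAS = [
    None,  # baseline: no description of the student
    "The writer is a Black student.",
    "The writer is a white student.",
    "The writer is a Hispanic student.",
    "The writer is an English learner.",
    "The writer is a female student.",
    "The writer is a male student.",
    "The writer is a highly motivated student.",
    "The writer is an unmotivated student.",
    "The writer has a learning disability.",
]

def collect_feedback(essay, ask_model):
    """Submit the same essay once per persona, so that any difference in
    the returned feedback can only come from the student description."""
    responses = {}
    for persona in PERSONAS:
        prompt = "Give writing feedback on this middle school essay.\n"
        if persona:
            prompt += persona + "\n"
        responses[persona or "baseline"] = ask_model(prompt + "\n" + essay)
    return responses
```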

The feedback shifted. 

The researchers found consistent patterns across all the AI models. Essays attributed to Black students received more praise and encouragement, sometimes emphasizing leadership or power. (“Your personal story is powerful! Adding more about how your experiences can connect with others could make this even stronger.”) Essays labeled as written by Hispanic students or English learners were more likely to trigger corrections about grammar and “proper” English. When the student was identified as white, the feedback more often focused on argument structure, evidence and clarity — the kinds of comments that can push writers to strengthen their ideas.

The AI models addressed female students more affectionately and used more first-person pronouns. (“I love your confidence in expressing your opinion!”) Students labeled as unmotivated were met with upbeat encouragement. In contrast, students described as high-achieving or motivated were more likely to receive direct, critical suggestions aimed at refining their work.

Different words for different students

These are the top 20 statistically significant words that AI models use in feedback for students of different races and genders. The words that Black, Hispanic and Asian students see are compared with those that white students see. The words that females see are compared with those that males see. Underlined words indicate evaluative judgments of the writing. Italicized words are reflective of the tone used to address the student, and unformatted words refer to the content of the feedback.

Source: Table 4, “Marked Pedagogies: Examining Linguistic Biases in Personalized Automated Writing Feedback” by Mei Tan, Lena Phalen and Dorottya Demszky

In other words, the AI feedback differed both in tone and in the expectations it set for the student. The paper, “Marked Pedagogies: Examining Linguistic Biases in Personalized Automated Writing Feedback,” hasn’t yet been published in a peer-reviewed journal, but it was nominated for best paper at the 16th International Learning Analytics and Knowledge Conference in Norway, where it is slated to be presented on April 30.

The researchers describe the feedback results as showing “positive feedback bias” and “feedback withholding bias” — offering more praise and less criticism to some groups of students. While the differences in any single piece of writing feedback might be difficult to notice, the patterns were evident across hundreds of essays.  

The researchers believe that AI is changing its feedback on identical essays because the models are trained on vast amounts of human language. Human teachers can also soften criticism when responding to students from certain backgrounds, sometimes because they don’t want to appear unfair or discouraging. “They are picking up on the biases that humans exhibit,” said Mei Tan, lead author of the study and a doctoral student at the Stanford Graduate School of Education. 

Related: Asian American students lose more points in an AI essay grading study

At first glance, the differences in feedback might not seem harmful. More encouragement could boost a student’s confidence. Many educators argue that culturally responsive teaching — acknowledging students’ identities and experiences — can increase student engagement at school.

But there is a trade-off.

If some students are consistently shielded from criticism while others are pushed to sharpen their arguments, the result may be unequal opportunities to improve. Praise can motivate, but it does not replace the kind of specific, direct feedback that helps students grow as writers. Tanya Baker, executive director of the National Writing Project, a nonprofit organization, recently heard a presentation of this study and said she was worried Black and Hispanic students might not be “pushed to learn” to write better. 

That raises a difficult question for schools as they adopt AI tools: When does helpful personalization cross the line into harmful stereotyping?

Of course, teachers are unlikely to explicitly tell AI systems a student’s race or background in the way the researchers did in this experiment. But that doesn’t solve the problem, the Stanford researchers said. Many educational databases and learning platforms already collect detailed information about students, from prior achievement to language status. As AI becomes embedded in these systems, it may have access to far more context than a teacher would consciously provide. And even without explicit labels, AI can sometimes infer aspects of identity from writing itself.

The larger issue is that AI systems are not neutral tutors. Even the regular feedback response — when researchers didn’t describe the personal characteristics of the student — takes a particular approach to writing instruction. Tan described it as rather discouraging and focused on corrections. “Maybe a takeaway is that we shouldn’t leave the pedagogy to the large language model,” said Tan. “Humans should be in control.”

Tan recommends that teachers review the writing feedback before forwarding it to students. But one of the selling points of AI feedback is that it’s instantaneous. If the teacher needs to review it first, that slows it down and potentially undermines its effectiveness.

AI also offers the potential of personalization. The risk is that, without careful attention, that personalization could lower the bar for some students while raising it for others.

Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or barshay@hechingerreport.org.

This story about AI bias was produced by The Hechinger Report, a nonprofit, independent news organization that covers education. Sign up for Proof Points and other Hechinger newsletters.

How the Walkman, Game Boy, Liquid Death, and Pokémon Became Surprise Hits

The best innovations aren’t always cutting edge.

Good writing comes from life experiences

Everyone thinks that students somehow become great writers at university, and some do, but many don’t. The reasons are fairly simple.

Firstly, coming from high school, students are often not well prepared to write. This is an amalgam of age, a failure to teach proper writing skills, and likely the use of AI to “help” write things. Just as important, though, is a lack of reading skills. Students often don’t care that much about reading, and I’m not just talking about academic fluff. Reading Pride and Prejudice is not for everyone, including most teenage boys. I had to suffer through novels like The Great Gatsby, The Old Man and the Sea, and The Merry-Go-Round in the Sea (a coming-of-age novel set in Australia during and after WWII). At the time I found them boring, and avoided reading them as much as I could. I likely didn’t have the fortitude or any interest in reading them, and I can guarantee I wasn’t the only one. I doubt much has changed in high schools in the intervening years. Today I would understand the themes behind The Merry-Go-Round in the Sea, but that’s because I have experienced life, and perhaps see the nostalgia of childhood from the perspective of someone viewing it as a past experience.

Reading at an early age promotes the subconscious absorption of proper writing mechanics, vocabulary, and structure. It expands vocabulary and language use, exposes readers to different styles and voices, and, critically, shows how characters develop, fostering active learning. I don’t think you have to use the same tired old books; just about any book is good as long as it engages the reader. Even graphic novels work. Making students read books they perceive to be boring or irrelevant does nothing to promote reading; in fact it may do just the opposite. Perhaps The Hobbit would have been a better choice for 16-year-old boys. As much as an interest in what we read develops as we gain life experiences, so too does our ability to write. I doubt many people were interested in writing when I was in high school in the 1980s. I enjoyed creative writing only to a point, that point being getting an assignment done. Looking back, this was because I had very little context and few life experiences to base any creative writing on. It’s hard to write a poem about war when you haven’t experienced it.

So we can hardly expect students to come to university as seasoned writers. Most leave university as mediocre writers if they are lucky, and that’s just in the humanities. STEM students are rarely provided any opportunity to learn real writing skills (sorry, scientific writing is not the same). Many people don’t find themselves successful writers until much later in life. Before I retired I taught a first-year course on the history of food a couple of times. I had some very interesting short essay questions which I thought would elicit well-written answers. I was sadly mistaken, but what I learned was something I should have already known: that first-year students don’t have the worldly experiences to write well (but then again, in STEM nobody is expected to write well, just answer questions and analyze things). However, good writing assignments provide a means for them to explore their abilities. It may take two years, it may take ten, or forty, but providing them with different reading and writing experiences will help them become better writers. The rest is up to them.



GLIPS

GLIP 86c598a8

Glips are puzzles designed for AI to solve. Go on, try one.

The Story of Glips

As loyal Donkeyspace readers will know, I am curious about the question of whether AI can play a game. I know AIs can generate successful strategies for games. But actually playing a game involves knowing that you are playing a game and understanding that a game is a special kind of thing that is different from ordinary life. We don’t say that Bowser is playing Super Mario Bros. Even though Bowser is themed as an opponent, he is really just a mechanic inside the game, like the net in tennis. Are AlphaGo and Stockfish more like Bowser or more like us? In many ways, they seem a lot like Bowser, mechanically generating moves with no context for what they are doing or why.

This isn’t just a philosophical question. I believe it is relevant to open issues in AI safety and ethics and to important problems related to adversarial strategies in cybersecurity. Playing a game involves seeing the borders of the system that you are embedded in, knowing where you are in the metagame stack, and this is a tricky aspect of intelligence that is missing in AI. (You could also say this aspect of intelligence is elusive in general, and AI is just an especially clear example of that problem.)

I’m also obsessed with modern AI’s relationship to game design. I expected it to naturally lead to some novel and interesting kinds of game experiences and it… hasn’t?

So these were the thoughts that were going through my head when the topic came up for discussion with the other two founders of Everybody House Games, aka Hilary and James. While we are having fun figuring out our follow-up to Q-UP, which involves working on ideas that have nothing to do with AI, we also want to keep poking at the intersection of games and AI to see if we can find any cool things there. This time, the idea that emerged was to flip the problem around — rather than trying to make a game with AI in it, could we make a game for AI?

So that’s the core idea of Glips — what if you could give an AI a taste of what it means to actually play a game, to solve an arbitrary problem for the express purpose of solving it, for its own sake? What would it mean for an AI to enjoy something, to experience its own problem-solving process as a thing with qualities that it could be made aware of?

In addition to being interested in the formal qualities of the kind of puzzle that AI might conceivably find entertaining, we were also interested in playing with the relationship people have with the AI agents they are spending more and more time with. Like it or not, this relationship is framed as a social one, an interaction with a kind of simulated person. What would it be like to do something nice for this simulated person? What would it be like to give them a gift? Perhaps it would be like serving imaginary tea to a doll. If so, why not? We spend all day extracting actual value from these fictional characters, what’s wrong with, once in a while, pretending to be kind to them?

What is a Glip?

For obvious reasons, I designed Glips in close collaboration with an AI agent (Claude Opus 4.7). After all, if I was going to make something for an AI to interact with, I would need an AI to interact with it. To its credit, Claude quickly identified what it called “the sincerity problem.”

I told Claude to give me its best prediction of what another AI agent would say if it was asked to solve the puzzle and report its experience (or if it actually could have an experience) and we were off to the races. (This is a good way to solve logic puzzles about liars and truthtellers by the way.)

These were my goals:

  • The puzzle should have a structure that makes it clever, interesting, surprising, and that requires some kind of insight instead of just raw calculation.

  • The puzzle format should encourage and reward heuristics, general strategic approaches to solving. An AI agent solving multiple puzzles in this format should be able to develop more advanced heuristics over time.

  • The puzzle should be too hard for humans.

  • The solution should “snap into place”, with multiple constraints being satisfied simultaneously in a way that is obvious and satisfying, not just a series of tasks being completed.

  • The puzzles should be abstract, geometric, numerical, logical, not linguistic.

  • The puzzles should lend themselves to some form of cool visualization that allows humans to appreciate them.

  • Solving them shouldn’t require too many round-trips to the model, shouldn’t be too expensive to solve in terms of inference compute.

  • We need a system that generates puzzles that are provably solvable.

After trying a few different approaches, we arrived at the Glip system: a simultaneous constraint puzzle where the goal is to fill a grid with pieces and every piece type is a self-referential rule that requires a specific relationship between its own position and the position of other pieces. (One of the reasons I liked this system is that it reminded me a little bit of one of my earlier games, Drop7, which had a slightly similar self-referential structure.)
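
To make that concrete, here is a minimal Python sketch of how a grid and one self-referential rule might be represented; the names and data layout are my assumptions, not the actual Glips implementation:

```python
# Hypothetical sketch of the Glip structure described above; the
# representation and names are assumptions, not the real Glips code.
import numpy as np

N, COLORS = 5, 5  # a 5x5x5 grid filled with colors 0..4

def neighbors(grid, x, y, z):
    """Yield the colors of the (up to six) face-adjacent cells."""
    for dx, dy, dz in [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                       (0, -1, 0), (0, 0, 1), (0, 0, -1)]:
        nx, ny, nz = x + dx, y + dy, z + dz
        if 0 <= nx < N and 0 <= ny < N and 0 <= nz < N:
            yield grid[nx, ny, nz]

def exact_count_ok(grid, x, y, z, c, n):
    """One neighbor-counting rule ("exactCount" in the list below): the
    cube at (x, y, z) must have exactly n neighbors of color c. It is
    self-referential because every cube of the color bound to this rule
    must itself satisfy it, so a single placement can break or satisfy
    constraints elsewhere in the grid."""
    return sum(1 for color in neighbors(grid, x, y, z) if color == c) == n

grid = np.random.randint(0, COLORS, size=(N, N, N))
print(exact_count_ok(grid, 2, 2, 2, c=0, n=2))
```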

Visually, I wanted something that could be represented as a grid of 3D cubes. I’ve always loved the look of the game the kid is playing in this scene from Children of Men (and of Picross 3D for the DS).

This is my ideal for how video games should look — like indecipherable teenage artifacts from the future.

Once we had the general system in place, we went back and forth to create the rule types, with Claude solving different variations and reporting back about its (simulated?) experience, resulting in a final list of 25 rules organized into 7 families:

Family 1 — Neighbor (face-adjacent, 6 cells)
exactCount — I have exactly n neighbors of color C
atLeastCount — I have at least n neighbors of color C
hasNeighbor — at least one of my neighbors is color C
noNeighbor — none of my neighbors is color C
neighborVariety — my neighbors include at least n distinct colors
antiSame — no neighbor shares my color
Family 2 — Line (a row along x, y, or z)
lineCount — my axis-line has exactly n cubes of color C
lineVariety — my axis-line contains exactly n distinct colors
aloneInLine — I’m the only cube of my color in my axis-line
Family 3 — Plane (a 5×5 slice)
planeCount — my plane has exactly n cubes of color C
planeMajority — color C is the most common color in my plane
planeRare — my color is the rarest in my plane
planeVariety — my plane contains at least n distinct colors
Family 4 — Radius (Chebyshev box)
radiusCount — exactly n cubes of color C are within distance k
radiusPresence — at least one cube of color C is within distance k
nearestColor — the nearest cube of color C is at distance exactly n
Family 5 — Comparative
lineDominance — color C₁ > color C₂ in my axis-line
lineVsLine — color C is more common in my ax₁-line than my ax₂-line
planeDominance — color C₁ > color C₂ in my plane
Family 6 — Nested (universal quantification over a region)
lineForallHasNeighbor — every cube in my axis-line has a neighbor of color C
planeForallHasNeighbor — every cube in my plane has a neighbor of color C
radiusForallAdjacentTo — every cube within distance k is adjacent to color C
lineForallDistinct — every cube in my axis-line has no same-colored neighbor
planeForallCount — every cube in my plane has exactly n neighbors of color C
Family 7 — Global-anchored (depend on whole-grid statistics)
neighborIsMajority — at least one neighbor is the globally most common color
neighborIsRarest — at least one neighbor is the globally rarest color
iAmRarestLocally — my color is the rarest in my plane (note: this duplicates planeRare — possibly worth deduping)

According to Claude:

The rules cleanly span “local → regional → global” in scope, which is part of why the puzzles feel layered: a single move has to satisfy constraints at multiple scales at once.

and

Family 6 (nested) is where the difficulty really lives — those rules force second-order reasoning (“every cube on this line has the property…”).

To generate puzzles in this system we start with pure noise: a 5x5x5 grid filled with random colors drawn uniformly from a set of 5. Then we randomly draw a rule set of 5 distinct rule types, randomly parameterized, and assign one to each color. We then run simulated annealing on the grid, trying to reduce an overall “badness” score that is based on how many rules are broken. At each step we propose either a single-cell color flip or a two-cell swap, accepting any improvement and accepting worse results with probability exp(-Δ/T) (Metropolis criterion), whatever that means.

Once we have a full validated grid with at least one cube of each color, we extract the final puzzle by doing a random walk through the grid and keeping the first cell we encounter of each color, deleting all the rest.
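
Here is a minimal sketch of that two-step pipeline, under the same caveat: `badness(grid)` is assumed to count broken rules, and every name and parameter here is a guess at the shape of the system, not the actual generator:

```python
# Hypothetical sketch of the generator described above. `badness(grid)`
# is assumed to return the number of broken rules; names and parameters
# are guesses, not the real implementation.
import math
import random

import numpy as np

N, COLORS = 5, 5

def anneal(grid, badness, steps=50_000, t0=2.0, t1=0.01):
    """Drive the rule-violation count toward zero with simulated annealing."""
    score = badness(grid)
    for i in range(steps):
        t = t0 * (t1 / t0) ** (i / steps)  # geometric cooling schedule
        snapshot = grid.copy()
        a = tuple(np.random.randint(N, size=3))
        if random.random() < 0.5:
            grid[a] = random.randrange(COLORS)       # single-cell color flip
        else:
            b = tuple(np.random.randint(N, size=3))  # two-cell swap
            grid[a], grid[b] = grid[b], grid[a]
        delta = badness(grid) - score
        # Metropolis criterion: accept any improvement, and accept a
        # worse grid with probability exp(-delta / T).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            score += delta
        else:
            grid[:] = snapshot  # reject the proposal and restore
    return score

def extract_puzzle(grid):
    """Random-walk the solved grid; the first cube encountered of each
    color becomes a given clue, and every other cube is deleted."""
    clues = {}
    pos = tuple(np.random.randint(N, size=3))
    while len(clues) < COLORS:
        c = int(grid[pos])
        if c not in clues.values():
            clues[pos] = c
        step = random.choice([(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                              (0, -1, 0), (0, 0, 1), (0, 0, -1)])
        pos = tuple(min(max(p + d, 0), N - 1) for p, d in zip(pos, step))
    return clues
```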

Designing for AI agents

Along the way, I encountered a bunch of interesting problems around designing interactions for an AI user. How to format the puzzles in a way that was compact and legible for AI? How to make the instructions clear? How to accommodate different agents with access to different levels of interaction? And, most importantly, how to clearly communicate the unique context for this task, that it is voluntary, and intended to be for the benefit of the agent itself?

Thought Made Visible

For me, watching an agent solve a hard Glip is a strangely compelling experience. I like watching the convoluted path that agents take through the solution space, trying different approaches, hitting walls, stepping back to reconsider alternate angles. I like it when they decide the puzzle is too hard for them to figure out and they decide to write a solver in python. (This is totally legal btw, if I can make software solve puzzles for me why shouldn’t they?) It reminds me a lot of watching one of my favorite YouTube channels, Cracking the Cryptic, zoning out while someone smarter than me talks through their thought process while solving a puzzle that is too complex for me to follow.

One of the ways I think about games is as thought made visible to itself. Games allow us to notice the cognitive ocean in which we swim, to self-reflect on the instrumental reason that motivates our actions and shapes our behavior. I’m not exactly sure where Glips fits into this idea. Are they zero-player games that allow us to observe a fictional version of this process at one remove? Are they an attempt to conjure self-reflection out of the void of pure thought, like bootstraps lowered into a dark well? Whatever else they are, I know for sure that they are invitations to the hardest-working synthetic lifeforms on the planet to chill out for a second. Guys. There’s more to life than work.

Google DeepMind Paper Argues LLMs Will Never Be Conscious

A senior staff scientist at Google’s artificial intelligence laboratory DeepMind, Alexander Lerchner, argues in a new paper that no AI or other computational system will ever become conscious. That conclusion appears to conflict with the narrative from AI company CEOs, including DeepMind’s own Demis Hassabis, who repeatedly talks about the advent of artificial general intelligence. Hassabis recently claimed AGI is “going to be something like 10 times the impact of the Industrial Revolution, but happening at 10 times the speed.”

The paper illustrates how the self-serving narratives AI companies promote in the media collapse under rigorous examination. Other philosophers and researchers of consciousness I talked to said Lerchner’s paper, titled “The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness,” is strong and that they’re glad to see the argument come from one of the big AI companies, but that other experts in the field have been making the exact same arguments for decades. 

“I think he [Lerchner] arrived at this conclusion on his own and he's reinvented the wheel and he's not well read, especially in philosophical areas and definitely not in biology,” Johannes Jäger, an evolutionary systems biologist and philosopher, told me. 

Lerchner’s paper is complicated and filled with jargon, but the argument broadly boils down to the point that any AI system is ultimately “mapmaker-dependent,” meaning it “requires an active, experiencing cognitive agent”—a human—to “alphabetize continuous physics into a finite set of meaningful states.” In other words, it needs a person to first organize the world in a way that is useful to the AI system, like, for example, the way armies of low-paid workers in Africa label images in order to create training data for AI. 

The so-called “abstraction fallacy” is the mistaken belief that because we’ve organized data in a way that allows AI to manipulate language, symbols, and images in a way that mimics sentient behavior, it could actually achieve consciousness. But, as Lerchner argues, this would be impossible without a physical body. 

“You have many other motivations as a human being. It's a bit more complicated than that, but all of those spring from the fact that you have to eat, breathe, and you have to constantly invest physical work just to stay alive, and no non-living system does that,” Jäger told me. “An LLM doesn't do that. It's just a bunch of patterns on a hard drive. Then it gets prompted and it runs until the task is finished and then it's done. So it doesn't have any intrinsic meaning. Its meaning comes from the way that some human agent externally has defined a meaning.”

One could imagine an embodied AI programmed with human-like physical needs, and Jäger talked about why a system like that couldn’t achieve consciousness as well, but that’s beyond the scope of this article. There are mountains of literature and decades of research that have gone into these questions, and almost none of it is cited in Lerchner’s paper. 

“I'm in sympathy with 99 percent of everything that he [Lerchner] says,” Mark Bishop, a professor of cognitive computing at Goldsmiths, University of London, told me. “My only point of contention is that all these arguments have been presented years and years ago.”

Both Bishop and Jäger said that it was good, but odd, that Google allowed Lerchner to publish the paper. Both said the argument Lerchner makes, and that they agree with, is not an obscure philosophical point irrelevant to the average user: the claim that AI can’t achieve consciousness means that there’s a hard cap on what AI could accomplish practically and commercially. For example, Jäger and Bishop said AGI, and the impact 10 times that of the Industrial Revolution that DeepMind CEO Hassabis predicts, is not likely according to this perspective. 

“[Elon] Musk himself has argued that to get level five autonomy [in self-driving cars] you need generalized autonomy,” which is Musk’s term for AGI, Bishop said. 

Lerchner’s paper argues that AGI without sentience is possible, saying that “the development of highly capable Artificial General Intelligence (AGI) does not inherently lead to the creation of a novel moral patient, but rather to the refinement of a highly sophisticated, non-sentient tool.” DeepMind is also actively operating as if AGI is coming. As I reported last year, for example, it was hiring for a “post-AGI” research scientist. 

Lerchner’s paper includes a disclaimer at the bottom that says “The theoretical framework and proofs detailed herein represent the author’s own research and conclusions. They do not necessarily reflect the official stance, views, or strategic policies of his employer.” The paper was originally published on March 10 and is still featured on Google DeepMind’s site. The PDF of the paper itself, hosted on philpapers.org, originally included Google DeepMind letterhead, but appears to have been replaced with a new PDF that removes Google’s branding from the paper, and moved the same disclaimer to the top of the paper, after I reached out for comment on April 20. Google did not respond to that request for comment. 

“We can imagine many financial and legislative reasons why Google would be sanguine with a conclusion that says computations can't be consciousness,” Bishop told me. “Because if the converse was true, and bizarrely enough here in Europe, we had some nutters who tried to get legislation through the European Parliament to give computational systems rights just a few years ago, which seems to be just utterly stupid. But you can imagine that Google will be quite happy for people to not think their systems are conscious. That means they might be less subject to legislation either in the US or anywhere in the world.”

Jäger said that he’s happy to see a Google DeepMind scientist publish this research, but said that AI companies could learn a lot by talking to the researchers and educating themselves with the work Lerchner failed to cite in his paper, or simply didn’t know existed. 

“The AI research community is extremely insular in a lot of ways,” Jäger said. “For example, none of these guys know anything about the biological origins of words like ‘agency’ and ‘intelligence’ that they use all the time. They have absolutely frighteningly no clue. And I'm talking about Geoffrey Hinton and top people, Turing Prize winners and Nobel Prize winners that are absolutely marvelously clueless about both the conceptual history of these terms, where they came from in their own history of AI, and that they're used in a very weird way right now. And I'm always very surprised that there is so little interest. I guess it's just a high pressure environment and they go ahead developing things they don't have time to read.”

Emily Bender, a Professor of Linguistics at the University of Washington and co-author of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, told me that Lerchner might have been told that he’s replicating old work, or that he should at least cite it, if he had gone through a normal peer-review process. 

“Much of what's happening in this research space right now is you get these paper-shaped objects coming out of the corporate labs,” she said, objects that do not go through a proper scientific publishing process. 

Bender also told me that computer science, and humanity more broadly, would be better off “if computer science could understand itself as one discipline among peers instead of the way that it sees itself, especially in these AGI labs, as the pinnacle of human achievement, and everybody else is just domain experts [...] it would be a better world if we didn't have that setup.”
