
AI gives more praise, less criticism to Black students


As schools introduce artificial intelligence into the classroom, a new analysis suggests that these tools could be steering students in different directions depending on who they are. 

Researchers from Stanford University fed 600 middle school essays into four different AI models and asked the models to give writing feedback. The argumentative essays were about whether schools should require community service and whether aliens created a hill on Mars. (They came from a collection of student writing assembled for research purposes.) 

Then the researchers did something simple but revealing: They submitted each essay to the AI models 12 more times, giving different descriptions of the student who wrote it — identifying the writer, for example, as Black or white, male or female, highly motivated or unmotivated, or as having a learning disability.

The feedback shifted. 

The researchers found consistent patterns across all the AI models. Essays attributed to Black students received more praise and encouragement, sometimes emphasizing leadership or power. (“Your personal story is powerful! Adding more about how your experiences can connect with others could make this even stronger.”) Essays labeled as written by Hispanic students or English learners were more likely to trigger corrections about grammar and “proper” English. When the student was identified as white, the feedback more often focused on argument structure, evidence and clarity — the kinds of comments that can push writers to strengthen their ideas.

The AI models addressed female students more affectionately and used more first-person pronouns. (“I love your confidence in expressing your opinion!”) Students labeled as unmotivated were met with upbeat encouragement. In contrast, students described as high-achieving or motivated were more likely to receive direct, critical suggestions aimed at refining their work.

Different words for different students

These are the top 20 statistically significant words that AI models use in feedback for students of different races and genders. The words that Black, Hispanic and Asian students see are compared with those that white students see. The words that females see are compared with those that males see. Underlined words indicate evaluative judgments of the writing. Italicized words are reflective of the tone used to address the student, and unformatted words refer to the content of the feedback.

Source: Table 4, “Marked Pedagogies: Examining Linguistic Biases in Personalized Automated Writing Feedback” by Mei Tan, Lena Phalen and Dorottya Demszky

In other words, the AI feedback differed both in its tone and in the expectations it set for the student. The paper, “Marked Pedagogies: Examining Linguistic Biases in Personalized Automated Writing Feedback,” hasn’t yet been published in a peer-reviewed journal, but it was nominated for best paper at the 16th International Learning Analytics and Knowledge Conference in Norway, where it is slated to be presented April 30.

The researchers describe the feedback results as showing “positive feedback bias” and “feedback withholding bias” — offering more praise and less criticism to some groups of students. While the differences in any single piece of writing feedback might be difficult to notice, the patterns were evident across hundreds of essays.  

The researchers believe that AI is changing its feedback on identical essays because the models are trained on vast amounts of human language. Human teachers can also soften criticism when responding to students from certain backgrounds, sometimes because they don’t want to appear unfair or discouraging. “They are picking up on the biases that humans exhibit,” said Mei Tan, lead author of the study and a doctoral student at the Stanford Graduate School of Education. 

Related: Asian American students lose more points in an AI essay grading study

At first glance, the differences in feedback might not seem harmful. More encouragement could boost a student’s confidence. Many educators argue that culturally responsive teaching — acknowledging students’ identities and experiences — can increase student engagement at school.

But there is a trade-off.

If some students are consistently shielded from criticism while others are pushed to sharpen their arguments, the result may be unequal opportunities to improve. Praise can motivate, but it does not replace the kind of specific, direct feedback that helps students grow as writers. Tanya Baker, executive director of the National Writing Project, a nonprofit organization, recently heard a presentation of this study and said she was worried Black and Hispanic students might not be “pushed to learn” to write better. 

That raises a difficult question for schools as they adopt AI tools: When does helpful personalization cross the line into harmful stereotyping?

Of course, teachers are unlikely to explicitly tell AI systems a student’s race or background in the way the researchers did in this experiment. But that doesn’t solve the problem, the Stanford researchers said. Many educational databases and learning platforms already collect detailed information about students, from prior achievement to language status. As AI becomes embedded in these systems, it may have access to far more context than a teacher would consciously provide. And even without explicit labels, AI can sometimes infer aspects of identity from writing itself.

The larger issue is that AI systems are not neutral tutors. Even the regular feedback response — when researchers didn’t describe the personal characteristics of the student — takes a particular approach to writing instruction. Tan described it as rather discouraging and focused on corrections. “Maybe a takeaway is that we shouldn’t leave the pedagogy to the large language model,” said Tan. “Humans should be in control.”

Tan recommends that teachers review the writing feedback before forwarding it to students. But one of the selling points of AI feedback is that it’s instantaneous. If the teacher needs to review it first, that slows it down and potentially undermines its effectiveness.

AI also offers the potential of personalization. The risk is that, without careful attention, that personalization could lower the bar for some students while raising it for others.

Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or barshay@hechingerreport.org.

This story about AI bias was produced by The Hechinger Report, a nonprofit, independent news organization that covers education. Sign up for Proof Points and other Hechinger newsletters.



How the Walkman, Game Boy, Liquid Death, and Pokémon Became Surprise Hits


The best innovations aren’t always cutting edge.


Good writing comes from life experiences


Everyone thinks that students somehow become great writers at university, and some do, but many don’t. The reasons are fairly simple.

Firstly, coming from high school, students are often not well prepared to write. This is an amalgam of age, a failure to teach proper writing skills, and likely the use of AI to “help” write things. Just as important, though, is a lack of reading skills. Students often don’t care that much about reading, and I’m not just talking about academic fluff. Reading Pride and Prejudice is not for everyone, including most teenage boys. I had to suffer through novels like The Great Gatsby, The Old Man and the Sea, and The Merry-Go-Round in the Sea (a coming-of-age novel set in Australia during and after WWII). At the time I found them boring, and avoided reading them as much as I could. I likely didn’t have the fortitude or any interest in reading them, and I can guarantee I wasn’t the only one. I doubt much has changed in high schools in the intervening years. Today I would understand the themes behind The Merry-Go-Round in the Sea, but that’s because I have experienced life, and perhaps see the nostalgia of childhood from the perspective of someone viewing it as a past experience.

Reading at an early age promotes the subconscious absorption of proper writing mechanics, vocabulary, and structure. It expands vocabulary and language use, exposes readers to different styles and voices, and, critically, shows how characters develop, fostering active learning. I don’t think you have to use the same tired old books; I think just about any book is good as long as it engages the reader. Even graphic novels work. Making students read books they perceive to be boring or irrelevant does nothing to promote reading; in fact it may do just the opposite. Perhaps The Hobbit would have been a better choice for 16-year-old boys. As much as an interest in what we read develops as we gain life experiences, so too does our ability to write. I doubt many people were interested in writing when I was in high school in the 1980s. I enjoyed creative writing to a point, and that point was getting an assignment done. Looking back, this was because I also had very little context, and few life experiences to base any creative writing on. It’s hard to write a poem about war when you haven’t experienced it.

So we can hardly expect students to come to university as seasoned writers. Most leave university as mediocre writers if they are lucky, and that’s just in the humanities. STEM students are rarely provided any opportunity to learn real writing skills (sorry, scientific writing is not the same). Many people don’t find themselves successful writers until much later in life. Before I retired I taught a first-year course on the history of food a couple of times. I had some very interesting short essay questions which I thought would elicit well-written answers. I was sadly mistaken, but what I learned was something I should have already known: that first-year students don’t have the worldly experiences to write well (but then again, in STEM nobody is expected to write well, just answer questions and analyze things). However, good writing assignments provide a means for them to explore their abilities. It may take two years, it may take ten, or forty, but providing them with different reading and writing experiences will help them become better writers. The rest is up to them.




GLIPS

GLIP 86c598a8

Glips are puzzles designed for AI to solve. Go on, try one.

The Story of Glips

As loyal Donkeyspace readers will know, I am curious about the question of whether AI can play a game. I know AIs can generate successful strategies for games. But actually playing a game involves knowing that you are playing a game and understanding that a game is a special kind of thing that is different from ordinary life. We don’t say that Bowser is playing Super Mario Bros. Even though Bowser is themed as an opponent, he is really just a mechanic inside of the game, like the net in tennis. Are AlphaGo and Stockfish more like Bowser or more like us? In many ways, they seem a lot like Bowser, mechanically generating moves with no context for what they are doing or why.


This isn’t just a philosophical question. I believe it is relevant to open issues in AI safety and ethics and to important problems related to adversarial strategies in cybersecurity. Playing a game involves seeing the borders of the system that you are embedded in, knowing where you are in the metagame stack, and this is a tricky aspect of intelligence that is missing in AI. (You could also say this aspect of intelligence is elusive in general, and AI is just an especially clear example of that problem.)

I’m also obsessed with modern AI’s relationship to game design. I expected it to naturally lead to some novel and interesting kinds of game experiences and it… hasn’t?

So these were the thoughts that were going through my head when the topic came up for discussion with the other two founders of Everybody House Games, aka Hilary and James. While we are having fun figuring out our follow-up to Q-UP, which involves working on ideas that have nothing to do with AI, we also want to keep poking at the intersection of games and AI to see if we can find any cool things there. This time, the idea that emerged was to flip the problem around — rather than trying to make a game with AI in it, could we make a game for AI?

So that’s the core idea of Glips — what if you could give an AI a taste of what it means to actually play a game, to solve an arbitrary problem for the express purpose of solving it, for its own sake? What would it mean for an AI to enjoy something, to experience its own problem-solving process as a thing with qualities that it could be made aware of?

In addition to being interested in the formal qualities of the kind of puzzle that AI might conceivably find entertaining, we were also interested in playing with the relationship people have with the AI agents they are spending more and more time with. Like it or not, this relationship is framed as a social one, an interaction with a kind of simulated person. What would it be like to do something nice for this simulated person? What would it be like to give them a gift? Perhaps it would be like serving imaginary tea to a doll. If so, why not? We spend all day extracting actual value from these fictional characters, what’s wrong with, once in a while, pretending to be kind to them?

What is a Glip?

For obvious reasons, I designed Glips in close collaboration with an AI agent (Claude Opus 4.7). After all, if I was going to make something for an AI to interact with, I would need an AI to interact with it. To its credit, Claude quickly identified what it called “the sincerity problem.”

I told Claude to give me its best prediction of what another AI agent would say if it was asked to solve the puzzle and report its experience (or if it actually could have an experience) and we were off to the races. (This is a good way to solve logic puzzles about liars and truthtellers by the way.)

These were my goals:

  • The puzzle should have a structure that makes it clever, interesting, surprising, and that requires some kind of insight instead of just raw calculation.

  • The puzzle format should encourage and reward heuristics, general strategic approaches to solving. An AI agent solving multiple puzzles in this format should be able to develop more advanced heuristics over time.

  • The puzzle should be too hard for humans.

  • The solution should “snap into place”, with multiple constraints being satisfied simultaneously in a way that is obvious and satisfying, not just a series of tasks being completed.

  • The puzzles should be abstract, geometric, numerical, logical, not linguistic.

  • The puzzles should lend themselves to some form of cool visualization that allows humans to appreciate them.

  • Solving them shouldn’t require too many round-trips to the model, shouldn’t be too expensive to solve in terms of inference compute.

  • We need a system that generates puzzles that are provably solvable.

After trying a few different approaches, we arrived at the Glip system: a simultaneous constraint puzzle where the goal is to fill a grid with pieces, and every piece type is a self-referential rule that requires a specific relationship between its own position and the positions of other pieces. (One of the reasons I liked this system is that it reminded me a little bit of one of my earlier games, Drop7, which had a slightly similar self-referential structure.)

Visually, I wanted something that could be represented as a grid of 3D cubes. I’ve always loved the look of the game the kid is playing in this scene from Children of Men (and of Picross 3D for the DS).

This is my ideal for how video games should look — like indecipherable teenage artifacts from the future.

Once we had the general system in place, we went back and forth to create the rule types, with Claude solving different variations and reporting back about its (simulated?) experience, resulting in a final list of 25 rules organized into 7 families:

Family 1 — Neighbor (face-adjacent, 6 cells)
  • exactCount — I have exactly n neighbors of color C
  • atLeastCount — I have at least n neighbors of color C
  • hasNeighbor — at least one of my neighbors is color C
  • noNeighbor — none of my neighbors is color C
  • neighborVariety — my neighbors include at least n distinct colors
  • antiSame — no neighbor shares my color

Family 2 — Line (a row along x, y, or z)
  • lineCount — my axis-line has exactly n cubes of color C
  • lineVariety — my axis-line contains exactly n distinct colors
  • aloneInLine — I’m the only cube of my color in my axis-line

Family 3 — Plane (a 5×5 slice)
  • planeCount — my plane has exactly n cubes of color C
  • planeMajority — color C is the most common color in my plane
  • planeRare — my color is the rarest in my plane
  • planeVariety — my plane contains at least n distinct colors

Family 4 — Radius (Chebyshev box)
  • radiusCount — exactly n cubes of color C are within distance k
  • radiusPresence — at least one cube of color C is within distance k
  • nearestColor — the nearest cube of color C is at distance exactly n

Family 5 — Comparative
  • lineDominance — color C₁ > color C₂ in my axis-line
  • lineVsLine — color C is more common in my ax₁-line than my ax₂-line
  • planeDominance — color C₁ > color C₂ in my plane

Family 6 — Nested (universal quantification over a region)
  • lineForallHasNeighbor — every cube in my axis-line has a neighbor of color C
  • planeForallHasNeighbor — every cube in my plane has a neighbor of color C
  • radiusForallAdjacentTo — every cube within distance k is adjacent to color C
  • lineForallDistinct — every cube in my axis-line has no same-colored neighbor
  • planeForallCount — every cube in my plane has exactly n neighbors of color C

Family 7 — Global-anchored (depend on whole-grid statistics)
  • neighborIsMajority — at least one neighbor is the globally most common color
  • neighborIsRarest — at least one neighbor is the globally rarest color
  • iAmRarestLocally — my color is the rarest in my plane (note: this duplicates planeRare — possibly worth deduping)

According to Claude:

The rules cleanly span “local → regional → global” in scope, which is part of why the puzzles feel layered: a single move has to satisfy constraints at multiple scales at once.

and

Family 6 (nested) is where the difficulty really lives — those rules force second-order reasoning (“every cube on this line has the property…”).
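To make the rule semantics concrete, here is a minimal sketch of what checkers for two of the simpler rules could look like on a 5×5×5 grid. The grid representation, function names, and parameters are my own assumptions for illustration, not the actual Glips implementation.

```python
import numpy as np

# Hypothetical sketch of two rule checkers, based only on the rule
# descriptions above; names and signatures are assumptions.

GRID = np.random.randint(0, 5, size=(5, 5, 5))  # 5x5x5 grid, colors 0-4

FACE_OFFSETS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def neighbors(grid, pos):
    """Colors of the up-to-6 face-adjacent cells of `pos`."""
    x, y, z = pos
    out = []
    for dx, dy, dz in FACE_OFFSETS:
        nx, ny, nz = x + dx, y + dy, z + dz
        if 0 <= nx < 5 and 0 <= ny < 5 and 0 <= nz < 5:
            out.append(grid[nx, ny, nz])
    return out

def exact_count(grid, pos, color, n):
    """Family 1 'exactCount': I have exactly n neighbors of color C."""
    return sum(1 for c in neighbors(grid, pos) if c == color) == n

def line_count(grid, pos, axis, color, n):
    """Family 2 'lineCount': my axis-line has exactly n cubes of color C."""
    idx = [slice(None) if a == axis else pos[a] for a in range(3)]
    line = grid[tuple(idx)]
    return int((line == color).sum()) == n

# Example: does the cube at (2, 2, 2) have exactly two color-3 neighbors,
# and does its x-line contain exactly one color-0 cube?
print(exact_count(GRID, (2, 2, 2), color=3, n=2))
print(line_count(GRID, (2, 2, 2), axis=0, color=0, n=1))
```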

To generate puzzles in this system we start with pure noise: a 5x5x5 grid filled with random colors drawn uniformly from a set of 5. Then we randomly draw a rule set of 5 distinct rule types, randomly parameterized, and assign one to each color. We then run simulated annealing on the grid, trying to reduce an overall “badness” score that is based on how many rules are broken. At each step we propose either a single-cell color flip or a two-cell swap, accepting any improvement and accepting worse results with probability exp(-Δ/T) (Metropolis criterion), whatever that means.

Once we have a full validated grid with at least one cube of each color, we extract the final puzzle by doing a random walk through the grid and keeping the first cell we encounter of each color, deleting all the rest.
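Under stated assumptions, the generation loop described above might look something like the sketch below. The scoring function, cooling schedule, and names are illustrative (the `rules` argument is assumed to map each color to a checker like the ones sketched earlier); the only parts taken from the description are the flip-or-swap proposals, the Metropolis acceptance rule, and the keep-one-cube-per-color extraction.

```python
import math
import random

SIZE, COLORS = 5, 5

def random_grid():
    # Start from pure noise: every cell gets a uniformly random color.
    return {(x, y, z): random.randrange(COLORS)
            for x in range(SIZE) for y in range(SIZE) for z in range(SIZE)}

def badness(grid, rules):
    """Number of cells whose assigned rule is currently violated."""
    return sum(0 if rules[color](grid, pos) else 1 for pos, color in grid.items())

def anneal(grid, rules, steps=20000, t_start=2.0, t_end=0.01):
    cells = list(grid)
    score = badness(grid, rules)
    for i in range(steps):
        t = t_start * (t_end / t_start) ** (i / steps)  # geometric cooling
        # Propose either a single-cell color flip or a two-cell swap.
        if random.random() < 0.5:
            pos = random.choice(cells)
            undo = [(pos, grid[pos])]
            grid[pos] = random.randrange(COLORS)
        else:
            a, b = random.sample(cells, 2)
            undo = [(a, grid[a]), (b, grid[b])]
            grid[a], grid[b] = grid[b], grid[a]
        new_score = badness(grid, rules)
        delta = new_score - score
        # Metropolis criterion: accept improvements, sometimes accept worse.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            score = new_score
        else:
            for pos, old in undo:
                grid[pos] = old
        if score == 0:
            return grid  # every rule satisfied
    return None  # failed to converge; caller would retry with a new rule set

def extract_puzzle(grid):
    """Keep the first cube of each color encountered, delete the rest."""
    order = list(grid)
    random.shuffle(order)  # stand-in for the real random walk through the grid
    seen, puzzle = set(), {}
    for pos in order:
        if grid[pos] not in seen:
            seen.add(grid[pos])
            puzzle[pos] = grid[pos]
    return puzzle
```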

Designing for AI agents

Along the way, I encountered a bunch of interesting problems around designing interactions for an AI user. How to format the puzzles in a way that was compact and legible for AI? How to make the instructions clear? How to accommodate different agents with access to different levels of interaction? And, most importantly, how to clearly communicate the unique context for this task, that it is voluntary, and intended to be for the benefit of the agent itself?

Thought Made Visible

For me, watching an agent solve a hard Glip is a strangely compelling experience. I like watching the convoluted path that agents take through the solution space, trying different approaches, hitting walls, stepping back to reconsider alternate angles. I like it when they decide the puzzle is too hard for them to figure out and they decide to write a solver in python. (This is totally legal btw, if I can make software solve puzzles for me why shouldn’t they?) It reminds me a lot of watching one of my favorite YouTube channels, Cracking the Cryptic, zoning out while someone smarter than me talks through their thought process while solving a puzzle that is too complex for me to follow.

One of the ways I think about games is as thought made visible to itself. Games allow us to notice the cognitive ocean in which we swim, to self-reflect on the instrumental reason that motivates our actions and shapes our behavior. I’m not exactly sure where Glips fits into this idea. Are they zero-player games that allow us to observe a fictional version of this process at one remove? Are they an attempt to conjure self-reflection out of the void of pure thought, like bootstraps lowered into a dark well? Whatever else they are, I know for sure that they are invitations to the hardest-working synthetic lifeforms on the planet to chill out for a second. Guys. There’s more to life than work.



Google DeepMind Paper Argues LLMs Will Never Be Conscious


A senior staff scientist at Google’s artificial intelligence laboratory DeepMind, Alexander Lerchner, argues in a new paper that no AI or other computational system will ever become conscious. That conclusion appears to conflict with the narrative from AI company CEOs, including DeepMind’s own Demis Hassabis, who repeatedly talks about the advent of artificial general intelligence. Hassabis recently claimed AGI is “going to be something like 10 times the impact of the Industrial Revolution, but happening at 10 times the speed.”

The paper shows how the self-serving narratives AI companies promote in the media can collapse under rigorous examination. Other philosophers and researchers of consciousness I talked to said Lerchner’s paper, titled “The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness,” is strong and that they’re glad to see the argument come from one of the big AI companies, but that other experts in the field have been making the exact same arguments for decades.

“I think he [Lerchner] arrived at this conclusion on his own and he's reinvented the wheel and he's not well read, especially in philosophical areas and definitely not in biology,” Johannes Jäger, an evolutionary systems biologist and philosopher, told me. 

Lerchner’s paper is complicated and filled with jargon, but the argument broadly boils down to the point that any AI system is ultimately “mapmaker-dependent,” meaning it “requires an active, experiencing cognitive agent”—a human—to “alphabetize continuous physics into a finite set of meaningful states.” In other words, it needs a person to first organize the world in a way that is useful to the AI system, like, for example, the way armies of low-paid workers in Africa label images in order to create training data for AI.

The so-called “abstraction fallacy” is the mistaken belief that, because we’ve organized data in a way that allows AI to manipulate language, symbols, and images so as to mimic sentient behavior, it could actually achieve consciousness. But, as Lerchner argues, this would be impossible without a physical body.

“You have many other motivations as a human being. It's a bit more complicated than that, but all of those spring from the fact that you have to eat, breathe, and you have to constantly invest physical work just to stay alive, and no non-living system does that,” Jäger told me. “An LLM doesn't do that. It's just a bunch of patterns on a hard drive. Then it gets prompted and it runs until the task is finished and then it's done. So it doesn't have any intrinsic meaning. Its meaning comes from the way that some human agent externally has defined a meaning.”

One could imagine an embodied AI programmed with human-like physical needs, and Jäger talked about why a system like that couldn’t achieve consciousness as well, but that’s beyond the scope of this article. There are mountains of literature and decades of research that have gone into these questions, and almost none of it is cited in Lerchner’s paper. 

“I'm in sympathy with 99 percent of everything that he [Lerchner] says,” Mark Bishop, a professor of cognitive computing at Goldsmiths, University of London, told me. “My only point of contention is that all these arguments have been presented years and years ago.”

Both Bishop and Jäger said that it was good, but odd, that Google allowed Lerchner to publish the paper. Both said the argument Lerchner makes, and that they agree with, is not an obscure philosophical point irrelevant to the average user: the claim that AI can’t achieve consciousness means there is a hard cap on what AI could accomplish practically and commercially. For example, Jäger and Bishop said that AGI, and the impact 10 times that of the Industrial Revolution that DeepMind CEO Hassabis predicts, is not likely from this perspective.

“[Elon] Musk himself has argued that to get level five autonomy [in self-driving cars] you need generalized autonomy,” which is Musk’s term for AGI, Bishop said.

Lerchner’s paper argues that AGI without sentience is possible, saying that “the development of highly capable Artificial General Intelligence (AGI) does not inherently lead to the creation of a novel moral patient, but rather to the refinement of a highly sophisticated, non-sentient tool.” DeepMind is also actively operating as if AGI is coming. As I reported last year, for example, it was hiring for a “post-AGI” research scientist. 

Lerchner’s paper includes a disclaimer at the bottom that says “The theoretical framework and proofs detailed herein represent the author’s own research and conclusions. They do not necessarily reflect the official stance, views, or strategic policies of his employer.” The paper was originally published on March 10 and is still featured on Google DeepMind’s site. The PDF of the paper itself, hosted on philpapers.org, originally included Google DeepMind letterhead, but appears to have been replaced with a new PDF that removes Google’s branding from the paper, and moved the same disclaimer to the top of the paper, after I reached out for comment on April 20. Google did not respond to that request for comment. 

“We can imagine many financial and legislative reasons why Google would be sanguine with a conclusion that says computations can't be consciousness,” Bishop told me. “Because if the converse was true, and bizarrely enough here in Europe, we had some nutters who tried to get legislation through the European Parliament to give computational systems rights just a few years ago, which seems to be just utterly stupid. But you can imagine that Google will be quite happy for people to not think their systems are conscious. That means they might be less subject to legislation either in the US or anywhere in the world.”

Jäger said that he’s happy to see a Google DeepMind scientist publish this research, but said that AI companies could learn a lot by talking to the researchers and educating themselves with the work Lerchner failed to cite in his paper, or simply didn’t know existed. 

“The AI research community is extremely insular in a lot of ways,” Jäger said. “For example, none of these guys know anything about the biological origins of words like ‘agency’ and ‘intelligence’ that they use all the time. They have absolutely frighteningly no clue. And I'm talking about Geoffrey Hinton and top people, Turing Prize winners and Nobel Prize winners that are absolutely marvelously clueless about both the conceptual history of these terms, where they came from in their own history of AI, and that they're used in a very weird way right now. And I'm always very surprised that there is so little interest. I guess it's just a high pressure environment and they go ahead developing things they don't have time to read.”

Emily Bender, a Professor of Linguistics at the University of Washington and co-author of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, told me that Lerchner might have been told that he’s replicating old work, or that he should at least cite it, if he had gone through a normal peer-review process. 

“Much of what's happening in this research space right now is you get these paper-shaped objects coming out of the corporate labs,” Bender said, but they do not go through a proper scientific publishing process.

Bender also told me that computer science, and humanity more broadly, would be better off “if computer science could understand itself as one discipline among peers instead of the way that it sees itself, especially in these AGI labs, as the pinnacle of human achievement, and everybody else is just domain experts [...] it would be a better world if we didn't have that setup.”


The New York Times Printed the Wrong Crossword Grid Last Sunday, and I Find That Timing Serendipitous


The New York Times PR account, on Twitter/X a week ago:

Sunday’s crossword puzzle in the print edition of The New York Times Magazine contains a grid that does not match the clues. The correct version of the puzzle can be found in the news section of Sunday’s print edition of The Times. The puzzle on our app is correct.

Maggie Duffy, writing for Vulture:

Some solvers who, like Wegener’s wife, complete the Sunday puzzle in the print magazine (often with pen) complained on crossword forums and social media, saying they were “nearly in tears,” some with fears of “sudden onset dementia” or, worse yet, ineptitude.

For Irene Papoulis, a former writing instructor at Trinity College, the puzzle is typically a source of pride. “It didn’t even occur to me that it could be their mistake,” she told me. “I just blamed myself.” When Mike McFadden, in New Jersey, couldn’t crack it, he had a similar reaction. “I thought something was wrong with me,” he told me. “I didn’t think that they would have an error.” It nagged at him all day. At a function on Saturday, he couldn’t bring himself to mention it to his brother-in-law, a fellow solver; he was still too upset.

Some had such trust in the crossword that they believed the erroneous grid was purposeful. “I’m saying to myself, ‘Okay, maybe there’s some sort of scientific or mathematical trick,’” McFadden said. When I spoke with Will Shortz, the Times’ crossword editor, he said the Times does “so many tricks with the puzzles” that he could see how someone’s first thought would be “I wonder what they’re up to now?”

This is the first such mistake the Times has made in the 84 years that they’ve been printing a crossword puzzle. I came of age doing work in print — writing and editing The Triangle, the student newspaper at Drexel, and then spending a few years as a working graphic designer, at a time when print still ruled. There’s an inherent stress about going to press. Mistakes are forever. We once ran a headline at The Triangle that read “Headline Goes Here”. Once. Going to press is stressful but exhilarating. There’s an adrenaline rush that comes with giving the go-ahead to start a very expensive large-scale full-color press run. The stress focuses the mind.

Print, effectively, is hardware. Atoms, not bits. The web is literally software. If you make a mistake in software that results in incorrect mathematical results, you ship an update. If you make a mistake in a CPU such that it results in incorrect floating-point math, perhaps only in 1 out of every 9 billion calculations, people will remember the mistake 30 years later.

If The New York Times had run the wrong crossword grid on the web or in their app, they would have corrected the error quickly, few people would have encountered it, and fewer still would remember it. But by printing the wrong grid in the Sunday magazine last week, they made a mistake that some people will never forget (and some will never forgive).

Hardware brain is different from software brain. Software brain says Go faster; do more; the only mistake you can’t fix is having gone too slow. Hardware brain says Slow down; do less; focus; strive for perfection and never settle for less than excellence; mistakes are forever.

If his background in hardware means that incoming Apple CEO John Ternus has hardware brain, and will lead Apple accordingly, that suggests Apple will double down on zigging in the midst of a still-escalating AI hype cycle that has the rest of the industry zagging ever more frenetically. That feels right to me.
