
Artificial intelligence is trained on data. It will process billions of words of human text, countless images, and the inane, ridiculous questions of its human users. It will learn to write in the active voice most of the time, and to keep sentences under 200 characters. It will learn that dogs have four legs and the Sun is normally yellow. And it might learn that Lorraine Woodward of Ontario wants to know how to prevent the buildup of ear wax.
Most of what we feed into AI has been made by a human — human art, human text, human prompts. And so, it’s clear that AI will inherit the biases and prejudices of human intelligence. For example, a lot has been written about how “racist” and “sexist” AI is.
“Draw a picture of a doctor,” we might prompt.
AI whirrs through its stock catalogue, where 80% of its doctor images are white, male, and gray-haired, and generates the “doctor” it judges the user most likely wants. Is this a kind of racism and sexism? It certainly propagates both, but it’s not really the AI’s fault. It’s ours.
In this week’s Mini Philosophy interview, I spoke with anthropologist Christine Webb about human exceptionalism — “the belief that humans are the most superior or important entity in the Universe” — and how it leaks into our science, ethics, and, increasingly, our AI. Her worry isn’t just about the limits of artificial “intelligence,” but also how damaging it might turn out to be.
The value-free science full of values
A lot of Webb’s work, both in and outside of her book, Arrogant Ape, is directed at calling out “anthropocentrism” in science and technology. Webb has argued that human exceptionalism embedded in mainstream scientific practice has shaped what we study, how we study it, and what we conclude — even when science presents itself as value-free. As she put it in a paper with Kristin Andrews and Jonathan Birch, “values drive research questions, methodological choices, statistical interpretation, and the framing of results,” which means those values can “influence empirical knowledge as much as the data.”
Here are three examples:
Research questions: Animal welfare science often asks how to optimize productivity or “reduce stress” within farming systems, rather than what environments animals themselves would choose. A common study question might be, “What cage enrichment reduces feather-pecking in hens?” rather than, “Do hens want to live in cages at all?” The bias is that the first question takes the legitimacy of caging for granted, tacitly treating the human agricultural system as the baseline.
Methodological choices: In our interview, Webb pointed out that when comparing cognition between humans and other primates, researchers typically use human-designed tasks (touchscreens, puzzles, symbols). These setups often require fine motor control or familiarity with human artifacts. As a result, apes “underperform,” leading to the conclusion that humans are smarter. But the methods themselves privilege human-like skills. The test is designed for humans to win.
Statistical interpretation: Statistical “significance” thresholds (like p < 0.05) are used to declare whether an effect exists, but these conventions were originally developed for tightly controlled laboratory and industrial experiments. In animal welfare studies, subtle behavioral changes, such as shifts in grooming, gaze, or social spacing, may be dismissed as “not significant” even though they reflect genuine distress or preference.
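To make that last point concrete, here is a small Python sketch with made-up numbers (an assumed three-minute shift in daily grooming time and eight animals per group, both hypothetical): a perfectly real behavioral difference clears the p < 0.05 bar only a minority of the time at that sample size, so most individual studies of this kind would record it as “not significant.”

```python
# Minimal sketch with hypothetical numbers (not from any real welfare study):
# a genuine shift in mean grooming time routinely fails the p < 0.05 cutoff
# when group sizes are as small as they often are in animal research.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

n_per_group = 8        # animals per housing condition (small, as is common)
true_shift = 3.0       # real difference in mean grooming minutes per day
noise_sd = 6.0         # individual variability
n_simulations = 5000

significant = 0
for _ in range(n_simulations):
    control = rng.normal(30.0, noise_sd, n_per_group)
    treated = rng.normal(30.0 + true_shift, noise_sd, n_per_group)
    _, p_value = ttest_ind(control, treated)
    if p_value < 0.05:
        significant += 1

# With these numbers the real effect is detected only a minority of the time,
# so most individual studies would report it as "not significant."
print(f"Detected in {significant / n_simulations:.0%} of simulated studies")
```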
AI that looks just like us
When we talk about artificial intelligence, we are really talking about artificial human intelligence. Large language models and machine learning technologies are built on human data, operate from human prompts, and give human-like responses.
AI research asks how to make systems useful for humans (more accurate, more personalized, more profitable), not how they might affect nonhuman or ecological systems in terms of energy use, resource extraction, or long-term environmental stability. The framing already presupposes anthropocentrism.
The language we use often betrays both anthropocentrism and anthropomorphism — where we imagine nonhuman things as behaving and thinking just like humans. For example, researchers often describe models as “hallucinating,” “reasoning,” or “aligning” — all metaphors that project human cognition onto statistical systems. The framing centers our self-image rather than the system’s actual operations.
But the most obvious example of anthropocentrism in AI is that the entire field is focused on building intelligence modeled on our own: neural networks, symbolic reasoning, and goal-directed behavior.
Lessons from a moss
Of course, this makes sense if we want a product that humans can interact with. It makes sense if we want to develop human technology, human medicine, and human progress. But Webb points out that AI research focuses disproportionately on the good of human intelligence while ignoring the problems of “environmental destruction, decay, and arrogance with how we deal with the environment.”
In our conversation, Webb gave an interesting alternative that I’d love to see as a science fiction short story one day. And that’s to imagine an AI with the intelligence of a moss. As Webb put it:
“Robin Wall Kimmerer writes a lot in her work about how mosses have been around for 500 million years or something like that, compared to humans’ meager, like, 200,000 years. And so if we were really interested in intelligence, in living well, and in evolutionary success, maybe we should turn to other forms of life like mosses, who’ve managed to do it for hundreds of millions of years and ask them for solutions to some of the ecological problems that we’re facing today.
And how would we use that intelligence? Well, mosses are amazing, because they survive not by outcompeting others, but by creating highly diverse, thriving environments for other species to survive in. Like, that’s how they survive: by creating these tight, multi-species communities. So that would be a great thing to learn from about.”
We needn’t abandon the entire AI project to see the reasoning in Webb’s argument. If AI is human intelligence scaled up, then it will amplify both the good and the bad, and the trade-offs will play out on a different scale. When we talk about superhuman-like intelligence, does that mean super destruction, super bugs, and super catastrophe? Because human intelligence does all of that. If we want to build an intelligence that can change the world, should we include a few more members of that world first?