
How the U.S. Public and AI Experts View Artificial Intelligence


These groups are far apart in their enthusiasm and predictions for AI, but both want more personal control and worry about too little regulation.

The post How the U.S. Public and AI Experts View Artificial Intelligence appeared first on Pew Research Center.


The Crisis of Zombie Social Science


Thank you for reading The Garden of Forking Paths. This edition is free for everyone, but I rely exclusively on reader support, so please consider upgrading to a paid subscription for just $4/month to support my work. Alternatively, consider checking out my book FLUKE, which discusses some of these ideas at greater length.



I: Guinea Pigs with No Control Group

Imagine that two rival scientists announce that they’ve each invented a new miracle medicine. Pop just one pill, they claim, and the likelihood that you’ll get sick in the next unforeseen pandemic—whatever it may be—is drastically reduced. But there’s a problem: nobody knows when the next pandemic will strike, so how can you test whether these miracle pills work or not?

After much bickering, a compromise is reached: each scientist will make their case to the population, giving a rousing speech while trying to convince them that their pill is best. At the end of the debate, the population will vote on which pill to take. Everyone will be required to take a dose of the winning pill—and then wait four years to see what happens. At the end of that four-year period, the scientists will return, again try to persuade the population to take their pill, at which point everyone will either take a second dose or try a different medicine altogether, from a more convincing rival scientist.

Some people will get sick, some won’t, but because everyone is taking the same pill and there’s no control group or rigorous testing—and because nobody knows precisely which illness the pill is supposed to prevent in the first place—it’s unclear whether it “worked” or not. Because of this uncertainty, people start to sort themselves into medicinal tribes, voraciously consuming news that affirms their belief in their favored pill. Amidst all this pharmaceutical chaos, there does seem to be one clear pattern in this strange society: if it “feels” like a lot of people happen to be sick at the end of the four-year period, then the population will try a different pill.

Surely, nobody would agree to this absurd strategy. Year after year, citizens would be guinea pigs, but without ever getting closer to discovering what medicine works and what doesn’t. After all, few would willingly take a vaccine based solely on a scientist’s ideological conviction that it should work, or decide which medicines to take based on how silver-tongued a doctor might be.

And yet, this is exactly how we run society. Instead of rival scientists, we turn over humanity’s most important decisions to ideologue politicians who simply debate what to do with the economy, health care, war, poverty, immigration, and climate change based on what they think might work. Then, after four years, in an infinitely complex world in which a million variables change, nothing is held constant, and there are no control groups, we subjectively decide whether it worked. If enough people think it didn’t, we collectively decide that we’d like to follow a different ideologue’s plan instead.

This is, quite clearly, insane.

The remedy to the insanity of ideology-based guesswork, as we’ve figured out with scientific research such as medicine trials, is rigorous testing that definitively proves what works and what doesn’t. With social research, for reasons we shall soon explore, that approach is often impossible. Studying human society is never so simple. After all, eight billion interacting human brains are, without question, the most complex entity in the known universe.

No matter how much we try, understanding ourselves proves elusive. The chaos of human society is shrouded in unpartable clouds of mystery.

So, what we do instead is this: we put a lot of clever people into universities, where disproportionately elbow-patched boffins come up with sophisticated theories and arcane models that provide better guesses about what will and will not work based on past patterns. They toil endlessly, trying to get slightly better at understanding a social world that is impossible to understand. When one of them comes up with a theory that seems pretty good at describing what happened in the past, they take it to an annual gathering of other boffins who scratch their chins, ask rude questions, and then retire to their hotel rooms.1

Subsequently, the theory might get published in a journal so expensive that normal people can’t access it. The research will be read by a small number of other clever people in universities, at which point the ideologues governing society will diligently fail to read about the evidence or pay any attention to it unless it confirms what they already believed about the world. If the theory and the evidence happen to conform to their prevailing ideology, they will then enthusiastically embrace the findings after reading a staffer’s summary. The politician will then show their appreciation for the research by misrepresenting it to the public.

Shockingly, this seemingly foolproof strategy appears to not be working.

I am a disillusioned social scientist, critical of my own discipline, but eminently aware that fully understanding human behavior is a nearly impossible task. As I wrote in Fluke, we’re making a mistake when we use the phrase “it’s not rocket science.” We should be saying “it’s not social science”:

“In 2004, humans launched a spacecraft that traveled for ten years before softly touching down on a comet two and a half miles wide that was traveling at eighty-four thousand miles per hour. Every calculation had to be perfect—and it was. Conversely, trying to figure out, with certainty, whether Thailand’s economy will grow or contract in the next six months or whether inflation in Britain will be above 5 percent three years from now, well, that’s just not something we can do.”

Sometimes, despite an astonishingly abysmal track record, we keep using the same tired, failed tools to try to decide what to do. Aside from the ridiculousness of turning to the same pundits, no matter whether they’ve been prescient or grotesquely wrong, we continue to double down on failed methods of social research and forecasting. In 2016, The Economist conducted an analysis of IMF economic forecasts covering 189 countries over a period of fifteen years. During that time, a country entered a recession 220 times. How many times did the IMF April forecasts correctly predict the recession? Zero.

Part of the source of this trouble is a straightforward issue that’s extremely difficult to solve. I call it the Problem of Zombie Research—in which bad theories never die. That problem, alas, is endemic to the social sciences.

The economists Wynne Godley and David Evans summed up the issue:

Evans: What actually does resolve disputes in economics?

Godley: Nothing!

Evans: They just go on…well they certainly seem to.

Godley: Successful rhetoric is what resolves issues.

Like the people in the absurdist pill-popping society imagined above, we are largely unable to falsify proposed explanations about how our world works. We therefore find ourselves unable to definitively prove which theories are golden and which are garbage.2 And when that happens, social science can sometimes end up becoming a caricature of itself, in which clever people play with increasingly sophisticated models but don’t provide a roadmap to solving real-world problems.

We need good social science. It is the only tool we have to make the world better based on evidence rather than ideology. But to escape the absurdism of our current situation, we must do two things better:

  1. Be ruthlessly clear about what social science is for (solving problems to mitigate avoidable harm, not just identifying an elegant single causal explanation for past variation in flawed data).3

  2. Get better at slaying Zombie Theories by making (often wrong) predictions.

II: The Easy Problem of Social Research

A little over a decade ago, a renowned researcher named Daryl Bem produced compelling evidence that he had discovered proof of extrasensory perception, or ESP. His findings, which passed peer review and standard methodology checks, were published in a top psychology journal.

But when some other researchers thought that Bem’s findings didn’t pass the smell test, they took matters into their own hands: they tried to replicate his findings by repeating his experiments.

They couldn’t. The findings were bogus—statistical correlations that “discovered” a phantom effect with no basis in empirical reality. But when the scientists tried to publish their replication studies exposing the bad research on precognition, nobody wanted to publish an article retreading old ground. Worse still, one of their journal rejections came after a peer reviewer trashed the replication studies. That reviewer’s name? Daryl Bem.

Eventually, Bem’s findings were thoroughly debunked. This saga, along with other high-profile examples of bogus research such as the viral “power pose” studies, launched the replication crisis: it soon became clear that many significant findings in social research (and some in medicine) could not be reproduced in subsequent replications that used the same methodologies.

There are many reasons for these well-known crises of confidence in social research, including p-hacking, the file drawer problem, the McNamara Fallacy, measurement error, category mistakes, and manipulated or invented data, to name but a few. There are also serious concerns with the peer review process, which often fails to catch even the most egregious errors. (One study deliberately planted serious flaws in research papers and then sent them out for peer review. Reviewers, on average, detected only 2.6 of the 9 planted errors.)
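To make the p-hacking failure mode concrete, here is a toy simulation of my own (not drawn from any study mentioned above): a researcher studying pure noise tests many different outcome variables and reports a “discovery” if any one of them clears the significance bar.

```python
import random
import statistics

random.seed(42)

def looks_significant(sample_a, sample_b, threshold=2.0):
    """Crude two-sample test: flag 'significant' when the difference in
    means exceeds `threshold` standard errors (roughly p < 0.05)."""
    n = len(sample_a)
    diff = statistics.mean(sample_a) - statistics.mean(sample_b)
    se = ((statistics.variance(sample_a) + statistics.variance(sample_b)) / n) ** 0.5
    return abs(diff) > threshold * se

def run_study(n_outcomes):
    """One null study: there is NO real effect, but the researcher checks
    `n_outcomes` different outcome variables and reports a 'finding' if
    ANY of them comes up significant."""
    for _ in range(n_outcomes):
        group_a = [random.gauss(0, 1) for _ in range(30)]
        group_b = [random.gauss(0, 1) for _ in range(30)]
        if looks_significant(group_a, group_b):
            return True
    return False

trials = 2000
honest = sum(run_study(1) for _ in range(trials)) / trials
hacked = sum(run_study(10) for _ in range(trials)) / trials
print(f"false-positive rate, 1 outcome tested:   {honest:.1%}")
print(f"false-positive rate, 10 outcomes tested: {hacked:.1%}")
```

Testing one outcome yields roughly the nominal five percent false-positive rate; testing ten and reporting whichever “works” pushes it far higher, even though no real effect exists anywhere in the data.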


All of these problems deserve serious attention, but they are extremely fixable. They are basically implementation problems, which do not pose any philosophical challenges to what we can and cannot know about our world. That’s why I bunch them together and refer to them collectively as forming The Easy Problem of Social Research.4 With better methods, shrewd adjustments to detect manipulated research, and peer review reform, these problems can be solved—and some bogus theories can be killed off.

But there’s a deeper crisis being ignored—one that’s more fundamental to the nature of what we can and cannot know about human society.

III: The Hard Problem of Social Research

In 2022, a brilliant study was published that should have rocked the foundations of social science to their core—triggering a much more profound research crisis.

A team of researchers led by Nate Breznau decided to tackle a thorny question that plagues modern politics: do higher levels of immigration reduce public support for social safety net programs? Let’s consider three hypotheses:

  1. Because of xenophobia, more immigrants mean that native-born citizens will decrease support for tax dollars spent on social safety net programs.

  2. Because of social generosity, higher immigration will increase support for social safety net programs to help integrate the less fortunate.

  3. Immigration won’t really affect support for social spending much either way.

This question is both interesting and important; a definitive answer would help inform public policy debates across the democratic world. So, which is correct?

To find out, Breznau and his colleagues did something clever: they recruited a large number of volunteer research teams to answer the question as best they could using the exact same data. In total, 161 social scientists working in 73 independent research teams tried to find the “right” answer to this seemingly straightforward question.

What happened next should bewilder every social scientist—and the public—and it was far bigger than any question about immigration.

The 73 teams produced a completely mixed result. A little over half of the teams found no effect—that immigration levels didn’t seem to move the needle much in either direction. About a quarter of the research teams found a significant negative effect. And just under a fifth of the research teams found a significant positive effect. The results, absurdly, followed a somewhat normal distribution.

Breznau’s team controlled for virtually everything: they gave the research teams the same data, the same question, and made all the research teams catalogue every methodological decision. Despite those commonalities, tiny, seemingly insignificant choices led to wildly different findings.

Poring over the various findings, Breznau’s team could explain only five percent of the variation between the research teams; the other 95 percent was inexplicable. As they put it, fittingly flummoxed by the results: “Even the most seemingly minute [methodological] decisions could drive results in different directions; and only awareness of these minutiae could lead to productive theoretical discussions or empirical tests of their legitimacy.”

They billed the problem appropriately: we live in a universe of uncertainty. Such unresolvable uncertainty is an epistemological challenge to what it is possible to definitively know about our social world, what I call The Hard Problem of Social Research.

Here’s why it’s a big problem: for 99.9 percent of social science studies, there are not 73 research teams working on the same question with the same data. Instead, one researcher takes their best shot at a problem and comes up with their best answer. In those situations, the researcher gets to decide what question to ask, which data to use, how to measure phenomena of interest, how to categorize variables, what data analysis strategy fits best, which results to report, and how to frame a finding.

If Breznau’s team had asked only one research team to answer the immigration question, there would be a nearly equal chance that they would find either that higher immigration increased support for social spending or decreased it. Once published, either a positive or a negative result might be considered a settled question. The irreducible uncertainty would be hidden from view.

This raises the unsettling follow-up question: how many seemingly “solid” past social science research findings would be more like the universe of uncertainty debacle if they were farmed out to 73 separate research teams? Nobody knows.

This is a much bigger problem than the replication crisis. It challenges the most basic assumptions of social research.5

The astonishing variance in the “Universe of Uncertainty” paper also points to the core dilemma raised above: how do you definitively reject social theories in a world that is infinitely complex and therefore so difficult to accurately study? In physics or chemistry, if a predicted result is off by a millionth of a percentage point, the entire theory can be rejected outright. Perfect precision is required.

But in social research, if a model is able to “explain,” say, 60 percent of the variation in the data, then it’s considered an incredibly strong result. That’s because the social complexity of billions of interacting, self-aware, conscious human agents embedded in ever-changing social systems is far harder to model than even the most unruly molecules. Precision is impossible.

Worse still, unlike with molecules, social research is fragile and context-dependent. If a caveman anywhere in the world mixed baking soda and vinegar together, it would fizz at precisely the same rate as today. But if the exact same virus that sparked the covid-19 pandemic had infected someone in 1990 instead of 2020, everything would have been different. In 1990, China was less connected to the world. George H.W. Bush, not Donald Trump, would have handled the US response. And most crucially, without widespread digital technology, working from home would have been impossible, so the economic effect of an identical virus would have been radically different. In social research, everything matters.

Moreover, with probability estimates, it’s extremely difficult to verify theories with one-shot events. Claiming that X makes Y more likely isn’t possible to prove or disprove if an event only happens one time, but many of the most important events are one-offs. Nate Silver’s claim that Hillary Clinton had a 71.4 percent chance of victory in 2016 could never be proven “wrong,” because when she lost, Silver could just say that the less likely outcome occurred.
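The point about one-shot probability claims can be made precise with a proper scoring rule such as the Brier score (again, my own illustrative sketch, not Silver’s methodology): over a single event, almost any forecast is defensible, but over many events, calibrated forecasts measurably beat overconfident ones.

```python
import random

random.seed(1)

def brier(forecasts_and_outcomes):
    """Mean squared error between forecast probability and 0/1 outcome.
    Lower is better; only meaningful over MANY events."""
    return sum((p - o) ** 2 for p, o in forecasts_and_outcomes) / len(forecasts_and_outcomes)

# Simulate 1,000 events whose true probability of occurring varies.
true_probs = [random.random() for _ in range(1000)]
outcomes = [1 if random.random() < p else 0 for p in true_probs]

calibrated = list(zip(true_probs, outcomes))           # forecasts match truth
overconfident = [(round(p), o) for p, o in calibrated]  # pushes everything to 0 or 1

print(f"calibrated forecaster Brier score:    {brier(calibrated):.3f}")
print(f"overconfident forecaster Brier score: {brier(overconfident):.3f}")
```

Note that for any single event the scores tell you almost nothing; Silver’s 71.4 percent for Clinton can only be judged as one forecast among hundreds. That is exactly why one-off events resist falsification.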

The upshot is this: because the world is so maddeningly complex and because our models are so imprecise as a result, there’s plenty of latitude for researchers, politicians, and the public to pick and choose their preferred theories. Choose your own social science explanation!

This dynamic differs from “hard” science. Only delusional crackpots think that the sun revolves around the Earth, but after decades of research and mountains of evidence, there’s still no unequivocal agreement about the precise economic effect of cutting taxes on the rich. And that’s with a question that’s actually pretty clear-cut when it comes to the available evidence! (Hint: tax cuts for the rich mainly benefit the rich and don’t “trickle down”). But most social research on the biggest questions we face doesn’t produce such clear-cut, consistent answers.

One of the more recent strategies to parry these critiques has been to dress up the flaws, adorning deeply uncertain dynamics with sophisticated-looking model garb, in the hope that nobody notices because the equation looks impressive and precise. These are what I call The Emperor’s New Equations.6 For example, as I highlighted in Fluke, here’s an actual equation from a recent unnamed political science paper that gives the exact mathematical formula for whether a given person will join a rebel movement during a civil war:

And yet, when the boffins get together to scratch their chins and exchange equations like these on PowerPoint slides, everyone is afraid to say what’s obvious. So, I’ll bite the bullet and be the bad guy here:

This is all, quite clearly, silly.

IV: The Case for (Bad) Predictions

So, how good are we currently at predicting social outcomes? Consider the Fragile Families study, which tracked five thousand families, each with a child born to unmarried parents. Data about the children was collected at ages one, three, five, nine, fifteen, and twenty-two, making the study one of the richest collections of rigorous, detailed data ever assembled. Then, as I wrote previously:

“After the data from the children who had turned fifteen was collected, it wasn’t released. Instead, the researchers held a competition, in which they gave competing teams of scientists access to the data from the children at ages one, three, five, and nine. The challenge was to see who could best predict life outcomes for the children now that they were fifteen years old. Because the researchers already had the real-world outcomes, they could see how well the teams had done relative to reality. The teams used machine learning, the most powerful data analysis tool ever invented, and took their best shot.”

All of the teams failed miserably. Even the best-performing research teams did about as well as a model that just followed simple averages. The sophisticated models were basically useless.

This was a wake-up call: these problems are not going to be easily solved by more advanced technologies like AI. However, by making predictions and failing, this study provided a catalyst to refine our theories. If the researchers had only fit their models to past data, they might have looked like they had done a great job of explaining what was going on. But by making a forward-looking prediction, the limits of our understanding became clear—which forces us to improve.
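The evaluation discipline at work here, comparing a fitted model against a dumb baseline on data it has never seen, can be sketched in a few lines. This is a toy illustration of my own, not the actual Fragile Families setup: when the underlying signal is weak, the fitted model is nearly indistinguishable from just predicting the average.

```python
import random
import statistics

random.seed(7)

# Toy "life outcome" data: mostly noise with only a weak signal,
# mimicking how little of the outcomes proved predictable.
n = 500
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.1 * xi + random.gauss(0, 1) for xi in x]   # true signal explains ~1%

train_x, test_x = x[:400], x[400:]
train_y, test_y = y[:400], y[400:]

# Fit a one-variable least-squares model on the training split only.
mx, my = statistics.mean(train_x), statistics.mean(train_y)
b = (sum((xi - mx) * (yi - my) for xi, yi in zip(train_x, train_y))
     / sum((xi - mx) ** 2 for xi in train_x))
a = my - b * mx

def mse(preds, actual):
    """Mean squared prediction error on held-out data."""
    return sum((p - t) ** 2 for p, t in zip(preds, actual)) / len(actual)

baseline_mse = mse([my] * len(test_y), test_y)         # predict the average
model_mse = mse([a + b * xi for xi in test_x], test_y)  # fitted model

print(f"baseline (predict the average) MSE: {baseline_mse:.3f}")
print(f"fitted model MSE:                   {model_mse:.3f}")
```

The crucial design choice is the held-out test split: a model scored only on the data it was fit to will always look better than it is, which is precisely the trap forward-looking prediction avoids.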

Alas, predictions are currently an endangered species in social science. Mark Verhagen of Oxford found predictions in just 12 of 2,414 (0.4%) articles in the top economics journal; in 4 of 743 (0.5%) articles in the top political science journal; and in 0 of 394 articles in the top sociology journal.

With quantum mechanics, physicists don’t fully understand what’s going on, but they can make extremely accurate predictions that have proven extraordinarily useful for solving real-world problems.7 With social science, there’s a risk that many of our models are the worst of both worlds: not being able to fully explain what’s going on and being unable to predict what will and won’t work to solve real-world problems. But with most economics or political science or sociology research, we’re not really interested in the mysteries of the fundamental nature of reality. We should mostly care about what works best to mitigate avoidable harm.

That’s why I favor a paradigm shift in social science toward making more explicit predictions. Most of them will be comically wrong, but by making incorrect predictions, we will iteratively get better at navigating our world—and slowly improve at finally slaying the bad Zombie Theories that stubbornly refuse to die.

V: Strong Links and Weak Links

Many social phenomena can be sorted into two categories: “strong link problems” and “weak link problems.”

As the always insightful Adam Mastroianni points out, food safety is an example of a weak link problem, in which you have to worry about the weakest link. Even if 99.9 percent of a country’s food supply is free of toxic bacteria, the 0.1 percent can imperil everyone. A rowing crew is also a weak link problem: if seven rowers are Olympians but one scrawny rower is out of sync, the boat will slow to a crawl.

Strong link problems are the opposite: everything will be fine as long as the strongest link is really strong. Basketball, unlike rowing, is a strong link problem. LeBron James is good enough that even if there’s a really weak player on the bench, the Lakers are still going to win a lot. And, as Mastroianni convincingly argues, science is a strong link problem. It’s okay if there’s a lot of junk science out there being published in pseudoscience journals, because the strongest discoveries that change the world are what matter most. Pay attention to the best science, ignore the worst.

I see an exception to Mastroianni’s argument. Zombie Theories in social science short-circuit these dynamics. For the reasons mentioned above, it’s rarely universally agreed what the strongest links actually are in economics, political science, psychology, or sociology. Without being able to kill off the bad but influential theories through falsification, what should be a strong-link problem ends up just being a bit of a mess, with bad ideas lingering on, often obscuring better ones.

Don’t get me wrong: there’s a lot of astonishingly good social science research. I’m often in awe of colleagues across disciplines who have devoted their lives to solving problems in the most innovative ways. My critique is not that social science is useless, but that it could be better.

Yet, in order to stop the madness of our current social trajectory, social scientists need to take these profound epistemological challenges more seriously and develop a laser-like focus on finding better ways to guide policymaking more reliably.8 Sophisticated models with impressive equations, clever research designs, and robust statistical significance make careers, but unless they help us navigate a perilous world and mitigate avoidable harm, what’s the point?

We need good social science now more than ever. The world has never been more complex or dangerous, a tangle of unprecedented instant interconnectivity, laced with at least three existential risks: nuclear weapons, artificial intelligence, and environmental collapse. Every risk is amplified by overconfident authoritarian fools in power. Our capability to destroy the world has outpaced our wisdom to understand and govern ourselves—and that’s why we need every shred of evidence-based sagacity we can muster to forge a more resilient, just world.


Thank you for reading The Garden of Forking Paths. This essay was for everyone, but if you found it interesting or worthwhile and want to support my work, please consider upgrading to a paid subscription for just $4/month. Alternatively, to more deeply explore my ideas about chaos theory, complexity, and the role of chance in our world, buy a copy of FLUKE, which was twice-named a “best book of 2024.”


1. I am not optimistic that my fellow boffins will take kindly to me after this essay.

2. There’s an entire intellectual saga I’m glossing over here called the demarcation problem, which includes debates between, among others, Kuhn and Popper.

3. The bias toward trying to find a single cause for complex phenomena is one of the flaws in social science that I critique in Fluke, which takes chaos theory seriously.

4. Discerning readers will recognize that I am riffing on terms from philosopher David Chalmers and his division of the consciousness debate into hard and easy problems.

5. The paper basically implies that aleatoric, or irreducible, uncertainty could be an unavoidable feature of at least some important dynamics within social systems.

6. Some of my critiques have to do with the over-emphasis on linear regressions.

7. When I say they don’t understand what’s going on, I mean that the interpretations of quantum mechanics are hotly debated and really unclear, with groups of, for example, Copenhagen interpretation disciples, Many Worlds interpretation disciples, and those who say “Shut up and calculate” because it’s currently impossible to understand.

8. Part of the solution, in my view, is a much greater focus on complexity social science.


Why I stopped using AI code editors


TL;DR: I chose to make using AI a manual action, because I felt a slow loss of competence over time when I relied on it, and I recommend that everyone be cautious about making AI a key part of their workflow.

In late 2022, I used AI tools for the first time, even before the first version of ChatGPT. In 2023, I started using AI-based tools in my development workflow. Initially, I was super impressed with the capabilities of these LLMs. The fact that I could just copy and paste obscure compiler errors along with the C++ source code and be told what caused the error felt like magic.

Once GitHub Copilot started becoming more and more powerful, I started using it more and more. I used various other LLM integrations right in my editor. Using AI was part of my workflow.

In late 2024 I removed all LLM integrations from my code editors. I still use LLMs occasionally and I do think AI can be used in a way that is very beneficial for many programmers. So then why don’t I use AI-powered code editing tools?

Tesla FSD

From 2019 to 2021 I drove a Tesla, though I would never make the same purchase again: not for political reasons, but because the cars are quite low quality, very overpriced, and a hell to repair or maintain.

When I got my Tesla, I started using Full Self-Driving (FSD) anytime I could. It felt great to just put the car on FSD on the highway and zone out a bit. Switching lanes was as simple as hitting the turn signal, and the car would change lanes. Driving for me was just getting to the highway, turning on FSD, telling the car to switch lanes every now and then, and listening to music or podcasts while zoning out.

If you drive a car often, you’ll know that when you’re driving on the highway, everything sort of happens automatically. Keeping your car in the lane at the right speed becomes a passive action. It does not require the type of focus that, say, reading a book requires; it’s the type of focus that walking requires, happening in the background of your mind.

In the period from 2019 to 2021 I exclusively drove my Tesla for longer rides. After 2021, I went back to driving regular cars, and making this switch was definitely not what I expected. Driving on the highway required my full attention for the first month or so; I had to re-learn how to keep the car in the middle of the lane without thinking about it.

Being reliant on Tesla’s FSD took away my own ability to go into autopilot.

My experience with AI code editors

Working with AI-powered code editors was somewhat similar. Initially, I felt that I completed work a lot faster when assisted by AI. The work I was doing most of the time was not super complex, and AI felt like putting my Tesla on FSD: I could just guide the machine to do my work for me.

In my free time, I started working on a side project on my personal account on my work device. On this account, I did not have access to Copilot and my other cool, fancy AI tools. This is when using AI started to feel very similar to my Tesla FSD story.

I felt less competent at quite basic software development than I had been a year or so before. All of a sudden, it became very clear to me how reliant I had become on AI tools. Anytime I defined a function, I paused in my editor to wait for the AI tools to write the implementation for me. It took some effort to remember the syntax for writing unit tests by hand.

With my work, AI started to become less useful over time as well. Not only did it take the fun out of it for me, but I started to feel a bit insecure about making some implementation decisions myself. Outsourcing the decisions to the AI seemed a lot easier. But sometimes, the AI couldn’t figure things out, even with the best prompts. It was quite clear that because I did not practice the basics often, I was less capable with the harder parts as well.

The loss of Fingerspitzengefühl

Fingerspitzengefühl [ˈfɪŋɐˌʃpɪtsənɡəˌfyːl] is a German term, literally meaning “finger tips feeling” and meaning intuitive flair or instinct, which has been adopted by the English language as a loanword. It describes a great situational awareness, and the ability to respond most appropriately and tactfully. 1

Defining seniority is a very tough thing. Though in my opinion a lot of being a “senior” is in soft skills, when it comes to technical hard skills, a lot comes down to Fingerspitzengefühl. The longer you work with a language, framework, or codebase, the more you develop this kind of intuition about what the correct approach is. The gut feeling of “something feels off” slowly turns into a feeling of “this is what we should do.”

This developed intuition is not just on an architectural level. A big component is in the lower level details, when to use pointers (or what type of pointers), whether to use asserts or checks, what to pick from the standard library when multiple options are available (though senior C++ programmers still can’t seem to agree on this).

This intuition is what I was slowly losing when relying on AI tools a lot. And this is coming from a lead developer. When I see a lot of hype about vibe coding, I can’t help but think: how do you exactly expect to vibe code your way to senior? Where will you get the skills from to maintain and extend the vibe-coded codebase when the AI tools are down, or have become too expensive?

Even with larger context windows, more computing power, reasoning models or agents, there will be things that AI won’t be able to do. Over time, the AI tools will be more and more powerful, sure. But when you receive a Slack message that “the website works fine, but the app is down in production; I tried it locally and there it works fine, nothing in Sentry either”, good luck getting an AI agent to fix this for you. Maybe it can, maybe it can’t. And when an AI agent can’t figure it out, will your reply be “sorry, Cursor doesn’t get it, will prompt more tomorrow”?

You can get by without these tools

Sometimes it feels like you have to use AI or be out of a job in six months. We’ve been hearing the “3-6 months from now” story for over two years at this point. I stopped trusting CEO promises about functionality “3-6 months from now” years ago: when I got my Tesla in 2019, I paid €6400 for functionality that was supposed to arrive in “3-6 months from now”, and it is still not present in the way it was promised over five years ago.

Right now, it is unlikely that letting AI do your coding will work for projects larger than a university project. When working on legacy systems or larger projects in enterprises or when you need to work with and consult a lot of dependency internals (like I do with Unreal Engine), AI tools will often not be able to make things work. When you need to work with internal DSLs, tools or frameworks, good luck getting LLMs to generate useful output. For some industries, you can’t even use AI tools at all for a multitude of reasons.

For some things you really should not want to rely on AI. When implementing authentication systems like JWT2 signing or RBAC3, adding “and it should be secure” to the prompt won’t make it secure if the model was trained on GitHub code that had CVEs4. When it comes to security, you should be the person who is responsible and who fully understands the system. Critical systems should be written and reviewed by humans. If we are heading towards a situation where one AI agent writes the code, another reviews the autogenerated PR and a third deploys it, we will soon see a huge spike in security issues.

Where I draw the line

I still use AI, sometimes. I think it can be a great tool when used wisely. I draw the line at integration: I keep AI fully separate from my code editor, and I add all of the context manually. I intentionally keep the required effort quite high, so that it disincentivizes me from over-relying on it.

Examples where I use AI for work include “convert these Go tests in structs to tests in a map”, “convert this calculation to SIMD”, or “when the content type is application/zlib, decode the body”5. I have set up custom instructions to only give me the code that has changed, along with instructions for adding it. This way, I am still the one making the changes in the codebase. Just approving a Git diff is not enough; I want to add the code manually myself. Only then do I feel confident enough to sign off on it and take responsibility for it.
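To make the first of those prompts concrete, here is a minimal sketch of what “tests in structs to tests in a map” means in Go: the same table-driven cases, but keyed by name in a map instead of carrying a name field in a slice of structs. The `add` function and the cases are hypothetical; only the shape of the table is the point.

```go
package main

import "fmt"

// add is a stand-in for whatever function the tests cover (hypothetical).
func add(a, b int) int { return a + b }

// Before: a slice of structs, each case carrying its own name field.
var sliceCases = []struct {
	name       string
	a, b, want int
}{
	{"positives", 1, 2, 3},
	{"zero", 0, 0, 0},
}

// After: the same cases as a map keyed by the test name, which drops the
// redundant name field and makes duplicate names a compile-time error.
var mapCases = map[string]struct {
	a, b, want int
}{
	"positives": {1, 2, 3},
	"zero":      {0, 0, 0},
}

func main() {
	failed := 0
	for name, tc := range mapCases {
		if got := add(tc.a, tc.b); got != tc.want {
			failed++
			fmt.Printf("%s: got %d, want %d\n", name, got, tc.want)
		}
	}
	fmt.Printf("%d of %d cases failed\n", failed, len(mapCases))
}
```

In a real test file the loop body would live inside a `t.Run(name, ...)` subtest; the refactor itself is purely mechanical, which is exactly why it is a comfortable thing to delegate.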

Another great use case for AI is learning. I often have questions that are quite uncommon, as I have a few very niche interests. It turns out that adding netcode to a custom game engine using ECS doesn’t have a lot of learning resources. What has worked for me is asking AI to explain pieces of code: “explain this assembly code”, “explain what this shader does”, “which books go in-depth about resolving client/server desyncs in game engines”. The AI sometimes struggles with these and I get mixed results, but the results are still much better than what search engines give me. I will even use it for this article, though not for writing content, but for checking6.

Another benefit of using AI this way is the cost. No unnecessary API calls, manually managed contexts and more control over the LLM settings. I use a desktop application with a bunch of different LLMs hooked up to it. I have used it daily for the last 3 months or so, and in total, I have consumed around $4 in credits.

I do want to add that with some things I am stricter. On my personal website, I don’t want any AI-generated content, whether that’s text or images. I personally don’t like AI-generated images or ‘art’ for various reasons, and I think AI-generated text lacks character; it feels very flat and boring. When something is created by humans, it has more value to me than when it is created by AI.

Doing what you love

It is also worth noting that there are more things to think about than efficiency and productivity. It’s also about doing what you love. If you love coding, keep doing it yourself, even if a computer might be better at it.

In 1997, Deep Blue won the chess match against the then world chess champion Garry Kasparov7, yet people still play chess. When it comes to programming, I’d say that I program for the same reason that people still play chess8. Though chess and software development are very different, with chess being much more limited in scope, I think it is good to keep in mind that sometimes, we can do things just to enjoy them.

My advice to new programmers

Don’t become a forever junior who lets AI do all their work. If you want to become a programmer, learn to program yourself. Be curious, and put in the time and effort to learn how things really work, and how things work in the layer below that. It really pays off. Learning how everything works under the hood, and putting that knowledge to use, is amazing. Keep learning; don’t be a prompt engineer (if you can even call that engineering). Believe me, it’s more fun to be competent9.

Even though AI might be smarter than you, never blindly trust the AI output. Don’t build your whole workflow around it. Sometimes try to work without it for a few days. The better at programming you are, the more AI will get in your way for the more complex work.

If you learn to code now and keep building your skills instead of letting AI do all the heavy lifting, you’ll be capable of fixing the messes that vibe coding is creating right now. I don’t want to sound elitist, but if you don’t want to learn to go beyond vibe coding, maybe coding isn’t for you. Positions where all the work can be done by vibe coding are the ones that will be eliminated first when AI becomes more powerful.

And remember: if you cannot code without AI, you cannot code.

Conclusion

When you are using AI, you are sacrificing knowledge for speed. Sometimes that trade-off is worth making. But it is important to remember that even the best athletes in the world still do their basic drills, and for good reason. The same applies to software development: you need to practise the basics to be able to do the advanced work. You need to keep your axe sharp.

We are still a long way out from AI taking over our jobs. A lot of companies are creating FOMO10 as a sales tactic to get more customers, to show traction to their investors, to get another round of funding, to generate the next model that will definitely revolutionize everything.

AI is a tool; it is not good or bad in itself, it’s what you do with it that matters. I do think it can be a great tool, as long as your workflow doesn’t depend on it. Make sure you can still work effectively without it, don’t push code to production that you don’t fully understand, and don’t think of AI as a replacement for your own thinking. Stay curious, keep learning.


  1. Source: Wikipedia ↩︎

  2. JSON Web Tokens, or JWTs are a common way to generate authentication tokens, among other uses ↩︎

  3. Role-based access control (RBAC) is a mechanism to restrict system access by setting permissions and privileges ↩︎

  4. Common Vulnerabilities and Exposures (CVE) is a program used to identify, define and catalog publicly disclosed cybersecurity vulnerabilities, cve.org ↩︎

  5. The specific contents here don’t matter that much, they are just examples of what I use AI for ↩︎

  6. The prompt I’ve used for this article: “I want you to proofread an article I have written. I want you to give me feedback on incorrect grammar or broken sentences, using UK grammar. Do not comment on sentences that should be broken up or things that could be improved just slightly, only real errors. Do not return modified sentences, but point out where the issue is, under which paragraph, in which sentence and what the mistake is. I will make the required changes myself”. It came back with a few typos, like “form” that should be “from”, “eb” that should be “be”. ↩︎

  7. Source: IBM History ↩︎

  8. Source: Tsoding on YouTube ↩︎

  9. Source: DHH in an interview on YouTube ↩︎

  10. Fear of missing out ↩︎


The Average College Student Is Illiterate

Oxford undergraduates on a late night drinking spree, 1824. By Robert Cruikshank. (Photo by Hulton Archive.)

I’m Gen X. I was pretty young when I earned my PhD, so I’ve been a professor for a long time—over 30 years. If you’re not in academia, or it’s been a while since you were in college, you might not know this: the students are not what they used to be. The problem with even talking about this topic at all is the knee-jerk response of, “yeah, just another old man complaining about the kids today, the same way everyone has since Gilgamesh. Shake your fist at the clouds, dude.” So yes, I’m ready to hear that. Go right ahead. Because people need to know.

First, some context. I teach at a regional public university in the United States. Our students are average on just about any dimension you care to name—aspirations, intellect, socio-economic status, physical fitness. They wear hoodies and yoga pants and like Buffalo wings. They listen to Zach Bryan and Taylor Swift. That’s in no way a put-down: I firmly believe that the average citizen deserves a shot at a good education and even more importantly a shot at a good life. All I mean is that our students are representative; they’re neither the bottom of the academic barrel nor the cream off the top.

As with every college we get a range of students, and our best philosophy majors have gone on to earn PhDs or go to law school. We’re also an NCAA Division 2 school and I watched one of our graduates become an All-Pro lineman for the NFL. These are exceptions, and what I say here does not apply to every single student. But what I’m about to describe are the average students at Average State U.

Reading

Most of our students are functionally illiterate. This is not a joke. By “functionally illiterate” I mean “unable to read and comprehend adult novels by people like Barbara Kingsolver, Colson Whitehead, and Richard Powers.” I picked those three authors because they are all recent Pulitzer Prize winners, an objective standard of “serious adult novel.” Furthermore, I’ve read them all and can testify that they are brilliant, captivating writers; we’re not talking about Finnegans Wake here. But at the same time they aren’t YA, romantasy, or Harry Potter either.




I’m not saying our students just prefer genre books or graphic novels or whatever. No, our average graduate literally could not read a serious adult novel cover-to-cover and understand what they read. They just couldn’t do it. They don’t have the desire to try, the vocabulary to grasp what they read, and most certainly not the attention span to finish. For them to sit down and try to read a book like The Overstory might as well be me attempting an Iron Man triathlon: much suffering with zero chance of success.

Students are not absolutely illiterate in the sense of being unable to sound out any words whatsoever. Reading bores them, though. They are impatient to get through whatever burden of reading they have to, and move their eyes over the words just to get it done. They’re like me clicking through a mandatory online HR training. Students get exam questions wrong simply because they didn’t even take the time to read the question properly. Reading anything more than a menu is a chore and to be avoided.

They also lie about it. I wrote the textbook for a course I regularly teach. It’s a fairly popular textbook, so I’m assuming it is not terribly written. I did everything I could to make the writing lively and packed with my most engaging examples. The majority of students don’t read it. Oh, they will come to my office hours (occasionally) because they are bombing the course and tell me that they have been doing the reading, but it’s obvious they are lying. The most charitable interpretation is that they looked at some of the words, didn’t understand anything, pretended that counted as reading, and returned to looking at TikTok.

This study says that 65% of college students reported that they skipped buying or renting a textbook because of cost. I believe they didn’t buy the books, but I’m skeptical that cost is the true reason, as opposed to just the excuse they offer. Yes, I know some texts, especially in the sciences, are expensive. However, the books I assign are low-priced. All texts combined for one of my courses is between $35-$100 and they still don’t buy them. Why buy what you aren’t going to read anyway? Just google it.


Even in upper-division courses that students supposedly take out of genuine interest they won’t read. I’m teaching Existentialism this semester. It is entirely primary texts—Dostoevsky, Kierkegaard, Nietzsche, Camus, Sartre. The reading ranges from accessible but challenging to extremely difficult but we’re making a go of it anyway (looking at you, Being and Nothingness). This is a close textual analysis course. My students come to class without the books, which they probably do not own and definitely did not read.

Writing

Their writing skills are at the 8th-grade level. Spelling is atrocious, grammar is random, and the correct use of apostrophes is cause for celebration. Worse is the resistance to original thought. What I mean is the reflexive submission of the cheapest cliché as novel insight.

Exam question: Describe the attitude of Dostoevsky’s Underground Man towards acting in one’s own self-interest, and how this is connected to his concerns about free will. Are his views self-contradictory?

Student: With the UGM its all about our journey in life, not the destination. He beleives we need to take time to enjoy the little things becuase life is short and you never gonna know what happens. Sometimes he contradicts himself cause sometimes you say one thing but then you think something else later. It’s all relative.

Either that, or it looks like this:

Exam question: Describe the attitude of Dostoevsky’s Underground Man towards acting in one’s own self-interest, and how this is connected to his concerns about free will. Are his views self-contradictory?

Student: Dostoevsky’s Underground Man paradoxically rejects the idea that people always act in their own self-interest, arguing instead that humans often behave irrationally to assert their free will. He criticizes rationalist philosophies like utilitarianism, which he sees as reducing individuals to predictable mechanisms, and insists that people may choose suffering just to prove their autonomy. However, his stance is self-contradictory—while he champions free will, he is paralyzed by inaction and self-loathing, trapped in a cycle of bitterness. Through this, Dostoevsky explores the tension between reason, free will, and self-interest, exposing the complexities of human motivation.

That’s right, ChatGPT. The students cheat. I’ve written about cheating in “Why AI is Destroying Academic Integrity,” so I won’t repeat it here, but the cheating tsunami has definitely changed what assignments I give. I can’t assign papers any more because I’ll just get AI back, and there’s nothing I can do to make it stop. Sadly, not writing exacerbates their illiteracy; writing is a muscle and dedicated writing is a workout for the mind as well as the pen.

What’s changed?

The average student has seen college as basically transactional for as long as I’ve been doing this. They go through the motions and maybe learn something along the way, but it is all in service to the only conception of the good life they can imagine: a job with middle-class wages. I’ve mostly made my peace with that, do my best to give them a taste of the life of the mind, and celebrate the successes.

Things have changed. Ted Gioia describes modern students as checked-out, phone-addicted zombies. Troy Jollimore writes, “I once believed my students and I were in this together, engaged in a shared intellectual pursuit. That faith has been obliterated over the past few semesters.” Faculty have seen a stunning level of disconnection.

What has changed exactly?

  • Chronic absenteeism. As a friend in Sociology put it, “Attendance is a HUGE problem—many just treat class as optional.” Last semester across all sections, my average student missed two weeks of class. Actually it was more than that, since I’m not counting excused absences or students who eventually withdrew. A friend in Mathematics told me, “Students are less respectful of the university experience —attendance, lateness, e-mails to me about nonsense, less sense of responsibility.”

  • Disappearing students. Students routinely just vanish at some point during the semester. They don’t officially drop out or withdraw from the course, they simply quit coming. No email, no notification to anyone in authority about some problem. They just pull an Amelia Earhart. It’s gotten to the point that on the first day of class, especially in lower-division, I tell the students, “Look to your right. Now look to your left. One of you will be gone by the end of the semester. Don’t let it be you.”

  • They can’t sit in a seat for 50 minutes. Students routinely get up during a 50 minute class, sometimes just 15 minutes in, and leave the classroom. I’m supposed to believe that they suddenly, urgently need the toilet, but the reality is that they are going to look at their phones. They know I’ll call them out on it in class, so instead they walk out. I’ve even told them to plan ahead and pee before class, like you tell a small child before a road trip, but it has no effect. They can’t make it an hour without getting their phone fix.

  • It’s the phones, stupid. They are absolutely addicted to their phones. When I go work out at the Campus Rec Center, easily half of the students there are just sitting on the machines scrolling on their phones. I was talking with a retired faculty member at the Rec this morning who works out all the time. He said he has done six sets waiting for a student to put down their phone and get off the machine he wanted. The students can’t get off their phones for an hour to do a voluntary activity they chose for fun. Sometimes I’m amazed they ever leave their goon caves at all.

I don’t blame K-12 teachers. This is not an educational system problem, this is a societal problem. What am I supposed to do? Keep standards high and fail them all? That’s not an option for untenured faculty who would like to keep their jobs. I’m a tenured full professor. I could probably get away with that for a while, but sooner or later the Dean’s going to bring me in for a sit-down. Plus, if we flunk out half the student body and drive the university into bankruptcy, all we’re doing is depriving the good students of an education.

We’re told to meet the students where they are, flip the classroom, use multimedia, just be more entertaining, get better. As if rearranging the deck chairs just the right way will stop the Titanic from going down. As if it is somehow the fault of the faculty. It’s not our fault. We’re doing the best we can with what we’ve been given.

All this might sound like an angry rant. I’m not angry, though, not at all. I’m just sad. One thing all faculty have to learn is that the students are not us. We can’t expect them all to burn with the sacred fire we have for our disciplines, to see philosophy, psychology, math, physics, sociology, or economics as the divine light of reason in a world of shadow. Our job is to kindle that flame, and we’re trying to get that spark to catch, but it is getting harder and harder and we don’t know what to do.

Hilarius Bookbinder is the pseudonym for a tenured professor with an Ivy League PhD who writes Scriptorium Philosophia.

A version of this essay originally appeared in Scriptorium Philosophia.




The Ordinary Sacred


A Philosophy of Happiness Through the Uncurated Life

The Ordinary Sacred

In 1953, Ernest Dichter—the father of motivational research—wrote that the American consumer was no longer purchasing soap to clean themselves, but to feel clean.

Advertising wasn’t selling products.

It was selling identity.

A bar of soap promised not just hygiene but moral worth.

Fast-forward seventy years, and you can trace a straight, greasy line from Dichter to Instagram meal-prep influencers, YouTubers waxing poetic about minimalism from $4,000 Herman Miller chairs, and Twitter productivity gurus who wake up at 4:30 a.m. to drink bulletproof coffee and document their sense of superiority.

We didn’t stop at soap. That instinct—to purchase meaning, to wear values like accessories—expanded until it swallowed almost everything. Now, even our meals, hobbies, outfits, and downtime are curated to project identity. Underneath it all is the same pitch Dichter uncovered: you are what you consume. Only now, it’s not just what you buy—it’s how you present it. Life itself is packaged, stylized, and sold back to the self.

Jean Baudrillard summed it up:

“We are at the point where consumption is laying hold of the whole of life, where all activities are sequenced in the same combinatorial mode, where the course of satisfaction is outlined in advance, hour by hour, where the ‘environment’ is total—fully air-conditioned, organized, culturalized. The beneficiary of the consumer miracle also sets in place a whole array of sham objects, of characteristic signs of happiness, and then waits (waits desperately, a moralist would say) for that happiness to alight.”

We’re miserable.

Aren’t we?

Lonelier, more anxious, more frustrated, and more exhausted than ever.

Not because we lack comfort or tools or access—but because we’ve staged too much of ourselves.

We’ve turned living into editing. When every bite is a performance, every outfit a brand decision, every hobby a pitch, there’s no space left for boredom. Or rest. Or actual pleasure. We scroll through each other’s highlight reels while quietly assembling our own, haunted by the suspicion that everyone else is doing it better—and forgetting to live in any of it.

I crashed hard after the pandemic. It wasn't cinematic, and it wasn't a breakthrough. It was dull. Exhausting. Slow. It was a kind of rot, spreading inward. At first, I thought I was just tired. Overstimulated, under-rested, another victim of collective burnout. But it got worse. My routines—ones I’d once clung to like scaffolding—started to feel grotesque. Wake up, hydrate, gratitude journal, morning sunlight, stretch, optimize. For what? I couldn’t answer. I didn’t even want to ask anymore. I’d spent years trying to build a life that looked good on paper, sounded smart in conversation, and played well on social media. I had all the systems, all the trackers, all the polished, adult routines. I was a walking Notion template. And...I was hollowed out. I wasn’t living—I was managing myself like a brand asset. I didn’t know how to stop.

What broke me wasn’t a single moment. It was the accumulation of hundreds of tiny ones that didn’t feel like living. Eating meals I didn’t enjoy but felt obligated to consume because they were “clean.” Turning walks into podcasts into productivity. Posting through loneliness and calling it community. Trying to be impressive while privately falling apart. I felt like a parody of myself—curated, competent, and completely numb.

There’s a Camus quote I keep coming back to:

“At any street corner, the feeling of absurdity can strike any man in the face. As it is, in its distressing nudity, in its light without effulgence, it is elusive. But that very difficulty deserves reflection. It is probably true that a man remains forever unknown to us and that there is in him something irreducible that escapes us.”

Absurdity is precisely the word.

One night, I just sat on the floor and thought: none of this is working. None of this is helping. I’ve done everything modern culture told me to do to be happy, successful, fulfilled. I was tracking everything but feeling nothing. That was the moment something cracked open in a quiet, defeated realization: I don’t want to live like this. I don’t want my life to be a performance. I don’t want to be optimized. I just want to feel human again. I want to be messy. Boring. Unimpressive. Real.

In the months, years since the pandemic's peak, I've been unable to reconcile the cognitive dissonance. Seeing the inauthenticity and performance of modern happiness has made it impossible to achieve happiness through the same means. There's a falseness to it all, a sense of how fragile the facade actually is.

After the collapse, after the burnout, after the creeping dread that none of the things I’d been told to care about were making me feel human, I started noticing what actually felt good. Not "aspirational" good. Not "productive" good. Just good. A grilled cheese sandwich eaten in the sun. A day without notifications. Saying no and not explaining. I didn’t see it as a philosophy. I just knew I felt less fake. Less hollow. Less like I was performing a version of myself I couldn’t stand anymore. Over time, I started tracing a pattern. What if I stopped managing my life like a brand? What if I let it be messy, private, low-stakes? What if that was enough?

The Ordinary Sacred: A Philosophy of Uncurated Life

The Ordinary Sacred is my idea for a philosophy of presence without spectacle. A life without audience. A refusal to curate the self into something consumable. It honors sufficiency over scale, texture over narrative, and experience over optics. It says: the real, unpolished, unposted life is enough—and always was.

It names the thing without dressing it up. It doesn’t try to win attention. It doesn’t demand belief. It simply offers a shift—a posture of attention, a refusal to perform, a commitment to being here. And if it sounds pretentious, hell - you should see the rest of my blog.

The Ordinary Sacred turns directly into the mundane. This is where meaning lives. In unremarkable afternoons. In laughter that goes unrecorded. In friendships that don’t need captions. In lives that never go viral. In meals that don’t “flush toxins” or whatever absurd promise is making the wellness rounds this week.

This is the closest I’ve come to a working, personal theory of happiness. Not the performative kind. Not the kind with a morning routine and a five-year plan. Not the kind you can monetize, or coach, or convert into bullet points or turn into a cult. The kind you feel in your teeth when you bite into something good, or in your chest when you laugh too hard, or in your shoulders when you realize—for the first time all day—that you’re not clenching.

It starts with a full-body, bone-deep refusal. To stop turning your life into content. To stop polishing your edges so you can be easier to consume. To stop translating every moment into something brand-safe, clever, legible. To stop acting like joy only counts if it comes with a graph and performs well in metrics.

We live under systems—economic, cultural, digital—that demand we strive to be impressive. Inspirational. Aspirational. Permanently visible. Permanently performing. Eternally, achingly unsatisfied. We’re trained to ask, before doing anything: Will this make good content? Will this signal something useful? Will this get me closer to who I’m “supposed” to be?

The Ordinary Sacred says: fuck all that. Be ordinary. Be quiet. Be offline. Do things because they feel good, or because they’re funny, or because they’re yours. Not because they’ll look good later. What's most valuable isn't what's exceptional, curated, or performative, but rather what's common, authentic, and directly experienced.

It’s not anti-tech, or anti-ambition, or a cry to return to the earth from whence, etc. You can still use GPS. You can still have a favorite brand of headphones. You don’t have to churn butter or reject civilization or free float into the void. You just have to stop selling yourself to yourself.

It’s lo-fi, low-stakes, non-viral. It’s fully alive. It gives you permission to breathe. To be boring. To be happy in a way that no one else gets to gatekeep or approve.

TLDR: The Ordinary Sacred:

There are no habits, wellness tips or life hacks here. Just solid, practical and deeply personal decisions in a culture that pushes all of us to perform.

They’re not the whole picture.

Only the first four that made sense.

  1. Eat for yourself, not for anyone else — Real food, real joy.
  2. Work to pay your bills, not to validate your worth — Labor is not identity.
  3. Buy things, not signals — Use and want over performance.
  4. Live your own life, not for your feed — Escape is freedom.

Eat For Yourself

Wellness influencers parade turmeric like it’s a gift from an alien super being. Kale is fetishized. Everything is raw, anti-oxidized, adaptogenic. Food is now a purification ritual for people afraid of living in their bodies. It’s asceticism disguised as luxury. We pretend a spirulina bowl is satisfying. It’s fucking not. It tastes like damp grass and self-denial.

Food has become a moral minefield. We talk about "clean eating" like impurity is contagious. We moralize portions. We collapse nutrition into discipline, and discipline into righteousness. It’s no longer just a question of what you eat—it’s a referendum on who you are. Hungry? Control it. Ate something with cheese? Better confess. Craving carbs? Repent.

We count calories like sins. We call dessert a "guilty pleasure." We look at our plates and think about punishment. All of it delivered in a glossy wrapper of faux empowerment—"wellness" that is just diet culture with a better font.

And this grift hits hardest where it always does: at the intersection of class, image, and shame. The wellness industry sells restriction as virtue, but only if it comes in the right package. Smoothie bowls cost $14. Alkaline water is aspirational. Fasting is chic if you’re thin and rich, disordered if you’re poor and fat. We have created an entire language of virtue and failure out of what people put in their mouths. And we judge. Constantly.

It's not health. It's a caste system built on quinoa and quiet cruelty.

Food is not fuel. It’s food. It’s greasy, salty, sweet, hot, cold, cheap, satisfying. It’s a double cheeseburger from a place with flickering lights. It’s fries at midnight, cereal for dinner, gas station snacks on a road trip. It’s eating something because it smells good, not because it fits your macros or matches your aesthetic. It doesn’t need to be “activated.” It doesn't need to be spiritualized. It doesn't need to be posted.

Eat the thing. Eat it hot, with your hands, juice on your chin. Don’t wait for the right angle. Don’t explain it. Don’t count it. Taste it.

And when you taste it—really taste it—you’ll understand why Brillat-Savarin, the French epicure and proto-food philosopher, opened his Physiology of Taste with this line: “Tell me what you eat, and I will tell you what you are.” He wasn’t moralizing. He was celebrating. He argued that pleasure from food was not only real, but necessary: “The pleasures of the table belong to all ages, to all conditions, to all countries, and to all areas; they mix with all other pleasures and remain at last to console us for their loss.”

Food isn’t a weakness. It’s a foundation. It outlives beauty, relevance, even sex. It is the last pleasure to go. And still, we punish ourselves. We eat alone and scroll silently past staged plates that make us feel worse. We are told to crave less, shrink more, fast longer.

And we carry that punishment into our bodies. The obsession with weight has become its own religion—worship of thinness masquerading as discipline, devotion measured in deprivation. Your worth reduced to numbers: grams, inches, pounds. Your character tied to how well you say no to hunger. This isn’t health. This is control.

Eat with others. Eat things that drip. Let food be messy and close and real. As M.F.K. Fisher wrote in 1943, during wartime rationing no less, “Sharing food with another human being is an intimate act that should not be indulged in lightly.” That intimacy—the kind born from greasy hands passing a burger across a table—is more nourishing than any supplement, any powder, any smug, made-up-toxin-free pile of performative buckwheat.

Seneca, even in his Stoicism, admitted this: “We should look for someone to eat and drink with before looking for something to eat and drink.” Eating alone isn’t just lonely. It’s unnatural.

A cheeseburger—greasy, simple, immediate—is not a compromise. It’s an honest answer. You don’t have to justify it. You just have to chew.

Simple happiness honors appetite. It mocks dietary performance art. It sees virtue signaling in quinoa bowls and says: you are allowed to want salt. You are allowed to want fat. You are allowed to be full. You are allowed to be more than full. You are allowed to enjoy yourself and your self.

The world doesn’t need more photographed smoothie bowls. It needs more loud laughter over shared fries, more sauce on shirts, more meals that begin with hunger and end in satisfaction, not shame. That’s living. That’s the ethic.

This—this mess, this unfiltered joy—is part of what The Ordinary Sacred defends. A return to trust. In our bodies. In our appetites. What we eat is not a performance. It’s a reunion with our selves, with each other, with the parts of life we’ve been taught to edit out.

Work to Pay Your Bills

The modern economy is a pantomime of purpose. "Do what you love," we are told. As if love is scalable. As if passion is billable. As if your deepest sense of meaning should report to a quarterly KPI.

It sounds harmless, even inspiring, until you realize it’s a trap. Once you collapse identity into labor, every moment becomes a performance review. You’re not just working. You’re branding. You’re optimizing. You’re pretending that the thing you’re doing for rent is your soul’s calling. And if it isn’t? Then you’ve failed—not just professionally, but existentially.

There’s something uniquely exhausting about being alive in an era where your job description is expected to double as your identity. You’re not a coder—you’re a Ninja, a Guru, an Evangelist. The Ordinary Sacred looks at all that and laughs. No one needs your LinkedIn headline to be poetic. Just make rent. Go home. Be a person.

Work is work. You show up. You do the thing. You clock out. That doesn’t make you lazy. That makes you sane.

And yes, there’s dignity in labor. But there’s also dignity in limits. Working more doesn’t make you more if it doesn’t make you more money to support your family, to live your life. You can be brilliant and not be busy. You can be wise and take naps. You can be proud, virtuous (if that's your schtick), and completely unscalable.

There's comfort here in Aristotle. In Politics, he writes, “the end of labor is to gain leisure,” and defines leisure not as rest from exhaustion, but as the space in which real life begins. It’s attention. It’s spaciousness. It’s where we cultivate thought, relationships, and the kinds of things you can’t track on a company dashboard. But it’s also binge-watching a show until your brain goes blank because, fuck it, that’s what you want to do, sitting on the couch at the end of the day with the person you love more than anyone else in the whole damn world and cycling through episodes of Star Trek. Because that’s where the happiness is.

Seneca picked up that thread centuries after Aristotle. In his essay On the Shortness of Life, he observed, “It is not that we have a short time to live, but that we waste a lot of it… Life is long if you know how to use it.” He was writing to Romans who were overcommitted, overextended, and distracted by ambition. Sound familiar? His solution wasn’t to do more. It was to stop mistaking busyness for worth.

In his 1932 essay In Praise of Idleness, Bertrand Russell argued that the modern worship of work had become a sickness. “The morality of work,” he wrote, “is the morality of slaves, and the modern world has no need of slavery.” What he proposed wasn’t laziness—it was balance. He imagined a world where people worked less not because they lacked ambition, but because they had other things worth doing: reading, thinking, walking, being. His version of a good life wasn’t crammed with productivity. It was spacious enough for thought, curiosity, and actual rest. Work wasn’t the point. Living was.

But here we are—half-human, half-product, refreshing inboxes and waiting for the dopamine of a Slack ping. Watching people post hustle reels while secretly googling burnout symptoms.

Stop making your job your personality. Pay your bills. Turn off your phone. Leave the spreadsheet in the cloud and go make dinner. You are not a commodity. You are not your engagement metrics. You are not your title or your side hustle or your carefully sharpened personal brand.

You are a person. Work enough to sustain that. Then stop.

Buy Things, Not Signals

There are whole industries now that don’t sell products—they sell proof. Proof that you’re tasteful. Proof that you’re refined. Proof that your life is intentional, curated, and on trend. There are couches you can’t nap on, plates you don’t eat off, clothes you only wear to signal restraint. These things don’t serve life. They stage it.

We’ve convinced ourselves that taste is the same thing as virtue. That a beautiful apartment is a moral accomplishment. That buying the right lamp means you’ve achieved some kind of internal alignment. But what’s really happening is this: we’re buying our own dissatisfaction. We’re furnishing for the feed. We’re building rooms we don’t relax in. Can’t relax in. Can’t even live in.

Curation culture is a profound narrowing. It's conformity with better lighting. You are allowed to be unique, but only in ways the algorithm understands: neutral tones, soft textures, minimalist aesthetics with maximum expense. Your life becomes a set. And you? A prop.

The Ordinary Sacred doesn’t care what your living room looks like on camera. It doesn’t care if you have matching decanters or poured concrete bookends. It asks: is your life usable? Do your things serve you, or do you serve them?

This tension isn’t new. The Stoics wrote about it constantly—not to glorify suffering, but to keep the focus where it belonged. Musonius Rufus, in his Lectures, writes:

“We must not admire those who own great possessions, but those who have the strength to do without them. For it is not he who has little, but he who desires more, that is poor. The man who is not in need is not the one who has much, but the one who can go without much.”

That’s not minimalism-as-aesthetic. That’s minimalism as escape from the mental leash of consumer anxiety. From the constant itch of want. From the cultural myth that more—prettier, sleeker, more tasteful—is the same as better.

Socrates, when walking through the Athenian marketplace, is reported to have said, “How many things there are in this world of which I have no need.” Imagine saying that in a shopping mall. Imagine feeling it in your bones while scrolling through influencer home tours. It’s not asceticism. It’s relief.

Aim lower. Buy what works. Buy what brings you real joy, not curated aspiration. You don’t need a capsule wardrobe. You need pants that don’t dig into your ribs when you sit. You don’t need artisanal Japanese storage boxes. You need to be able to find your damn keys.

This is not an attack on beauty. Beauty matters. But performance is a parasite that feeds on it. When the primary function of an object becomes its optics, it’s already dead. A beautiful object that demands anxiety isn’t beautiful. It’s a burden.

There is no moral prize for restraint. No award for matching the algorithm’s idea of dignity. Buy dumb stuff. Buy ugly mugs that feel good in your hand. Buy cozy blankets in loud colors. Buy the thing that reminds you of your childhood, even if it’s kitsch. Let your home look like you actually live in it.

Do it for you. Not for the performance.

Live Your Own Life

This is the big one. This is the hardest one. Because even if you eat what you want, and work to live, and buy things that serve you, there’s still the temptation and the judgement of the performance. The hum. The constant background noise of the self being watched.

And worse: the constant comparison.

You check your reflection in the café window. But you’re not really looking at yourself. You’re comparing that reflection to someone else’s post. Someone else’s morning. Someone else’s outfit, hair, discipline, joy. You half-laugh and wonder how it would’ve looked on camera. You draft the caption for the moment before the moment has even ended. You narrate your own life in real time, for an imaginary audience who may or may not ever see it. Every good day is content-in-waiting. Every bad day is a potential comeback arc. We have internalized the lens. And through that lens, we are constantly measuring.

Their kitchen is cleaner. Their life is quieter. Their goals are clearer. Their grief is more graceful. Their version of authenticity looks better than yours feels.

Social media didn’t create this instinct—it just strapped a monetization engine to it. Humans have always sought approval. But now we seek it obsessively, constantly, publicly, and with analytics. Everything becomes a pitch. Even our sincerity starts to feel rehearsed. Erich Fromm, in The Sane Society, warned of the rise of what he called the “marketing orientation,” where a person’s value is determined by their ability to sell themselves. “The self is experienced as a commodity,” he wrote, “whose value and meaning are extrinsic to the self and lie in its exchangeability.”

And what is Instagram but an exchange market for identity? What is TikTok but a derivatives market for personality? We are priced by our virality, our reach, our relevance. Every scroll becomes a series of self-interrogations: Who’s doing life better? Who’s further along? Who’s winning?

Get out. Get the fuck out. Not forever. Not in dramatic fashion. Just enough to find your pulse again. Just long enough to remember that you were never supposed to be measured against everyone else’s highlight reel.

Go for a walk and don’t track your steps. Sit in a room without documenting the lighting. Tell a joke that doesn’t get recorded, doesn’t get retweets, doesn’t get remembered by anyone but the person you told it to. Laugh in a way that’s too loud. Be unflattering. Be unreadable. Wear the weird shirt. Take the ugly photo and don’t post it.

I’m not flogging the dead horse of "authenticity." That word has been bled dry by marketers. Everything is "authentic" now—packaged transparency, curated vulnerability, aesthetic relatability. What we’re talking about is something rarer: being invisible. Not erased. Not ashamed. Just free from the eyes that don’t matter.

There is a kind of peace in the unrecorded life. A soft, slow quiet that doesn’t ask you to perform. Montaigne understood this long before the internet existed. "I want to be seen as I am," he wrote, "neither better nor worse." And yet, he also kept most of his thoughts private. “The greatest thing in the world is to know how to belong to oneself.”

Be unrecorded. Be untagged. Be.

Marcus Aurelius, writing to himself in Meditations, saw the trap of reputation clearly. “Waste no more time arguing what a good man should be. Be one.” Be. And later: “Do not waste what remains of your life in speculating about your neighbors... how he did this or that, or what he said, or thought, or schemed. Look instead to what you have to do yourself.”

And Epictetus was even blunter: “If you want to improve, be content to be thought foolish and stupid.” It’s the opposite of branding. It’s the rejection of legibility. Improvement, he suggests, is internal. The rest is noise.

In Being and Time, Heidegger argued that modern life alienates us from authentic existence by pushing us into the “they-self”—a mode of being where we exist primarily through the eyes of others. We become anonymous, absorbed in what "they" do, think, expect. "Everyone is the other, and no one is himself," he writes. Social media has weaponized the they-self. And we volunteered.

You can undo that. Not with slogans. With practice. With attention. You can ask: who are you when you’re not being watched? Who are you when nothing is being documented? Who are you without the mirror of other people’s reactions?

You do not need to be legible. You do not need to be a narrative. You do not need to be consumed. You need to be present.

Attention is sacred. Give it to your life—not to the algorithm. And for the love of your peace: stop measuring. You were never supposed to be someone else’s content. And they were never meant to be yours.

The Counterarguments

Isn’t this just another aesthetic? Isn’t this just a different kind of performance—anti-gloss as gloss, curated messiness, low-res virtue signaling? Isn’t it just normcore for burnout millennials?

That’s the risk. Absolutely. Every idea can be flattened into content, ironized into branding, merchandised into a lifestyle product with a logo and a tagline. But that doesn’t mean the idea is empty. It means it’s vulnerable. It means you have to guard it.

In The Society of the Spectacle, Guy Debord warned that “everything that was directly lived has moved away into a representation.” Experience becomes image. Life becomes theatre. Even revolt becomes performance. “The spectacle,” he wrote, “is not a collection of images; it is a social relation among people, mediated by images.”

Which means that anything—including this—can be eaten by the spectacle and turned into another flavor of capitalism. That doesn’t make it meaningless. And it doesn't make it any less precious.

Living the Ordinary Sacred doesn’t mean you disappear. It doesn’t mean you never post again, or live in a cabin without Wi-Fi, or become the annoying guy at parties who lectures people about Instagram. It means you post less, and later, and without expectation. You don’t stage your joy. You don’t curate your dinner. You don’t turn every genuine thing into a teaser for your personality.

You let some things remain yours. Not in the sense of ownership. In the sense of privacy. In the sense that no one else gets to consume or be consumed by them.

Montaigne knew it, even in the 16th century. He wrote, “A man must keep a little back-shop, all his own, wherein to be himself, without reserve.” Not everything goes in the window. Not everything is for show. “If we have been able to live well and think well,” he continued, “we have lived enough.”

You are allowed to have a self that isn’t publicly vetted. You are allowed to experience a moment that doesn’t become story. You are allowed to live outside the logic of performance.

The Ordinary Sacred is a practice. And like any practice worth doing, it requires restraint. You resist the urge to narrate, to beautify, to stage. Not because those things are evil, but because they’re constant—and you need space to hear yourself think.

You can still enjoy the internet. You can still have fun online. But the minute your real life starts feeling like a draft folder for future posts, something is broken. The Ordinary Sacred is the attempt to stop that collapse. To create space between the event and the upload. To protect the sacred ordinary from the economy of spectacle.

This isn’t countercultural for the sake of it. It’s countercultural because the culture is sick. You don’t need to reject technology. You need to reject the demand to turn your soul into a feed.

Keep something for yourself. Leave something unposted. Let your life be bigger than your screen.

I'm not claiming this is a new idea. It draws from a lineage of thinkers and traditions that insisted the sacred was never elsewhere. It was always in the here and now—if you knew how to look.

Zen Buddhism and Stoicism give us the clearest gesture: chop wood, carry water. Enlightenment isn’t found in escape or spectacle. It’s found in full presence within the ordinary. The practice is the life. The life is the practice.

Phenomenology, especially in the work of Merleau-Ponty, roots all meaning in lived experience. There is no truth apart from the body, from perception, from being-in-the-world. The ordinary isn’t something to transcend. It’s the only ground we’ve ever had.

The Transcendentalists—Emerson, Thoreau, Fuller—understood that what’s called "divine" doesn’t live above us. It lives around us. Underfoot. In trees and rivers and silence and solitude. Thoreau wrote: “I went to the woods because I wished to live deliberately… and see if I could not learn what it had to teach.” He didn’t find truth in institutions. He found it in the texture of everyday life.

Thomas Moore, in Care of the Soul, wrote about tending to the world, not overcoming it. He reframed the sacred as something found in dishes, conversations, melancholy, repetition. Not transcendence. Depth.

Annie Dillard, in Pilgrim at Tinker Creek, knew a kind of fierce noticing. She found the infinite in the specific. The sacred in the granular. “How we spend our days is, of course, how we spend our lives,” she wrote. No spectacle. Just close, attentive being.

Where This Leaves Me

I got here the long way around. Through collapse. Through burnout. Through the slow erosion of energy, interest, and self. Through all the self-improvement and content and aesthetics and hustle and intention and systems and routines—until one day I just couldn’t do it anymore.

I kept looking for something that would make my life feel like mine. Something to click. Something to unlock the sense that I was really in it. Instead, I just got more polished. More watchable. More brandable. More tired.

I thought quitting would look like failure. But it didn’t. It looked like sitting on the floor and breathing. It looked like making scrambled eggs without narrating it. It looked like walking without tracking. It looked like being a person instead of a project.

There was no revelation. Just the quiet repetition of No.

No, I’m not posting that. No, I don’t care if this aligns with my "voice." No, I'm not tracking this. No, I’m not optimizing this meal, this moment, this body, this day.

And then—without realizing it—I was living differently. Still online. Still in the world. Just… off the feed. Out of the frame. Inside something I hadn’t felt in a long time: my own life.

That’s what The Ordinary Sacred is. You can't buy it. You can't sell it. It’s not an app. It’s not a coaching program. It’s not a proprietary system.

It’s just a way to live without submitting your life for approval.

It’s the dignity of being a person.

🍕
My goal this year is to make Westenberg and my news site, The Index, my full-time job. The pendulum has swung pretty far back against progressive writers, particularly trans creators, but I'm not going anywhere.

I'm trying to write as much as I can to balance out a world on fire. Your subscription directly supports permissionless publishing and helps create a sustainable model for writing and journalism that answers to readers, not advertisers or gatekeepers.

Please consider signing up for a paid monthly or annual membership to support my writing and independent/sovereign publishing.


Boolean Clashes: Discretionary Decision Making in AI-Driven Recruiting


As artificial intelligence (AI) systems increasingly mediate our social world, regulators rush to protect citizens from potential AI harms. Many AI regulations focus on assessing potentially biased outcomes of AI. But AI systems are always embedded into social contexts and decision-making processes that are typically distributed across a range of human and machine agents. Bias and discrimination can occur anywhere in this human-machine network. Only focusing on potentially biased outcomes of an AI system will not fix the bias and discrimination problems that are integral to the whole human-machine network. Addressing this issue means focusing AI accountability approaches on practices and processes, rather than just machines or just humans.

Let’s take the world of recruiting as a case study. Recruiting has become a frontier of AI-driven automation. AI recruiting tools support search for candidates on job platforms, candidate screening (such as video interviewing or technical interviews to test coding skills), crafting job descriptions, and integrating AI (for example, chatbots) into applicant tracking systems. Using these tools can also produce instances of discrimination. Infamous examples include Amazon’s sexist hiring AI[1] and Facebook’s ageist and gendered job ads.[2] Fueled by the COVID-19 pandemic and demand for remote recruiter-candidate interaction, the human resources (HR) tech market is large, and it continues to grow (projected to $39.90 billion by 2029).[4]

Even as particularly problematic tools are retired, issues of technology-mediated and AI-accelerated bias and discrimination persist. AI tools used in candidate assessments (such as interviews or tests) are prone to error, often disadvantage certain populations,[6] or are based on pseudo-scientific constructs.[7] Regulators are paying heightened attention to the use of AI in recruiting and employment, with influential regulation focusing explicitly on AI in HR as a “high risk” area of AI deployment.[3]

But discrimination that persists in HR cannot be attributed solely to the AI. It is the result of a complex sociotechnical system that includes both AI and the many people engaged in HR processes and practices. In recruiting alone, this includes sourcing specialists, talent acquisition managers, recruiters, hiring managers, HR administrators, and others, who interact with and potentially make decisions about candidates. Various AI systems and other technologies are spread across that network of actors. What is needed to mitigate potential discrimination and harm is a closer look at the professional practice of recruiting, how recruiting professionals use and make sense of AI systems, and how this affects their discretionary decision making.

Keeping It Old School: The Persistence of Boolean Search

In low-volume recruiting (that is, recruiting from a scarce talent pool so finding candidates is hard), recruiters’ traditional professional practice revolves around Boolean search. When searching for talent in databases, they assemble the specifications of the job into what they anticipate will be a powerful Boolean string. A professionally crafted Boolean search string designed to locate a computer programmer who has experience with a particular group of programs and possesses leadership skills might appear as shown in Figure 1.

Figure 1.  Real-world Boolean string for sourcing shared by research participant.

The Boolean search method is grounded in binary logic with a simple premise that statements can only be true or false. It has transcended its mathematical origins to become a cornerstone of information retrieval writ large (as anyone who has been taught to use a library catalog knows). Boolean search allows users to express the relationship between keywords in a search, rather than just the presence of the keywords (see Figure 2). Key for this are the three operators AND, OR, and NOT. Using the AND operator narrows the search by including only the results that contain all the specified keywords. The OR operator broadens the search to include results that contain either of the chosen keywords. The NOT operator excludes results that contain the keyword following it.

Figure 2.  Basic Boolean logic (source: Jakub T. Jankiewicz, Wikimedia Commons).
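The operator semantics described above can be sketched in a few lines of Python. This is a toy illustration, not any recruiting platform's actual API; the candidate profiles and keywords are invented for the example.

```python
# Toy Boolean candidate matching: each profile is a set of keywords.
# AND narrows (all required keywords), OR broadens (any one of a
# group), NOT excludes (none of the listed keywords).

def matches(profile: set[str], required: list[str],
            any_of: list[str], excluded: list[str]) -> bool:
    """True only if the profile satisfies every clause of the query."""
    has_all = all(kw in profile for kw in required)        # AND
    has_any = any(kw in profile for kw in any_of)          # OR
    has_none = not any(kw in profile for kw in excluded)   # NOT
    return has_all and has_any and has_none

candidates = {
    "alice": {"python", "java", "leadership"},
    "bob":   {"python", "intern"},
    "carol": {"java", "leadership"},
}

# Roughly: leadership AND (python OR java) NOT intern
hits = [name for name, profile in candidates.items()
        if matches(profile, required=["leadership"],
                   any_of=["python", "java"], excluded=["intern"])]
print(hits)  # → ['alice', 'carol']
```

Note that the outcome is fully determined by the query: a candidate either satisfies every clause or is excluded, with nothing in between.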

Boolean logic operates within recruiters’ minds as they carefully select the most fitting keywords for the role they are trying to fill. Working to match job specifications with ideal candidates, they turn Boolean search across vast candidate databases (such as LinkedIn) into an epistemology—a way of knowing and understanding the world of potential hires.[8]

Constructing a Boolean search string for finding fitting job candidates is not merely a technical exercise. It is a labor-intensive and iterative process that demands creativity, analytical rigor, and often years of experience. Typically, recruiters invest considerable time and effort in iteratively refining their Boolean search strings. Each keyword selection, operator placement, and logical structure acts as a deliberate choice, designed to surface the “right” candidate profiles. This process transcends mere keyword lists; it demands an iterative dance between logic and intuition, honed through experience and a deep knowledge of the target talent pool. For a recruiter, finding the “perfect” Boolean string is like finding a vein of gold. It can vastly improve efficiency and efficacy of candidate search for a specific role, or type of role. Boolean search allows recruiters to iteratively adapt their queries in real-time based on the feedback provided through the search engine results. This is where recruiters can exercise the discretionary decision-making power that is the essence of their own job: making the decision on who is the “right” candidate.

In traditional, non-AI-driven information retrieval by way of Boolean search, recruiters can easily discern the relationship between the keywords expressed in the Boolean string and the search results. This gives them discretionary room to maneuver. They can rely on the search engine faithfully delivering to their query, and they can predict the effects of tweaks in their Boolean expression. In other words, traditional Boolean search provides the kind of transparency recruiters require for the discretionary decisions that are specific to their profession.

AI-Driven Search and Boolean Epistemology

In AI-driven candidate search, however, the system is not faithfully delivering on the keyword relationship expressed in the Boolean string. AI systems, including generative AI systems that respond to prompts, are calibrated to produce statistically probable outputs based on a search or prompt (as well as various unknown factors, such as previous search behavior), rather than the precise keyword relationship. Here, the system interprets the keywords in ways that are not discernable (and therefore actionable) by recruiters. For example, a recruiter may include the term “New York City” in the string because they need a candidate who is based in New York City for tax reasons. The AI may interpret this in undesirable ways and, for example, suggest candidates as “most relevant” (and ranked at the top of the search results) who are based in Hoboken, NJ, USA. A new search with the exact same Boolean expression run a few hours later may show candidates based in the Hudson Valley, NY, USA.

It remains unclear to the recruiter how and why the AI system powering the search made this leap. The Boolean epistemology that recruiters traditionally deploy affects how they make sense of AI and influences if and where mistrust and potential bias manifest in HR. The “interpretive lift” undertaken by the AI system is palpable but never consistent or squarable with the professional epistemology recruiters use, and it curbs the discretionary decision space available to them. They cannot tweak the keywords to better understand the effects of each one on the search result. In AI-driven search, these causal effects cannot be known by recruiters. In other words: Boolean epistemology and AI epistemology clash.
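The contrast between the two epistemologies can be made concrete with a toy sketch. All candidates, locations, and relatedness scores below are invented for illustration; real systems use opaque learned models, not a lookup table. A strict Boolean filter returns exactly the profiles that satisfy the query, while an AI-style ranker scores related terms too, which is how a query for "New York City" can surface Hoboken.

```python
# Invented relatedness scores standing in for a learned model.
RELATED = {
    ("new york city", "new york city"): 1.0,
    ("new york city", "hoboken"): 0.8,
    ("new york city", "hudson valley"): 0.7,
    ("new york city", "chicago"): 0.1,
}

candidates = {
    "dev1": "new york city",
    "dev2": "hoboken",
    "dev3": "hudson valley",
    "dev4": "chicago",
}

def boolean_filter(query: str) -> list[str]:
    """Strict keyword match: a candidate is in or out, nothing in between."""
    return [c for c, loc in candidates.items() if loc == query]

def ai_rank(query: str) -> list[str]:
    """Rank all candidates by an opaque relatedness score, highest first."""
    return sorted(candidates,
                  key=lambda c: RELATED.get((query, candidates[c]), 0.0),
                  reverse=True)

print(boolean_filter("new york city"))  # → ['dev1']
print(ai_rank("new york city"))         # → ['dev1', 'dev2', 'dev3', 'dev4']
```

The recruiter can reason backward from the Boolean result to the query; the ranked list offers no such handle, because the scores that produced it are invisible and may shift between runs.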

Navigating the Chasm

The clash between these two epistemologies leads recruiters to mistrust the AI systems that their own employers often require them to use. Recruiters are acutely aware of the “epistemological clash.” They know that, through the machine learning feedback loop (in which data generated through the interaction with an AI system re-enters the system and affects its predictions), their interactions with AI-driven search engines are recorded and affect subsequent search results. To preserve their discretionary decision-space (which is central to their professional identity), recruiters sometimes try to “neutralize” the AI-driven search system or “confuse” it. They may input a vast range of different Boolean search strings, save all results in a separate spreadsheet, and manually comb through them. They may also manually infer features they deem important rather than relying on the machine to do it. For example, they may infer gender or racial identity from location or educational background to try to ensure a diverse candidate pool and avoid bias and discrimination.

Viewed as a larger socio-technical work system, recruiters’ interactions with AI-driven search tools reclaim discretionary capacity and allow them, not machines, to make decisions about candidates. This involves substantial work as Boolean searches must be meticulously composed and continuously tweaked, which reduces the alleged time-saving value of AI systems. It also demonstrates how AI-driven recruiting systems may be used in ways that sustain, rather than curb, issues of (human) bias and discrimination.

Thus, it is insufficient to address AI discrimination by looking at the potentially biased outcomes of an AI system. A more nuanced approach is needed as the fields of AI ethics, accountability, and transparency progress, and as AI regulation becomes more common. This becomes particularly important as generative AI systems enter the HR space, making the AI’s interpretation of search commands or prompts even less transparent and adding the risk of “hallucinations.”

Understanding how professional discretion is affected by new forms of AI-driven automation, within and beyond HR, is extremely important. We must treat the black box of AI as a socio-technical phenomenon in which professional epistemologies and practices clash with hidden AI functionalities. Concretely, this means integrating work practices and decision-making processes into AI accountability efforts. Only by taking this larger systems view can we avoid the “many hands” problem that makes it so hard to identify who is responsible for the harms that computer systems can cause.[5] Centering what people are doing and how—including with machines—rather than treating machines as the sole focus of regulatory attention, can help address the continuation of human-machine bias.

Conclusion

AI functionalities clash with the Boolean epistemology of candidate search in professional recruiting. This encourages human intervention and enables continued employment bias and discrimination. Employment fairness is of enormous ethical importance, but HR recruitment is just one of many areas of life where AI has been implicated in bias and discrimination. Focusing solely on AI opacity as the cause of bias and discrimination misses the fundamental socio-technical nature of the phenomenon and points to ineffectual solutions.

We are in urgent need of more empirically grounded research on how AI is actually used so that we understand and address where and how bias and discrimination can occur in the distributed human-machine decision-system networks that influence important life outcomes. This is increasingly urgent with the rise of generative AI technologies such as ChatGPT and their rapid adoption. Focusing accountability approaches on practices, processes, and technologies rather than just machines or just humans, is a crucial first step toward building a just society.
