
What Would Richard Feynman Make of AI Today?


“The first principle is that you must not fool yourself—and you are the easiest person to fool,” Richard Feynman said in a 1974 commencement address at Caltech. He wasn’t speaking as a lofty philosopher but as a working physicist, offering a practical guide to daily work. He had little patience for prestige, authority, or explanations that couldn’t be tested. “It doesn’t matter how beautiful your theory is, how smart you are, or what your name is,” he liked to say. “If it doesn’t agree with experiment, it’s wrong. In that simple statement is the key to science.” Students often giggled at first, but then fell silent as it sank in.


Feynman was a man of contrasts—lively, irreverent, and deeply suspicious of explanations that sounded good but didn’t cash out in practice. He prized curiosity and had no tolerance for nonsense. And when things got too stuffy, he preferred playing the bongo drums. Feynman had a strong instinct to understand things by doing them, not by reading about them. Just as with physics, he didn’t want descriptions; he wanted participation. Curiosity didn’t need justification. And yes, he won the Nobel Prize in Physics. He invented a visual way to understand how light and matter interact—diagrams that let physicists see complex processes at a glance.

As a teenager, Feynman repaired radios without schematics. In his last act as a public figure, he exposed the cause of the 1986 Space Shuttle Challenger disaster. Despite being ill with cancer, he cut through NASA’s flawed reasoning, insisted on speaking to engineers rather than administrators, and demonstrated O-ring failure with a simple glass of ice water on live television. In his mind, fixing radios and explaining the Challenger disaster were the same problem. In both cases, authority had obscured reality, and a simple experiment was enough to reveal it. That way of thinking—forged long before machine learning and neural networks—turns out to be uncomfortably relevant today.



If Feynman were alive and wandering through our technological landscape, it’s hard to imagine him standing on a stage at an AI product launch. He disliked hype. He was suspicious of grand promises delivered before the details were understood—of applause standing in for questions. Instead of unveiling a finished product, he’d likely say something like, “I don’t really know what this thing does yet—that’s why I’m interested in it.” He might take the demo apart and break it, before fixing it and putting it back together. That alone would drain the room of hype and dampen the mood of investors and stakeholders hoping for a smooth pitch.

It is easier to imagine him sitting in the last row of a darkened auditorium, notebook in hand, watching carefully. On the screen, colorful animations glide past: neural networks glowing, data flowing, arrows pointing confidently upward, unencumbered by error bars, a demo that works beautifully, provided nothing unexpected happens. A speaker explains that the system “understands language,” “reasons about the world,” “discovers new knowledge.” Each claim is met with nods and polite applause. You can see Feynman being tempted to rise and ask a deceptively simple question: How do you know?


But Feynman, new to this spectacle, would wait. He would listen for the moment when someone explained what the machine does when it fails, or how one might tell whether it truly understands anything at all. He would notice that the demo works flawlessly—once—and that no one asks what happens when the input is strange, incomplete, or wrong. He would hear words doing a great deal of work, and experiments doing very little.

UTTER HONESTY: In his 1974 commencement speech at Caltech, Richard Feynman told students scientific integrity depends on “utter honesty.” Experiments should “try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.” Credit: Wikimedia Commons.

Artificial intelligence is being presented to the public as a transformative force—one that promises to revolutionize science, medicine, education, and creativity itself. In many ways, these claims are justified. Machine learning systems can detect patterns at scales no human could manage: predicting the three-dimensional structures of proteins, screening images of tissue and cells for changes, identifying rare astronomical signals buried in noise, and generating fluent text or images on demand. These systems excel at sifting through oceans of data with remarkable speed and efficiency, revealing regularities that would otherwise remain hidden.


Feynman would not have dismissed any of this. He was fascinated by computation and simulation. He helped pioneer Monte Carlo methods (simulating many possible outcomes at random and averaging the results) at Los Alamos National Laboratory, and computational approaches to quantum mechanics. Used well, AI can help scientists ask better questions, explore larger parameter spaces, and uncover patterns worth investigating. Used poorly, it can short-circuit that process—offering answers without insight, correlations without causes, and predictions without understanding. The danger is not automation itself, but the temptation to mistake impressive performance for understanding.
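The core idea of a Monte Carlo method fits in a few lines of code. Here is a minimal sketch in Python (the textbook exercise of estimating π, not Feynman’s Los Alamos calculations): throw random points at a unit square and count how many land inside the quarter circle.

```python
import random

def estimate_pi(n_samples: int = 1_000_000) -> float:
    """Estimate pi by sampling random points in the unit square
    and counting the fraction that fall inside the quarter circle."""
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # Quarter-circle area / square area = pi / 4
    return 4.0 * inside / n_samples

print(estimate_pi())  # ~3.14, and the estimate tightens as n_samples grows
```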

Much of today’s artificial intelligence operates as a black box. Models are trained on vast—often proprietary—datasets, and their internal workings remain opaque even to their creators. Modern neural networks can contain millions, sometimes billions, of adjustable parameters. One of Feynman’s contemporaries, John von Neumann, once wryly observed: “With four parameters I can fit an elephant, and with five I can make his tail wiggle.” The metaphor warns of mistaking noise for meaning. Neural networks produce outputs that look fluent, confident, sometimes uncannily insightful. What they rarely provide is an explanation of why a particular answer appears, or when the system is likely to fail.
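Von Neumann’s warning is easy to reproduce in miniature. The sketch below (an illustration of the general pitfall, not his actual elephant construction) gives a polynomial exactly as many parameters as there are data points; it fits the noise perfectly while predicting nothing outside the data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ten noisy samples of a simple underlying trend (y = x).
x = np.linspace(0.0, 1.0, 10)
y = x + rng.normal(scale=0.1, size=x.size)

# A degree-9 polynomial has ten coefficients: enough to pass
# (essentially) exactly through every training point.
coeffs = np.polyfit(x, y, deg=9)

print(np.abs(np.polyval(coeffs, x) - y).max())  # ~0: a "perfect" fit to noise
print(np.polyval(coeffs, 1.2))                  # runs wild just outside the data
```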

This creates a subtle but powerful temptation. When a system performs impressively, it is easy to treat performance as understanding, and statistical success as explanation. Feynman would have been wary of that move. He once scribbled on his blackboard, near the end of his life, a simple rule of thumb: “What I cannot create, I do not understand.” For him, understanding meant being able to take something apart, to rebuild it, and to know where it would break. Black-box systems invert that instinct. They invite us to accept answers we cannot fully reconstruct, and to trust results whose limits we may not recognize until something goes wrong.


Feynman had a name for this kind of confusion: “cargo cult science.” In fact, that was the title of his 1974 commencement address. He described cargo cult science as research that imitates the outward forms of scientific practice—experiments, graphs, statistics, jargon—while missing its essential core.

The term came from South Pacific islanders who, after World War II, built wooden runways and bamboo control towers in the hope that cargo planes would return. They reproduced the rituals they had observed, down to carved headphones and signal fires. “They follow all the apparent precepts,” Feynman said, “but they’re missing something essential, because the planes don’t land.” The lesson was not about foolishness, but about misunderstanding. Without knowing why something works, copying its surface features is not enough.



The risk with AI is not that it doesn’t work. The risk is that it works well enough to lull us into forgetting what science is for: not producing answers but exposing ideas to reality. For Feynman, science was about learning precisely where ideas fail. When performance becomes the goal, and success is measured only by outputs that look right, that discipline quietly erodes.

He loved technology and new tools, especially those that made it easier to test ideas against reality. He created visual tools, like his Nobel-Prize-winning diagrams, that simplified complex interactions without hiding their assumptions. But he was always careful to distinguish between instruments that help us probe nature and systems that merely produce convincing answers. Tools, for Feynman, were valuable not because they were powerful, but because they made it easier to see where an idea broke.

In Feynman’s view, science does not advance through confidence, but through doubt, by a willingness to remain unsure. Scientific knowledge, he argued, is a patchwork of statements with varying degrees of certainty—all provisional, all subject to revision. “I would rather have questions that can’t be answered,” Feynman said, “than answers that can’t be questioned.” This is in stark contrast to venture capital, which rewards bold claims. Corporate competition rewards speed. Media attention rewards spectacle. In such an environment, admitting uncertainty is costly.


But for Feynman, uncertainty was not a weakness; it was the engine of progress. “I think it’s much more interesting to live not knowing, than to have answers which might be wrong,” he said.

It’s tempting to think these concerns belong only to academics. But artificial intelligence is no longer confined to laboratories or universities. It shapes what people read and watch, how students are assessed, how medical risks are flagged, and how decisions are made about loans, jobs, or insurance.

In many of these settings, AI systems function less as tools than as instruments of institutional opacity—systems whose authority exceeds our ability to question them. Their outputs arrive with an air of objectivity, even when the reasoning behind them is clouded. When a recommendation is wrong, or a decision seems unfair, it is often difficult to know where the error lies: in the data, the model, or the assumptions embedded long before the system was deployed.


In such contexts, the discipline of not fooling ourselves becomes more than an academic virtue. When opaque systems influence real lives, understanding their limits—knowing when they fail and why—becomes a civic necessity. Trust depends not only on performance but on accountability: whether a system’s limits are understood, questioned, and made visible when it matters.

Stepping back, the issue is not whether AI will transform science. It already has. The deeper question is whether we can preserve the values that make scientific knowledge trustworthy in the first place. Feynman’s answer, were he here, would likely be characteristically simple: slow down, ask what you know, admit what you don’t, and never confuse impressive results with understanding. History has shown more than once that scientific knowledge can race ahead of wisdom. Several physicists of Feynman’s generation would later reflect on having learned what could be done long before learning what should be done—a realization that arrived too late for comfort.

In 1955, Feynman gave a talk called “The Value of Science” at a National Academy of Sciences meeting at Caltech. He said that science is a discipline of doubt, of remaining free to question what we think we know. His central message was not that science produces miracles but that it teaches a way of thinking that resists dogma, false certainty, and self-deception. He opened not with equations or authority, but with a Buddhist proverb: “To every man is given the key to the gates of heaven; the same key opens the gates of hell.” Science, Feynman said, is that key, a tool of immense power. It can open both gates. Which way it turns depends on you.


More on Richard Feynman from Nautilus

What Impossible Meant to Feynman

The Day Feynman Worked Out Black-Hole Radiation on My Blackboard


Another Side of Feynman

Lead image: Tamiko Thiel / Wikimedia Commons


Talk About How Much Money You Make


My first tech job offer blew my mind. I had hoped I would make that much money one day, but I never thought I would with my first job. A few months into the job, I went on my first work trip. Another engineer and I spent a week at a client’s office, and we would spend each evening exploring the client’s city. After spending that time together, I felt like I could confide in him. As we were flying home, I nervously leaned over and asked how much he made. He quickly told me his pay; to my surprise, he was making 25% more than me for the exact same job.

I was frustrated with my company, and I immediately started researching what other engineers were making in tech. While I was well paid by non-tech standards, I was making less than half of what security engineering roles at comparable firms were making. I began applying for higher-paying jobs. My company claimed to be one of the compensation leaders in the industry, but the facts said otherwise.

I used several tools to see who the highest-paying companies were, and what they offered: Levels.fyi (a verified salary-sharing website for tech workers), Blind (an anonymous forum for tech employees), Fishbowl (a similar app that is more focused on consulting, banking, and law), and Reddit. These tools will rarely give you an exact number, but they are incredible at giving you expected ranges that can guide your negotiating strategy. A year and a half later, I got a job at a competing consulting firm for 50% more than I was making at my first job. Their initial offer was “merely” 30% higher than my prior pay, but my research and conversations with coworkers allowed me to negotiate a significantly higher offer.


With my new role, I was in a position opposite the one I had been in on that flight. My research and negotiation meant that I was now one of the highest-paid people at my level. I continued to have discussions about pay with my coworkers, and often shared the data I found with them. This led to several people negotiating higher bonuses or moving to higher-paying roles at other companies. It was incredibly rewarding every time I learned that I had helped someone who was being underpaid.

While I made a lot more at this second job, it still wasn’t a top-paying company. I went on to make two more job changes in the next two years, both times getting over 50% increases compared to the previous role. For one of those offers, I was able to convince the company to improve my compensation by 20% over their initial offer, and this was achieved simply by knowing my market value. When they called to give me their offer, I calmly told them I was hoping for X, because I was seeing that engineers at competing companies were making X for similar roles. Three days later, I had an offer for X in my email inbox. I very rarely had competing offers during these discussions, and when I did, they were so low that it wasn’t helpful to disclose them in negotiations.

There is a common belief that you need a competing offer to negotiate higher pay. This reflects a scarcity mindset, combined with a fundamental misunderstanding of the art of negotiating. The value of a counter-offer isn’t inherent; rather, it is a direct representation of your market rate. Knowing that rate ahead of time can give you confidence in how you approach your career. Many people don’t do any research, or they use outdated resources like Glassdoor (whose data often lags behind market rates because it lacks verification and tends to skew toward older submissions); as a result, a job offer is the only chance they get to see their value. Instead of relying entirely upon competing offers, people could discuss compensation with coworkers or peers in the industry, thereby obtaining more reliable data than the initial job offer or an application form with an “expected salary” field.

Similar to telling a company “sorry, but company Y offered me 20% more,” you can tell the recruiter that your research indicated similar companies tend to offer 30% more. You can even break down the structure of base salary versus stock or bonus-based compensation, as many companies make it easier to get approval for non-base-salary increases. Any recruiter you speak with likely has substantially more experience negotiating tech salaries than you do, and knows far better what the market will pay. What can feel like an insanely high total-compensation expectation (usually a mix of salary and stock-based pay, with sign-on and annual bonuses sometimes included) to an engineering candidate new to negotiating could still be an absolute steal for the recruiter. You do not have to worry about looking greedy when you are objectively citing accurate market data. People are often worried that they will be retaliated against for negotiating, but I have never had a company rescind an offer for negotiating, with or without another offer involved. Having competing offers will always be a valuable resource, but you can still approach negotiations confidently without one.


Discussions with peers are especially valuable because there can be information asymmetry. I repeatedly see engineers with highly-sought-after specialties undersell themselves by mistakenly relying on compensation data for more generic roles. A great example is security engineering; my experience has shown me that most top companies value security engineers at a 5-15% premium versus general software engineers at the same level. It makes sense: these security engineers usually have to pass similar interviews to developers, and have additional domain-specific interview rounds on top of that. Security engineering salary data is very sparse on sites such as Levels and Blind, but when I spoke to a coworker, they confirmed that the premium existed. I knew, going forward, that I could take the general software developer pay data and tack on the premium, and recruiters wouldn’t bat an eye.
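Applying that premium is simple arithmetic. A toy sketch with made-up numbers (placeholders, not real market data):

```python
# Hypothetical base range for a general software engineer at the
# target level (placeholder numbers, not actual market data).
base_low, base_high = 180_000, 220_000

# The 5-15% security-engineering premium described above.
low_premium, high_premium = 1.05, 1.15

print(f"Ask range: ${base_low * low_premium:,.0f} - "
      f"${base_high * high_premium:,.0f}")
# Ask range: $189,000 - $253,000
```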

While person-to-person talks are the best, I cannot overstate how valuable online resources are. I attribute tens or even hundreds of thousands of earned dollars to Levels and Blind. One thing to consider, though: there is a selection effect on compensation data posted online, as with any digital forum. People who post are not typical. Like Google reviews, people tend to only discuss pay when it is particularly good or bad. That is why sites like Glassdoor, with a less conscientious user base and limited moderation, tend to have outdated or lower compensation data. The more typical earners who post their pay information, the better it is for everyone. I highly encourage you to pay it forward and share whatever pay data you can after you get an offer.

On the surface, employers benefit when employees don’t communicate about pay (because they can pay them less), but that lack of communication can cause long-term issues with resentment, recruitment, and retention. Having employees that know their own worth increases team stability, and lets people focus on their outputs rather than whether they are being taken advantage of. Transparency, it turns out, serves everyone.

I have been thanked by at least five different coworkers because I was willing to have a potentially awkward conversation that ended up being valuable for both of us. If my friends had not negotiated their offers, and later discovered what their peers were making, they likely would have felt like I did in my first job after that honest talk on the flight. When that happens, any savings the employer makes on compensation tends to be eaten up by low productivity and/or recruitment costs (recruiting an engineer can cost 50-200% of their annual salary) when that employee leaves.

If I had never asked my coworker what he made, I might never have left my first job. I certainly would not have pushed myself so hard to apply to jobs and prepare for interviews. Many people dread discussing pay. That mild sense of discomfort is nothing compared to the rush of a successful negotiation, and the satisfaction of knowing you are valued and rewarded for your skills and knowledge.


The pop quizification of knowledge


“A troop.”

“Of how many persons?”

“Twenty men.”

“What sort of men?”

“Sixteen pioneers, four soldiers.”

“How far distant?”

“Five hundred paces.”

When Alexandre Dumas published the first serialisation of The Three Musketeers in 1844, he was paid by the line. The story goes that to make the most of the deal, he therefore added the character of Grimaud, a servant who spoke only in short, clipped sentences. Thanks to exchanges like the above, between Grimaud and the similarly verbally economical Athos, Dumas boosted the money he made per word.

Unfortunately for Dumas, the publisher eventually caught on, and declared that short lines didn’t count. In response, Dumas promptly cut Grimaud from the story.

The way we assess progress can sometimes incentivise people in unhelpful ways. ‘When a measure becomes a target, it ceases to be a good measure’ goes the famous adage from economist Charles Goodhart.

As with Dumas, careers can be built on measures that have become targets. In his book The Unaccountability Machine, Dan Davies points out that the outrageously lucrative academic publishing industry emerged as a useful way of judging academics. Getting papers into ‘good’ journals – where work is more likely to be read and cited – is often the target, rather than journals being judged ‘good’ because they’ve published the best work.

As Davies puts it in the book:

The model persists because somewhere along the way, the journal industry managed to insert itself into the staff promotion and recruitment function of universities all over the world. In doing so, it created an extremely useful accountability sink for senior academics and managers of universities, while also solving an awkward and unpleasant interpersonal problem for them – how to judge the quality of scholarship without offending the scholars.

Chasing answers

A few months ago, OpenAI published a paper digging into why AI models hallucinate so readily. Their conclusion was that some of the blame lay with incentives:

Hallucinations persist partly because current evaluation methods set the wrong incentives. While evaluations themselves do not directly cause hallucinations, most evaluations measure model performance in a way that encourages guessing rather than honesty about uncertainty.
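The incentive is plain arithmetic. Here is a toy sketch with assumed scoring rules (illustrative numbers, not the actual benchmarks the paper analyses): under accuracy-only grading, a guess with any chance of being right beats saying “I don’t know” in expectation.

```python
def expected_score(p_correct: float, wrong_penalty: float) -> float:
    """Expected score for answering, given the chance of being right."""
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

ABSTAIN = 0.0  # "I don't know" scores zero under both schemes

for p in (0.2, 0.5, 0.8):
    accuracy_only = expected_score(p, wrong_penalty=0.0)
    with_penalty = expected_score(p, wrong_penalty=1.0)
    print(f"p={p}: guess scores {accuracy_only:+.2f} (accuracy-only), "
          f"{with_penalty:+.2f} (penalised); abstain scores {ABSTAIN:+.2f}")

# Accuracy-only grading rewards guessing at ANY p > 0, so a model
# tuned to such evaluations learns to guess rather than admit
# uncertainty. A penalty for confident errors flips the incentive
# whenever p < 0.5.
```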

In other words, LLMs often behave like someone trying only to ace the final exam, rather than think deeply about a problem. This is not just a behavioural quirk of LLMs. There seems to be a growing attitude among humans that the point of reading is merely to be able to pass a pop quiz at the end. Why read a book when you could just read a crib sheet of key bullet points? A few years ago, that might have meant using one of those summarisation apps that extract insights (but don’t pay authors). Now it typically means asking an LLM (which is much the same thing).

In a recent piece, one writer observed that conversations can be subject to similar efficiency constraints in the land of tech:

In the Bay, most gatherings have the sweaty air of Purpose. Discussions are held to uncover new information, not because it is good to be around each other. Conversations feel like podcasts and the hosts are not funny.

Knowledge is increasingly being redesigned around extractable outputs: test answers, summaries, bullet points. But this misses the point. Books and conversations aren’t just tokens to be processed efficiently by our eyes and ears. They are journeys in thinking and experiencing. Journeys that can bring the serendipity and struggle of deeper understanding.

If you’ve ever read a great novel, you’ll know what it feels like to not want it to end. If there’s a sadness at the final page, it’s because a powerful experience is over. Not because you now have to go and take a quiz.


Cover image: Ed Robertson





Why Did the Rubber Chicken Cross the Road?


For those about to “bawk,” I salute you. I might be navigating middle age with dad jokes, but Byard Duncan had a cluckin’ great idea for weathering his existential dread: serious training with a coop of rubber chickens, one of which he would attempt to throw more than 115 feet, to claim a place in the Guinness Book of World Records. After reading, I wouldn’t challenge Duncan’s throwing arm or his sense of humor.

Just as there is an art to throwing rubber chickens, so too is there an art to practicing the act without detection. In the lead-up to my Guinness attempt, I had taken to training in a local park’s tree-lined corner. The timing of these sessions was critical: I needed to get my reps in before elementary-age children arrived at the after-school program nearby and started asking tough questions: Who are you? Or: Why are you throwing rubber chickens? Or: Do you really think this will cure your nagging sense of abstract forfeit? In times of doubt, I called to mind advice from Cincinnati Bengals quarterback Joe Burrow, who years ago warned the next generation of athletes against turning their workouts into TikTok fodder. “Work in silence,” he advised. “Don’t show everybody what you’re doing.”


College Admissions are Nonsense


College admissions are nonsense.

Not because colleges are malicious or because admissions officers are stupid, but because the process itself pretends to have a level of precision and moral seriousness it simply does not possess.

There is a study I would love to run someday if I ever get a PhD (I probably won’t).

Take a set of elite colleges and ask their admissions offices to re-evaluate last year’s applicants completely blind: no names, no legacy markers, no institutional memory of who actually enrolled. Then compare the newly admitted class to the one they actually chose. My guess is the overlap would be far closer to 50 percent than 100 percent.

If that’s anywhere close to correct, it tells us something important. The problem isn’t that admissions officers make occasional mistakes. It’s that “holistic review” cannot reliably distinguish among thousands of highly qualified applicants, even when applied by the same institution to the same pool. The differences between who gets in and who doesn’t are not fine judgments of merit.
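You don’t need the PhD to get a feel for the hypothesis. A toy simulation, with every number assumed (a made-up applicant pool, and reviewer noise comparable in size to the true differences between applicants), shows how two honest reviews of the same pool can agree on only a fraction of the admitted class.

```python
import random

def expected_overlap(n_applicants=10_000, n_admits=1_000,
                     quality_sd=1.0, reviewer_sd=1.0, trials=20):
    """Toy model: each applicant has a true 'quality'; each review
    scores it with independent reviewer noise. Admit the top
    n_admits twice and measure how much the two classes overlap."""
    overlaps = []
    for _ in range(trials):
        quality = [random.gauss(0, quality_sd) for _ in range(n_applicants)]

        def admit():
            noisy = sorted(range(n_applicants), reverse=True,
                           key=lambda i: quality[i] + random.gauss(0, reviewer_sd))
            return set(noisy[:n_admits])

        first, second = admit(), admit()
        overlaps.append(len(first & second) / n_admits)
    return sum(overlaps) / trials

# When reviewer noise rivals the true differences between applicants,
# two independent reviews of the same pool typically agree on far
# less than 100% of the admitted class.
print(expected_overlap())
```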

Yet, colleges refuse to acknowledge this arbitrariness. Instead, they force students to behave as if every essay sentence, extracurricular choice, and leadership title might be decisive. This fuels an escalating arms race — one that rewards those with time, money, and inside knowledge. It also warps the lives of teenagers, who use their free time to build resumes rather than explore their interests, and become risk-averse learners.

There is a simpler and more honest alternative: set a clear academic threshold for readiness and then admit students by lottery.1



How Holistic Admissions Processes Reward Wealth, Not Merit

One of the most comprehensive pieces of research on the negative impacts of holistic admissions on high-achieving, low-income (HALO) students comes from the Jack Kent Cooke Foundation’s True Merit report.

Their findings reveal that the system of preferences and discretionary criteria used by admissions officers at selective colleges unfairly puts HALO students at a disadvantage against applicants with social and economic advantage.

Example #1: Donor and Legacy Preferences

  • Donor and legacy preferences are among the most straightforward ways wealth is converted into admissions advantage, and they illustrate the central flaw of holistic admissions more clearly than almost any other practice. In theory, these preferences are justified as tools for institutional stability or alumni engagement. In practice, they disproportionately benefit students from affluent families who already enjoy advantages across every other dimension of the application process.

Example #2: Advanced Coursework and GPAs

  • Federal education data shows that low-income students are less likely to have access to AP/IB classes — a key factor in the “academic index” scores selective colleges use to screen large applicant pools. Because students in these classes can earn GPAs above 4.0, this gives an edge to students from affluent schools.

Example #3: Standardized Test Advantages

  • Wealthier students are much more likely to enroll in test preparation courses and take the SAT/ACT multiple times, inflating their scores. While research on the benefits of test preparation courses is mixed, recent research from Harvard University shows that retaking the SAT substantially improves scores and increases four-year college enrollment rates. Often, only wealthy students are able to take advantage of these opportunities.2

Example #4: Demonstrated Interest

  • Selective colleges reward students who visit campus or engage in recruitment events. One reason for this is that it increases the institution’s yield rate, i.e. the percentage of applicants granted admission that actually enroll. These activities are often inaccessible to low-income students due to travel costs or work obligations.

Example #5: Early Decision/Action

  • Applicants who submit early decision or early action applications have significantly higher admission rates. However, low-income students are much less likely to apply early because they must compare financial aid packages before committing. This puts them at an admissions disadvantage through no fault of their own.

Example #6: Student Essays

  • A 2021 study from Stanford’s Center for Education Policy Analysis found that application essays were more strongly correlated with household income than SAT scores, meaning essays encode socioeconomic status through vocabulary, themes, and narrative style. This matters because essays are treated as signals of depth, curiosity, and fit, when, in reality, they often signal access to coaching and cultural knowledge about what admissions officers want to hear.

Example #7: Letters of Recommendation

  • In theory, letters of recommendation provide admissions officers with a character assessment of the applicant. In practice, however, they reward students who attend schools with low guidance counselor caseloads, experienced college-savvy teachers, and institutional familiarity with selective admissions norms. In contrast, high-achieving low-income students often attend schools where counselors manage hundreds of students, teachers are stretched thin, and recommendation letters are brief, generic, or procedural due to sheer caseload.

Example #8: Extra-Curricular Activities

  • Extracurricular activities are often treated as evidence of passion, leadership, and initiative, but in holistic admissions, they function largely as proxies for time and money. Many of the activities admissions offices reward — sport travel teams, research internships, unpaid summer programs, or sustained leadership in clubs — require flexible schedules, transportation, and resources.

The Holistic Admissions Rat Race

Holistic admissions changes how teenagers spend their lives.

When colleges insist that every element of an application might matter, they create a system in which marginal effort always seems potentially decisive. This is the core mechanism of the admissions rat race.

If there is no clear bar and no honest admission that many applicants are indistinguishable from each other, students are rational to assume that one more activity or one more line on a resume could be the difference between acceptance and rejection.

This mindset reshapes how students learn, think, and spend their time.

Teenagers become risk-averse learners, avoiding challenging questions or courses where an A is not guaranteed. During my limited time as a teacher early in my career at a private school in the suburbs of Chicago, I routinely witnessed bright students choose easier assignments even when I offered more potential points for more difficult questions. Why risk the hit to a GPA to scratch the itch of curiosity?

This mindset also limits teenagers’ time for creative exploration. Instead, they participate in activities they do not enjoy but that are useful for building a competitive application.

All of this can take a significant toll. Research suggests that when students are overloaded with academic and extracurricular commitments, the marginal cognitive benefit of extra effort flattens out while the negative outcomes (i.e. anxiety, sleep deprivation, etc.) rise. In other words, students are not just “working hard”: they are over-optimizing for an opaque and hyper-competitive signals game, at the cost of rest, curiosity, and mental health.

When admissions decisions hinge on subtle, subjective differences, students internalize scarcity and anxiety as foundational truths about education, rather than as byproducts of a flawed selection process.


Why a Lottery is Fairer and Healthier

A lottery admissions system would be fairer for HALO students because it dampens the advantage of wealth. By setting a clear academic threshold, such as a minimum ACT/SAT score, colleges would identify a pool of students who can succeed. Once that bar is met, selection by lottery acknowledges an uncomfortable, but honest truth: beyond readiness, differences among applicants are often too small, too subjective, or too context-dependent to rank meaningfully. For HALO students, this matters enormously.
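Mechanically, the proposal could hardly be simpler. A minimal sketch, with hypothetical scores, thresholds, and seat counts:

```python
import random

def lottery_admit(applicants, threshold, seats, seed=None):
    """Admit by lottery: every applicant at or above the readiness
    threshold enters the draw with equal odds."""
    qualified = [a for a in applicants if a["score"] >= threshold]
    rng = random.Random(seed)
    if len(qualified) <= seats:
        return qualified  # no scarcity: every qualified applicant gets in
    return rng.sample(qualified, seats)

# Hypothetical pool: 5,000 applicants with composite readiness scores.
pool = [{"name": f"applicant_{i}", "score": random.randint(900, 1600)}
        for i in range(5_000)]
admitted = lottery_admit(pool, threshold=1300, seats=500, seed=42)
print(len(admitted))  # 500 seats, drawn at random from the qualified pool
```

Rejection in this scheme is literally a random draw, which is exactly the point: it reflects scarcity, not deficiency.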

A lottery would also change how many teenagers live their lives. When students know that meeting a clear academic bar is sufficient, the incentive to endlessly optimize disappears. Free time no longer has to be admissions-relevant; learning no longer has to be risk-free. Students can take harder classes without fearing that a single low grade will sink them, explore interests without worrying whether they count, and spend time working, resting, or helping family without feeling they are losing an invisible race.

More importantly, a lottery tells students that they didn’t personally fail if they don’t get into their first-choice school. In a lottery system, rejection reflects scarcity, not deficiency. Students who meet a clear academic bar can understand that they were qualified, but not selected — and not secretly judged and found wanting. Making luck explicit strips rejection of its moral weight and weakens the idea that every outcome is a referendum on one’s self-worth.

From Here to There

I’ve worked in public policy long enough to know that this post will not suddenly convince selective colleges to abandon admissions criteria they’ve relied on for decades. But I do actually want to change how the college admissions process works — and I have a track record of doing so when systems become indefensible.3 If we care about fairness rather than rhetoric, there are concrete steps we can take through public policy and sustained advocacy to make the admissions process more like a lottery.

Example #1 - Prohibit Legacy and Donor Preferences in College Admissions

A growing number of states have passed laws prohibiting admissions officers at public universities from considering legacy or donor status when assembling an incoming class. California has gone further, enacting a law in 2025 that extends this prohibition to private, nonprofit colleges and universities operating in the state. That matters. Legacy and donor preferences are among the most explicit ways wealth is converted into admissions advantage, and banning them is one of the rare reforms that improves equity without introducing new distortions. Expanding these laws to more states is a clear step in the right direction.

Example #2 - Repeal Test-Optional/No-Test Policies

This is an area where California has gotten things exactly wrong. In an effort to address equity concerns surrounding standardized tests, the state has instead entrenched an admissions system that disadvantages HALO students. Eliminating the SAT and ACT does not eliminate wealth advantage; it amplifies it by shifting weight onto essays, recommendations, and extracurriculars — all of which are more strongly correlated with family income than test scores. Worse, research shows that high-achieving, low-income students are less likely to submit test scores even when they perform well, meaning test-optional policies systematically suppress one of the few signals that can help them compete.

Example #3 - Participate in the Federal Tax Credit Scholarship Program

Most advocates supported the federal tax credit scholarship program because it allows scholarships to be used for private school tuition. But an under-reported feature is that the program also allows scholarships to cover non-tuition educational expenses, including tutoring, SAT/ACT preparation, and exam fees. For HALO students competing against peers who can take standardized tests multiple times with tailored instruction, these supports are equalizers. Access to high-quality test prep and the ability to sit for exams without financial stress affects outcomes, and this program directly addresses that imbalance.

Example #4 - Pass State-Based Education Expense Scholarships or Credits

If governors or legislatures are uncomfortable participating in the federal program, states can pursue similar goals on their own. State-based education expense tax credits or scholarships — particularly those targeted to low- and middle-income families — can be used to offset the costs of tutoring, test preparation, and other enrichment activities. These policies need not include private school tuition to be effective. Several states, including Illinois, already operate programs along these lines. Properly designed, they reduce the role of wealth at the margins of competition rather than pretending it doesn’t exist.

Example #5 - Pressure Campaigns by Alumni and Donors

Not every problem has a legislative solution. Admissions offices are acutely sensitive to alumni and donor sentiment, and that reality can be leveraged for reform. Coordinated pressure campaigns can push institutions to drop legacy and donor preferences, adopt clearer admissions thresholds, or pilot lottery-based selection among qualified applicants. Colleges often justify inequitable practices as “institutional necessities,” but those arguments weaken quickly when a sufficient number of alumni and donors demand change.

None of these reforms will single-handedly fix college admissions. But, taken together, they would reduce the degree to which wealth masquerades as merit, lower the stakes of resume optimization, and make the process more honest about uncertainty and scarcity. That is not radical. It is what fairness looks like in a system that has spent far too long pretending it can finely rank students when, in reality, it cannot.


Selective college admissions has become an exercise in false precision. Faced with thousands of academically capable applicants, institutions insist on ranking the unrankable, leaning on criteria that quietly reward wealth and inside knowledge while pretending to measure merit. The result is a system that is neither fair nor honest. It is one that disadvantages HALO students, fuels an exhausting rat race, and teaches teenagers that a college admissions outcome is a referendum on their worth.

A lottery system offers a better alternative. By setting a clear academic standard for readiness and then using a lottery to allocate scarce seats, selective colleges would acknowledge what they already know, but rarely say out loud: beyond a certain point, many applicants are equally qualified. This approach is fairer because it limits the returns to wealth at the margins. It is healthier because it changes incentives, allowing teenagers to learn deeply, take intellectual risks, and spend their time on pursuits that matter to them. And, it is more humane, because it makes room for luck as an unavoidable feature of life.

1

If I was in charge, I’d make it a minimum SAT/ACT score.

2

How can you say that selective colleges should only use SAT/ACT scores to select students and then argue that these tests advantage the wealthy? Read until the end to find out.

3

I wrote and passed Illinois’ Accelerated Placement Act, a statewide policy that allows students to enter school early, take above-grade-level coursework, skip grades, and graduate early, after meeting and talking with families who desperately wanted to send their students to public schools but felt like they couldn’t because the schools refused to meet their child’s educational needs.


Milk Is Not $20


I was listening to a surprisingly combative interview with Razer CEO Min-Liang Tan yesterday and amidst all the usual, expected stuff—like his general AI delusions and bizarre evasion of the question "what video game are you playing right now?"—I was struck by one response in particular, about subscription costs.


Asked by The Verge's Nilay Patel whether he truly sees value in subscribing to AI platforms, Tan says he sees value in all kinds of subscriptions. Here's a transcript of the part I'm talking about:

Patel: Are you seeing signs that the AI stuff is gonna be worth paying for that way? I mean, this is the bubble. We’re gonna spend all this money, we’ll forward invest in all this infrastructure. We’re gonna skyrocket the price of RAM and GPUs, and then at the end of the day, people are going to say, “That’s not actually worth the 20 bucks a month.”
Tan: I don’t necessarily see it as AI, per se, but I see the kind of value that I get out of it. So, for example, a ChatGPT subscription, or a Grok subscription, for that matter. I do see value in it, and that’s why I pay for it. And that’s the way I see it. I don’t see myself as paying for AI, per se. I see it as what am I getting out of a chatbot, for example, that can advise me on travel matters, health matters, whatever it is, my day-to-day, and stuff like that. Is that worth 20 bucks to me?
Patel: Because you’re a billionaire, right?
Tan: Sure.
Patel: I’m just saying, the marginal cost is meaningless to you.
Tan: 20 bucks is still 20 bucks, that’s right.
Patel: I’m just saying.
Tan: Right.
Patel: I think for a lot of people, that is meaningful, especially stacked on top of all the other money they pay. Basically, I’m saying, do you see that critique of AI as a bubble? That the investment has not yet delivered the value that will make it so obvious that the investment’s worth it?
Tan: So I see that. I mean, huge amounts of investments are going into it. We are investing in AI, I think, as we speak. But I do see the potential at this point. In many cases, I mean, look at the number of paid subscribers for ChatGPT, for example. People do see the value in terms of whether it’s a chatbot AI, so on and so forth. I do think the potential is going to be realized.

"Because you’re a billionaire" has stuck with me. Tan is one of many billionaires at the vanguard of the AI push, empty men whose only reference point and purpose in life is seemingly to make a line go up for as long as possible. And that push, as it stands, is almost entirely reliant on platforms like ChatGPT and Grok going from free, idle curiosities to something millions of people pay a substantial amount of money for every month.

I am not a billionaire, but I am a guy reliant on people paying a monthly subscription for something people used to get for free, and let me tell you: good luck guys. At its height in the 2010s, I was writing for an audience at Kotaku that could number between 20-30 million people every month. I am now co-founder of a website that, asking for $7 in return for video game blogs, has 5000 paying subscribers (thank you everyone!).

That math works out for me because I am a normal person with normal person obligations like bills and groceries. It absolutely does not work out for companies that are on the hook for billions in data centre costs and loans, let alone lawsuits and licensing disputes. To even entertain the thought that the AI bubble could be propped up in any way by subscription costs, let alone commit your entire business model to it, is insane to a normal person!

But these aren't normal people. They're billionaires. And hearing Tan say "I do see value in it, and that’s why I pay for it", oblivious to what subscription costs (and fatigue) means to the average person on the street, made me think of this famous (if you're into basketball) line from Dwyane Wade's wife Gabrielle Union, where she explains how the multi-millionaire former pro athlete had no idea how much milk cost when he retired:

Wade's wife Gabrielle Union says he doesn't know how much milk costs or what happens at car washes: "He has no idea how much milk costs. He's like 'what is that, like $20?'"
by u/deadskin in nba

That's a cute story! Dwyane Wade seems like a nice guy, and I'm sure he now knows exactly how much milk costs. He was not, like Tan, a billionaire entirely divorced from the common human experience, trying to force a lake-boiling, job-displacing plagiarism machine on the world against its collective wishes.

There are a lot of ideas going around right now about how we can best curb the power of the billionaire class. Capping personal incomes, increasing taxes and tighter regulation of the share markets are all good places to start, but maybe I can add another one on top (they'd have to do this as well as pay more tax): every person who earns over $999 million has to spend a year stuck with the rest of us, doing what amounts to community service, in so much as it forces them to be part of a community. They have to draw a median salary, pay a mortgage, feed kids, commute to work and navigate the healthcare system, without their billions or associated support network to bail them out.

Let's see what dipshits like Tan think about subscription costs for AI slop when $20 is three hours' work. If that job hasn't been replaced with AI already.
