The McNamara Fallacy

You never know where it may lead. Also known as the quantitative fallacy.

Hello!

Today’s email is a little longer than usual—though I like it and it has pictures. Feel free to just stay for the sketch as always.

Jono

Sketchplanations is reader-supported. Paid subscriptions help me keep sketching.

The McNamara Fallacy is a belief in easy-to-measure quantitative metrics at the expense of ignoring hard-to-measure qualitative factors.

Robert McNamara was president of Ford Motor Company and later the Secretary of Defense for the USA during much of the war in Vietnam. He was highly intelligent and excelled at dealing with data and using it to inform strategy.

Coined by social scientist Daniel Yankelovich, the McNamara Fallacy, also called the Quantitative Fallacy, involves:

  • Prioritising easier-to-measure quantitative metrics and

  • Disregarding hard-to-measure qualitative metrics as unimportant

The fallacy can progress as follows:

  1. Measure what can be measured

  2. Disregard what we can’t measure

  3. Assume what cannot be measured is not important

  4. Conclude that what can’t be measured doesn’t exist

In the words of Yankelovich:

“The fallacy is: If you’re confronted by a complex problem that is full of intangibles, you decide to measure only those aspects of the problem that lend themselves to easy quantification, either because you find other aspects difficult to measure or because you assume that they can’t be very important or don’t even exist.” …

“It is a short, fatal step from the statement, ‘There are many intangibles and imponderables that we can’t put on our computers,’ to the statement, ‘Let’s measure what we can and forget about the intangibles.’”

Yankelovich cites working with Ford during McNamara’s time and sharing research that included both quantitative and qualitative factors. As the research was assessed, the qualitative data on the meanings people gave to small cars were discarded, and the less significant quantitative and demographic data were retained.

Examples of the McNamara Fallacy

I first learned about the fallacy from a reader’s article about one of the easiest-to-measure aspects of a bike: its weight. All things being equal, we’d probably prefer a lighter bike, but other aspects like maintenance, reliability, and handling are important yet harder to measure, report on, and compare. As a result, weight often comes to the fore at the expense of the others.

Other examples might include:

  • Perhaps your hiring time is down, but how well do the people you’re bringing in fit?

  • Maybe more people are visiting your website, but they aren’t the right people for your service.

  • We can calculate a country’s GDP, but GDP doesn’t account for human labour without a monetary transaction—like a home-cooked meal—and vital work done by nature, like filtering water, sequestering carbon, or lifting spirits.

  • If food in a can gives you all the nutrients you need, what are you missing by skipping family meals?

A commonly cited example of the McNamara Fallacy is the US military’s approach to measuring progress in the Vietnam War.

The McNamara Fallacy and the Vietnam War

As the US Secretary of Defense from 1961–1968, McNamara employed similar techniques to those he had used successfully in business contexts to assess the progress of the war in Vietnam.

If wars were won by inflicting damage on the enemy, then metrics measuring the damage inflicted should be decent proxies. In particular, enemy body count evolved into the primary measure of progress.

Reliance on purely quantitative metrics had significant shortcomings. In this context, enemy body count was more easily quantified than morale, political support, or resolve to defend. And because so much hinged on this measure, many officers believed it was also often inflated (see Goodhart’s Law and Campbell’s Law below).

Measuring the tons of explosives dropped is easier than measuring the reduction in capabilities they caused. Knowing the number of troops you have is easier to measure than the abilities of those troops.

According to the numbers, the U.S. was winning the war, yet it failed to overcome the resistance of North Vietnam.

Ideas Related to the McNamara Fallacy

An over-reliance on quantitative metrics quickly leads to several related problems:

Goodhart's law illustration: a manager is frustrated by thousands of tiny nails when nail-making is measured by the number of nails made, and pulls their hair out at giant nails when it is measured by weight

Goodhart’s Law: When a measure becomes a target, it ceases to be a good measure.

For example, assessing the progress of war on the numbers of enemy dead may lead to increased killing to inflate numbers. Or pinning a bonus on review ratings may lead to fake reviews.

Campbell's law illustrated with examples from elections and leading to fake news and a crackdown on crime distorting how it is reported and measured

Campbell’s Law: The more a metric counts for real decisions, the greater the pressure for corruption, and the more it distorts the situation it’s intended to measure.

For example, if reducing crime rates matters for law enforcement jobs, it creates an incentive to under-report cases or downgrade crimes.

Looking under the lamppost, the streetlight effect, or the drunkard's search: a person asks someone scrabbling on the floor under a lamppost at night if they've lost their keys. The person replies they lost them elsewhere, but the light's much better here.

The Streetlight Effect: Looking where it’s easiest to look, rather than where it matters. Also known as the drunkard’s search or looking under the lamppost, the Streetlight Effect is named after the economists’ joke of a person scrabbling on the ground for their car keys under a street light. When asked where they lost them, the person says they dropped them “over there,” but the light’s much better over here.

For example, optimising an existing product because it is known and does well, rather than working on an uncertain new product.

A fisherman illustrates the parable of the fishing net by concluding a minimum size of fish because they never see any smaller in their nets — you get what you measure

You Get What You Measure: The instrument you use to measure affects what you see.

For example, “In school it is easy to measure training and hard to measure education, and hence you tend to see on final exams an emphasis on the training part and a great neglect of the education part.” —Richard Hamming

What is the Cobra Effect? A visual explanation showing how a plan to pay for dead cobras in India backfired when people began breeding cobras to claim the reward.

The Cobra Effect: When an implemented policy backfires, causing the opposite of the intended outcome. Named after a British attempt to reduce cobras in Delhi by introducing a bounty on dead cobras. Seeing the lucrative bounty, people started farming cobras, thereby increasing their numbers.

For example, the Streisand Effect is a well-documented case of a cobra effect, in which Barbra Streisand tried to suppress images of the coast that included her house. The attention drawn to her attempts to suppress the images brought thousands of people to look at them who otherwise would never have considered it.

The Law of Unintended Consequences example explained - when a simple system regulates a complex system

And all of these are examples of the broader Law of Unintended Consequences, which comes from trying to regulate a complex system with a simple system.

If the ideas above aren’t enough, you might also see:

For a super discussion on the measurement of all sorts of things, including acute risks (being run over by a bus) and chronic risks (smoking), I recommend the fun podcast episode: Microlives & The Art of Uncertainty with Sir David Spiegelhalter

The article about bike weight is: The McNamara Fallacy and Bikes by Peter Verdone

Daniel Yankelovich, who named the McNamara Fallacy, stressed no disrespect to “one of our most distinguished citizens,” Robert McNamara, “a brilliant and dedicated man who brings a vital intensity to bear on his work.”

Some of the complexities of the war and McNamara are covered in the Academy Award-winning documentary The Fog of War, which makes fascinating and, at times, uncomfortable viewing.

Read the whole story
mrmarchant
1 hour ago

Pivoting Edtech Towards Humanity


With AI tutors underperforming the expectations of their creators and kids feeling increasingly negative about artificial intelligence, there is an opportunity and a mandate to pivot edtech towards humanity. What does that mean and what would it look like? Here is how I think about it.

Pivoting edtech towards humanity means using the power of technology to align one human’s desire to learn with another human’s desire to teach.

For Example

The physical classroom.

A bunch of people want to learn. A bunch of people want to teach. Until we connect them in time and space, those people are misaligned, their desire wasted. The technology of the physical classroom brings those desires into alignment.

Me, teaching in front of a projector with a kid watching a video, deep in thought.
Me, teaching in what appears to be the 17th century.

Digital media.

As a new teacher, I noticed kids had a desire to learn how math connected to the world outside the classroom. I had a desire to teach them about those connections but very little ability to do so because the world outside of the classroom was outside and we were inside. The technology of digital projectors and cameraphones let me capture the world outside the classroom and bring it inside for our analysis. Technology brought our desires to learn and teach into alignment.

For Counterexample

If you ever have the feeling that edtech isn’t all that interested in humanity, it’s frequently because:

Edtech companies try to align human desire to the power of technology.

Edtech companies often take a particular technology as the answer and then retrofit teaching and learning into the question. This is why, for years, various companies insisted that teaching is something very close to playing a video of an explanation, which makes “just play YouTube videos” seem like the answer to the question “why is teaching hard?”

Edtech companies ignore second-order effects.

A student feels like their class is moving a little slower than they’d like. An edtech company then suggests having every kid learn on computers, which let them work at different paces. The company ignores the second-order effect that “kids also like learning together and now they can’t.”

Misalignments That Interest Me Currently

Kids want to work on paper and it’s hard for teachers to know what they’re doing.

Teachers struggle to support a student’s thinking if it isn’t visible. Lots of thinking happens on paper and teachers often lack the time necessary to review and respond to it. How can we make that paper-based learning more legible for more teachers?

Coaches want to support teachers and teachers want their support.

Coaches often have too many teachers on their roster to support with model lessons and walkthroughs. Also, many teachers want support but not in the form of a model lesson or walkthrough. The desires to give and receive support are misaligned here.

Teachers want support in leading whole-class discussions.

Whole-class discussion is some of the most satisfying work for teachers and productive learning for students. But it is very hard work. Misaligned.

Caregivers want to support their kids but don’t know how.

Many parents and caregivers want to do more to support their kids’ learning than they do currently. But they often lack visibility into student learning and may need some education themselves. The reports that schools send home are frequently summative, low resolution, and a waste of ink or pixels overall. Teacher emails are much more useful but time-consuming for the teacher. What can schools and edtech companies do to help align caregivers, teachers, and kids here?

Who will do this work?

I could point to dozens of people doing this work of pivoting edtech towards humanity. They are exceptional. Many of them are my coworkers. Common to each of them is an excitement for new technologies and a desire to understand the work of teachers and the lives of students that I can only describe as “insatiable.” If that’s you, let me know what you’re working on in the comments.

Get a new post about teaching and technology on special Wednesdays. -Dan 👇

Featured Comments

My obituary for Khanmigo and AI tutors inspired so many of you to share your own stories of grief. This newsletter is here for you. One common interaction was particularly interesting. From deep within their grieving process, someone would ask:

What’s the alternative? If Khanmigo doesn’t work, what’s the Plan B?

I’d point out the effect of Saga’s tutoring interventions in Chicago Public Schools as one of many interventions that have had a positive effect on student learning. Still grieving, this person would respond that this intervention is “difficult to implement at scale.”

This is such a strange standard for evaluating interventions in education. It is very true that good things are often difficult and expensive while useless things are often easy and cheap. Many people mistake this fact as an argument for doing useless things! (My colleague Chris Blackett develops this idea more at Talent Lab.)

Mae Baltz mentions another kind of subsidy Khanmigo frequently received: administrative mandate.

As a teacher in one of the areas that received money for Khanmigo I was asked to have my students interact with Khanmigo at least 10 times per month (each student).

In spite of those mandates, Khan Academy reported yesterday that “only around 15% of students who have access to Khanmigo engage with it.” That indicates pretty serious misalignment.

Katelynn Petersen describes the difference between AI and human tutors. Please write this down somewhere!

As soon as I started hearing about tutors being replaced by AI, I knew that the people responsible for such nonsense had never tutored a day in their life. 40% is remembering to ask about the novel they are writing, the tea they spilled about their friends, or the language test they’ve been studying for all year. 40% of my time is spent just building confidence and reassuring students they’re doing the right thing. 20% is actually teaching math.

Odds & Ends

¶ I have very little to say about Khan Academy’s new venture, announced just before I posted my obituary last week. An edtech visionary is stymied by traditional education and retreats to the friendlier terrain of corporate e-learning. Am I talking about the $10,000 degree Khan Academy will offer in partnership with ETS and TED? Or am I talking about the $7,000 degree Udacity offered in partnership with Georgia Tech after their disastrous experience trying to support college freshmen 13 years ago. Answer: yes. Hop in the time machine, kids. NB: Read Glenda Morgan’s pre-mortem or John Warner’s polemic.

Marc Watkins writes about the same crisis of purpose in higher ed that I am seeing in a local eighth-grade class.

It’s easy to dismiss lazy students or burned-out teachers turning to AI, as many seem to do in the comment sections of social media posts, where we hear a litany of solutions from folks that range from bluebooks to oral exams to entire technology bans. But AI isn’t simply a crisis in assessment. No, the true crisis here is purpose.

¶ A couple of tremendous writers and thinkers take on the AI chatbot tutor’s promise of “infinite patience.” First, John Warner talks about his gratitude for the finite patience of his teachers.

Some of my most important formative educational experiences involved some teacher or authority figure losing patience with me.

Second, Julia Freeland Fisher asks, in a world of infinite patience, “whose while are you worth”:

AI’s champions often laud it as “infinitely patient.” AI’s unerring support is undoubtedly powerful, especially when time and resources are scarce. But it falls short of the experience that accompanies real patience: not just material support, but the feeling you are worth someone else’s while.

¶ I’ve worked in curriculum development for over a decade and this comment from Stanford’s Sam Wineburg on Justin Reich’s podcast earlier this year is a really rare insight.

Ultimately, curricula are not for kids. Curricula are for the teachers. And if the teachers don’t feel exuberant and don’t feel ennobled by being the mediators and the adapters of those curricula, they can be the best and most thought-out curricula in the world, but they’re ultimately going to find dust on some shelf.

¶ I’ll have more to say about the digital backlash someday. For now, I’ll let it suffice to say three things.

  1. I think the coalitions that are forming are among the wackiest I’ve ever seen on any issue.

  2. As a parent, I wish schools would scrutinize their use of edtech more closely.

  3. I think the edtech companies that understand teaching and learning, that prioritize the humanity of teachers and learners, are probably going to be fine.


Why we need shared private internets


Once upon a time all conversations were private.

Two people in dialogue. A small group in a debate. What was said only left the room if someone decided it should.

For all of human history until two decades ago, private communication was the default. Words traveled as far as our voices could carry them. Or, if we chose, we wrote them down with the intention of distribution.

The internet changed this. What was shared between two people could now be observed and reshared by the world. Our feelings, personal information, and who we talked to became data distributed to corporations, governments, and agencies beyond our control. With our explicit-ish consent on social media, and without it through widespread surveillance in support of ad-based models and suspicious states.

We've gone from a world that's default private to a world that's default public. Lured by the ever-present promise of infinite scale, human interaction shifted from private exchange to public arena. Even when we think what we're doing is private. No one agreed to this except through byzantine terms of use none of us read.

The performative cultures, insincere provocations, and broken social contracts that surround us today are the result.

Why we need dark forests

This was the tension I felt in 2019 when I wrote "The Dark Forest Theory of the Internet." I'd grown up on the early web where message boards, blogs, and a less centralized internet made a degree of privacy the assumed state of things. But the internet I wrote about seven years ago had become something else: a battleground where being visible meant being exposed.

Back then I observed that I and a growing number of others were moving "into dark forests to avoid the fray." Dark forests were group chats, private spaces, and podcasts that had more in common with the physical world than with the internet we knew. Wary of being watched, afraid of being taken out of context, and tired of threats from trolls and adversaries, people changed how they showed up.

That trend has accelerated. We now live in a world where everything public feels like an ad and the only things that feel real are private. We use the public internet to make money and achieve social status, accepting that being open online also means being less safe.

Why we need privacy

There's a powerful book from 2020 by Oxford philosopher Carissa Véliz called Privacy Is Power that explains why this matters. She writes:

"Privacy is about being able to keep certain intimate things to yourself — your thoughts, your experiences, your conversations, your plans. Human beings need privacy to be able to unwind from the burden of being with other people. We need privacy to explore new ideas freely, to make up our own minds. Privacy protects us from unwanted pressures and abuses of power."

What I felt personally, Véliz diagnoses politically. When the primary means of distributing information become less safe, we all suffer. Through the demands of public performance, tyranny reigns.

What's true of individuals is also true of groups. Trust requires believing that values and desires are shared. New ideas need to be debated privately before being scrutinized publicly. Ideas among people can grow even bigger when they aren’t immediately broadcast to all.

We've gotten so used to the default of thinking infinite scale is a good thing, we've come to think of privacy as a bad thing. But wanting privacy isn't a retreat. Craving more space for your ideas doesn’t mean you're doing something dangerous. Privacy is about having agency over your own thoughts and words, and how far they spread. Privacy is about control.

The missing middle

DFOS is an attempt to build the missing middle. The space where creative projects and culture get made (see earlier essays here and here). A space for the groups and communities that the current internet doesn't understand because they don't want to play today's attention games. Spaces that feel more like the physical world than the scaled internet we increasingly avoid.

We think of these as shared private internets. De-scaled spaces that take advantage of all the powerful tools of the web while intentionally limiting unwanted reach. 

We're using software to make this, but the goals aren't technical. They're applied philosophy. The values and experiences of how we want to live — safe, secure, connected — expressed as implements that help us get there.

DFOS Private Alpha 2.0 — shipping next week

When we're stuck between what we don't want and what we can't have, somebody needs to break the dam. Probably a lot of somebodies. Eventually, instead of choosing between one or the other, there’s a better third thing. And just maybe, over time, a world where choosing more privacy doesn’t mean choosing less power.

We need shared private internets. Not because we want more technology or scale. Exactly the opposite. We want more time with each other. We want to feel safe. We want to be ourselves, whoever we are.


“I believe in an old-fashioned virtue called Doing...


“I believe in an old-fashioned virtue called Doing the Freakin’ Work. Read the book, not the summary. Write the piece, not the prompt. Suffer like the artist you are. It ain’t easy, but if it were easy, it wouldn’t be worth doing.”


The Scapegoat


Yes, AI is changing things in the corporate world, but let’s be clear: The humans are driving the actual change. McClatchy proves it.

The Scapegoat

McClatchy is a company that screams legacy. Nearly 170 years old, it has acquired a number of significant newspapers over the years, most notably in 2006, when it acquired the iconic Knight Ridder chain.

It is a company that has faced many challenges over its long history, notably filing for bankruptcy around the time of the COVID-19 outbreak. Even after merging with the former owner of the National Enquirer (really), it is barely holding on, and on top of that it has to figure out this whole AI thing.

One of my favorite metaphors is the idea of using a wrench in place of a hammer. It technically works, but it’s not what the tool is for. AI tools are often the wrench of technology. And McClatchy just found its wrench.

According to The Wrap (paywall), the chain is pushing its journalists to use AI tech to repackage content in multiple directions. The technology was sold to the employees as Grammarly on steroids, and the hint seems to be that those who don’t accept this technology will be on thin ice career-wise.

“Journalists who embrace and experiment with this tool are going to win,” McClatchy VP of Local News Eric Nelson said recently, per the publication. “Journalists who are defiant will fall behind. Bottom line: We need more stories and we need more inventory.”

McClatchy is effectively using Claude to take already-written stories, repackage the reporting, and reuse it in whatever ways are necessary. Put another way, the company is trying to scale up for the arms race that is SEO, social media, and Google Discover.

The problem is that these journalists are now going to have their bylines on content that AI actively wrote and repackaged, while the company attempts to limit the say those journalists have in the matter. From the piece:

Kathy Vetter, McClatchy’s chief of staff for local news, said during the March 17 meeting that the company’s general policy was that reporters who cannot revoke the use of their bylines must keep them attached to CSA-produced stories. For those who can revoke their byline, she said, McClatchy will still use their work anyway.

“We have every right to use their work,” she said, according to multiple sources familiar with the meeting. “It belongs to us, and if an editor wants to go … in there and repurpose a reporter’s content, they can put their name on it.”

Unions have gotten involved, limiting how those bylines get used, but not every paper has a union.

Looking for a little help in figuring out your approach to productivity? If you’re a Mac user, be sure to give Setapp a try. The service makes available hundreds of apps that can help you get more focused, simplify complex processes, even save a little time—all for one low monthly cost. Learn more at the link.

When you use AI, one hand is always robotic. (photos via DepositPhotos.com)

An unwanted byline introduces murky questions

What’s fascinating about the Wrap piece is the divide between journalists and executives that it exposes. VPs and business staffers seem excited about the opportunities this opens up. Journalists are upset that their names are going to be associated with work they didn’t actually write.

I’m not a lawyer, but the decision to essentially force non-unionized employees to include their bylines on pieces they didn’t write feels like it could be legally risky to me. Let me pose a scenario: Let’s say one of these LLM stories gets something wrong, and a journalist gets strong pushback on social media about the story, maybe even death threats, even though they didn’t write it. Does that put the newspaper at risk of a lawsuit from their own employee? Given our current culture, that does not seem far-fetched.

There are other risks, too: Imagine a defamation lawsuit against a journalist based on an error AI introduced, for example. And for readers, it might introduce a misrepresentation risk that gets a regulator like the Federal Trade Commission to weigh in, potentially even restricting the use of AI in news content. The parallels to the Wild West of early adtech are hard to miss.

If it were the government forcing this situation, that byline might even be seen as “compelled speech,” though employers have a lot more leverage. Nonetheless, it points at a moral wrong of sorts, a breaking of norms, and one that feels avoidable. After all, journalists typically have the right to take their bylines off of pieces, even if McClatchy appears to be quietly eliminating that right.

McClatchy’s attempt to make this shift highlights the weakening power dynamic between the company and its newsroom. And AI is the justification.

That robot hand is gonna hit its limit at some point.

A truism about AI: It’s often a scapegoat

Another headline I stumbled upon around the same time points, I think, to a broader issue: Often, AI is just used as a reason to do something that employees would otherwise be uncomfortable with.

This week, Meta announced a plan to start tracking employees’ mouse and keyboard input, with the idea of building training data for its AI agents. See, it’s okay if we spy on you, because it’s for AI.

Let’s be clear, if Meta wanted to do this, it would just do it. It doesn’t need to attach AI as an excuse. But the addition makes it generally more palatable.

Likewise, if McClatchy wanted to have a bunch of inexperienced interns or non-journalists repackage content in haphazard, over-the-top ways, it could just do it. If it wanted to strip employees of the right to take their name off a story, it could just do it. But AI gives it enough of a sheen to take attention off the fact that there’s nothing stopping it from just doing it because today is a day that ends in y.

And I think that’s ultimately the point I want to get at here. Employers are going to say a lot of things in the coming years and blame AI for doing those things. After all, it’s a great wrench for hammering in nails. But let’s not be silly: It’s also an excellent excuse to sweep a lot of other changes through, whether it’s layoffs or costing employees some of their taken-for-granted rights.

In Wizard of Oz parlance, don’t let the flashy visuals fool you: There’s a human behind the curtain, making the choices that could reshape your life and career.

Wrench-Free Links

So John Ternus is gonna be Apple’s new CEO. Good for him, it’s a well-deserved promotion and it could help make Apple a little less conservative with some of its decision-making. One thing hinted about in recent coverage was that the MacBook Neo was his baby, and its success proved to Tim Cook that he was leaving Apple in good hands. Sounds like a good first sign.

The new Beck single, “Ride Lonesome,” is such a weird tune. It sounds like he intentionally went back to “The Golden Age,” the leadoff track of his classic breakup album Sea Change, changed a chord or two, and shipped it off to the label. He’s lucky that his music is so good that he can John Fogerty himself.

Shout-out to the new pasta sauce microphone manufacturer, Prego.

--

Find this one an interesting read? Share it with a pal! And back at it soon.

And thanks again to Setapp for sponsoring.


Spaced Repetition: Beginner Guide/FAQ
