
How to Teach Kids to Evaluate Information (Before AI Teaches Them Not To)


A first-grader asks an AI chatbot why the sky is blue and takes the answer at face value. A high-schooler scrolls through a social media feed of takes on a current event without much framework for sorting the factual from the manufactured. Both scenes share the same pattern: a generation growing up on fluent, confident-sounding information, with no working practice for deciding what to trust. By the time the first-grader is old enough to vote, the pattern will have been reinforced for more than a decade.

Every information shift demands a new literacy. Twentieth-century kids learned to tell the news from the commercials, and to recognize a reporter’s byline as a different kind of authority than a pundit’s opinion. The internet brought a harder task, because anyone could publish anything, and a slick-looking website wasn’t automatically a trustworthy one. Social media layered an invisible filter on top, since the content reaching a kid had been sorted and shaped by systems optimizing for engagement long before anyone chose what to look at. AI is the latest turn of this arc, and it collapses the task further: a chatbot answer arrives confident and without citations, potentially with errors nested inside well-formed sentences and no visible trace of who produced it or how. So for a kid without evaluation skills, a news article and a viral post can land with roughly the same weight, and the cost of failing to tell them apart only grows as they get older.

Kids who grow up practicing evaluation skills enter adulthood able to meet whatever new claim shows up in front of them and hold it against a working understanding of how information gets made and whose interests it serves. They can disagree with the people around them because they’ve traced arguments back to their origins, and they’re better equipped to update their thinking when the evidence calls for it. What they then carry into adulthood is the ability to form their own opinions rather than inherit someone else’s. That ability shapes everything downstream of it, from the choices they make about their health and money to the votes they cast and the people they trust.


Card Catalog teaches information literacy for the AI age: how to evaluate what you’re reading and how to process what you find. Learn how to stay informed without the overwhelm. Join 20K+ readers here ↓


What we mean by information literacy

Library science has been developing evaluation skills for well over a century, and the field has a formal framework for teaching them. In 2016, the Association of College and Research Libraries published the Framework for Information Literacy for Higher Education. The framework replaced an earlier set of teaching standards from 2000 that had been organized around a checklist approach, where students learned to locate sources and evaluate them against fixed criteria.

The new framework moved toward a set of “threshold concepts.” A threshold concept is the kind of idea that reorganizes how someone thinks about a whole field once they grasp it. Learning about compound interest, for instance, permanently shifts how a person reads any decision involving money over time, in a way that wouldn’t have been available before the concept landed. The framework identified six such concepts in information literacy and called them “frames,” each describing a dimension of how information functions in the world. A kid who’s been taught the frames doesn’t just know the procedural steps of evaluating a source; she sees information differently, which is what carries her across whatever new tools and contexts she’ll meet over a lifetime.

The framework was written for college students and their librarians. But the frames it describes apply just as well to the information environments kids are navigating at home and in K-12 classrooms. One librarian practice sits upstream of the frames, shaping every information interaction that follows it: the reference interview.

The reference interview

At library reference desks, the question someone asks out loud is almost never the question they really need answered. A student might walk up and ask for “something about the French Revolution” when the material they need is something more specific, like a study of women’s political organizing in late-eighteenth-century Paris. Answering the literal question would send them in the wrong direction, so librarians developed a short protocol for surfacing the real question first. That protocol is called the reference interview.

The interview uses a handful of specific techniques:

  • Open-ended follow-ups that invite the patron to say more about the underlying project

  • Questions about what the patron already knows, and what they plan to do with the answer once they have it

  • Restating the question back to check that it’s been heard correctly

A competent reference interview can turn a vague question into a sharp one, and that sharper question is what makes the rest of the research succeed. Without it, hours can be lost following a poorly framed question down paths that were never going to answer what the patron actually needed to know.

The same practice transfers to how kids interact with any information source, including school research and AI chatbots. Teaching kids to interview their own questions before consulting any source is one of the most useful habits information literacy can build. A kid who learns this practice young carries it into every research task they’ll do, and the small early work of clarifying a question saves substantial wasted effort on vague or misguided answers later.

In the home: This move matters most right before your kid types into a chatbot or search engine, since these tools produce confident-sounding answers to vague questions just as readily as to sharp ones. Next time you’re about to look something up together, pause before either of you starts the search. If your kid asks “what’s the best video game console,” try responding with “best for what kind of games?” The question that comes back is usually one with a more useful answer. With a younger child wondering why sharks are dangerous, asking “what are you trying to figure out about them?” can surface what they were really after. Doing this with your kid gives them a model they can draw on later, when they’re consulting a chatbot or search bar without anyone next to them.

In the classroom: Before any research assignment, pair students up and have them interview each other’s topics, with one student playing librarian and the other playing researcher. A short exchange of this kind reliably turns a vague prompt into better research questions. The habit transfers to how students interview their own questions when they’re working alone, which is where the exercise has its biggest effect across the course of a research project.

The six frames

The reference interview clarifies what’s actually being asked before any source gets consulted. The six frames that follow do the next layer of work, which is evaluating what those sources contain once the question itself is clear. Each frame names a different dimension of how information functions in the world, and each one comes with its own way of reading sources that holds up across formats, platforms, and tools.

1. Not every expert is an expert on everything.

Framework name: Authority Is Constructed and Contextual

Authority isn’t a permanent quality a person has. Specific communities grant authority for specific claims, and that authority rarely travels smoothly across domains. A cardiologist has real authority on heart disease, but that authority doesn't extend to questions about mental health treatment, even though she holds a medical degree. A celebrity endorsing a wellness product brings name recognition to the product, but name recognition isn’t the same as knowing whether the product actually works.

This frame replaces the flat question of whether a source is trustworthy with the sharper question of what a source is trustworthy on. AI output complicates the picture further, because a chatbot answer carries the tonal texture of expertise without coming from any particular expert. Kids who learn this frame young develop the habit of asking what is this person trained in, and does this claim fall inside that area?

In the home: When your kid says “my teacher said...” or “the doctor said...” take the opening to ask a light contrast question. You can say “Does your dentist know a lot about teeth? What about building rockets?” The contrast makes the point without lecturing: expertise is specific to the area someone trained in. Over time, your kid can begin to ask on their own whether a source is speaking from their area of training or reaching beyond it.

In the classroom: Give students a short piece of writing where a credentialed person makes several different kinds of claims. An op-ed by a famous doctor on a political issue works well, as does a business executive writing about a scientific topic. Have students mark which claims fall inside the writer’s training and which reach outside it. The exercise demonstrates that credentials don’t transfer automatically across domains, and the question of where someone’s authority ends becomes one students can ask of any source going forward.

2. How information gets made shapes what it can tell us.

Framework name: Information Creation as a Process

A peer-reviewed paper and an AI answer can both claim to be sources of knowledge, but they were produced by radically different processes, and the process behind each one shapes what kind of knowledge it can hold. A peer-reviewed paper takes months or years to produce, and it passes through multiple rounds of expert review before it gets published. A chatbot answer takes seconds to generate, with no human editor involved at the moment of writing. Both arrive at a reader looking like information, but only one was shaped by a process designed to catch errors.

This frame teaches kids to ask how a piece of information was produced before deciding how much to trust it. A peer-reviewed article has been through checks that catch errors, so a kid can lean on it more heavily. A chatbot answer or a social media post has been through no checks at all, so anything that matters in it should be verified against a source that did go through review. The habit to build is trusting a source in proportion to how carefully it was made.

In the home: When your kid quotes something to you, take the moment to ask where they got it. “Did you read that in a book, hear it in class, see it on YouTube, or get it from AI?” The answer decides how much work the claim still needs. A fact from a textbook has been through some kind of review, which catches basic errors even though it doesn’t catch things like motivated omissions or framing choices. A chatbot answer hasn’t been through any review at all, so anything important needs a second verifiable source before the family treats it as true.

In the classroom: Pick a topic students are already researching, and have the class find two kinds of sources on it: an AI answer and an encyclopedia entry or textbook chapter on the same question. Ask students to infer what they can about how each source was made, based on the kind of source it is. From there, they can calibrate their trust: lean on the more carefully made source, and verify anything important from the less carefully made one.

3. Behind every piece of information, somebody wants something.

Framework name: Information Has Value

Anything created for an audience came into existence for a reason, and that reason shapes the content in ways that aren’t always obvious from the surface. Advertisements are built around the goal of converting viewers into customers, which determines the kinds of claims they make and the kinds of evidence they offer. Political messaging follows the same logic for a different end, shaping content around the goal of moving an audience toward a particular position. AI adds a new dimension to this issue, since many of the major models were trained on copyrighted material without compensation or consent, and what those models now produce serves the commercial interests of the companies running them.

The frame teaches kids to ask what the maker was after when they made the piece. The answer changes what shows up in the final content and what gets left out, which means noticing the maker’s motivation is where most of the evaluation work happens. Kids who grow up reading information this way develop a default question they can ask of anything: who benefits when I take this at face value? Motivation is one of several dimensions of this ACRL frame, alongside questions of attribution, access, and whose voices get heard; it’s the dimension most directly applicable to the content kids encounter in everyday life.

In the home: When your kid wants to watch a free video or play a free game, ask them “who do you think is paying for this to exist?” Whatever they answer, the question opens a conversation you can come back to: free content always has a cost attached, and the cost just gets paid in something other than money. For a free video, your kid is paying with attention that the platform sells to advertisers. For a free game, they’re paying with time and data, and often with temptation to buy upgrades inside the game. The same approach surfaces motivations beyond money in any other kind of content. Naming this out loud, repeatedly, builds the question your kid will start asking on their own about anything that reaches them: who’s getting something out of me reading or watching this?

In the classroom: Pick a piece of content the class has seen, and spend twenty minutes tracing what the maker wanted from the audience. The incentive might be commercial, where an advertiser or sponsor is working to drive sales, or it might be persuasive, where a campaign or organization is working to shift opinion on an issue. The exercise puts the question of motivation in front of students directly, so they can practice reading content for the purpose driving it rather than only for what it says on the surface.

4. Good questions get sharper as you ask them.

Framework name: Research as Inquiry

Research doesn’t happen in a single act of looking something up. A question goes in, partial answers come back, the question gets refined based on what those partial answers reveal, and the cycle repeats. Meaningful questions rarely have clean single answers, and researching well includes the capacity to stay with a question while it sharpens rather than collapsing it too early into a premature conclusion.

AI chat trains the opposite reflex: a chatbot offers a single finished answer to a single question, and the interaction feels complete as soon as it ends. Kids who only practice that pattern miss the underlying skill of refining questions as they learn more. The skill is what separates researching a topic from simply collecting answers.

In the home: When your kid asks you a question, try bouncing it back first: “What do you already think about that?” or “How could we find out together?” These simple questions invite your kid into the thinking process rather than handing them a finished answer. Over time, children can develop their own opinions and their own research instincts, instead of waiting for the adult in the room to tell them what’s true.

In the classroom: For any research assignment, build in two checkpoints where students rewrite their research question based on what they’ve learned so far. By the end of the project, students can compare their final question to the one they started with and see how it changed. The point of the exercise is to make the iterative nature of research visible, so students experience research as something that reshapes the question itself (and not just a hunt for answers to a fixed one).


Librarians don’t just help you find information. We help you know what to do with it once you have it. Card Catalog applies that same expertise to the age of AI and information overload. Join 20K+ readers here ↓


5. No serious idea stands alone.

Framework name: Scholarship as Conversation

Substantive ideas rarely get developed by people working in solitude. They emerge from groups of people thinking and writing in response to each other over time, with each contribution shaped by what came before it and shaping what comes after. Making sense of something means recognizing the larger argument it's a part of, including the earlier work it's reacting to and the work it will provoke in turn.

The frame shows up vividly in academic writing, where literature reviews and citations make the conversation visible on the page. The same pattern operates in journalism that builds on earlier reporting and in political writing that engages older arguments. AI-generated summaries do something more concerning, because they compress many conversations into a single confident-sounding voice that erases the evidence a conversation ever existed.

In the home: When you’re reading a book or watching a show with your kid, point out the moments where the book or show is reacting to something else. You can say things like “This story is making fun of those old fairy tales we read last year” or “This creator made a video to argue with what that other person posted last week.” Comments like these position the book or show as part of a larger dialogue rather than as a standalone pronouncement, introducing the question a kid can carry into anything they read or watch: what is this responding to or building on?

In the classroom: Take any text students are already reading and teach them to look at its bibliography or works cited section. The simplest question to ask is who the author is building on or arguing with. Walk the class through a single citation chain: pick one reference in the text, and look at that source together to see what it says and how the original text engaged with it. Students learn that every serious piece of writing is part of a longer, larger conversation, which can change how they read everything afterward.

6. Finding good information takes more than one search.

Framework name: Searching as Strategic Exploration

Skilled searching looks nothing like a single query producing a single result. It’s a process of trying something, seeing what comes back, adjusting based on what the results reveal about the topic and the tools, and trying again with sharper terms. The craft lives in knowing when to try again and what to vary when the first search doesn’t deliver.

AI chat has flattened this craft for many users, because a chatbot offers the surface of an answer on the first try. The pattern trains our reflexes away from iteration. Kids who grow up inside that reflex miss the underlying skill, which is the ability to move strategically through an information landscape when a first attempt doesn’t give them what they need.

In the home: When your kid watches you look something up, show them that you try more than one search. Say out loud when a search doesn’t give you what you need, and try a different approach. You can narrate it directly, with something like “Hmm, that didn’t work, let me try different words,” or “this site doesn’t seem reliable, let me look somewhere else.” The modeling teaches your kid that search is an iterative process rather than a single click.

In the classroom: Put several search tools in front of students and have them run the same question through each one. A library database and an AI chatbot will return noticeably different results for the same question, and comparing the differences shows students how information is structured in ways that aren’t visible from any single tool. The exercise also makes clear that no search tool gives a complete picture, because each one is designed to prioritize certain results over others. The ability to turn to multiple sources and refine a search over time is the foundation of the verification strategies students will rely on as they get older.

The arc over time

The information environment kids are growing up in now will keep shifting, and new tools will keep arriving. What makes the ACRL framework durable across these shifts is that the frames describe dimensions of information that stay stable when the tools change. Authority is always specific to a domain. Information is always produced through some process, and the process shapes what the output can carry. Every piece of information moves through systems of incentive, and every serious idea takes part in a larger conversation, whether or not the tool it arrives through makes the conversation visible.

Kids raised on these habits grow into adults who can encounter new information and truly see what’s in front of them. They can recognize when an author is reaching beyond their expertise, when incentives are shaping what gets said, when an article is in conversation with other articles, and when the first search result isn’t the last word. They can grow into voters and citizens who think the way they want to think rather than the way someone else wanted them to. This capacity gets built in the small conversations of home and classroom, long before the first vote gets cast or the first big decision needs making.

What these skills add up to has a name: information resilience, the capacity to meet any claim, from any source, without being knocked off course by what arrives. A resilient reader can stay with a question while it sharpens, and can hold her ground when the people around her are settling on conclusions faster than the evidence warrants. She has an internal sense for what’s worth a closer look, built up over years of practice. That kind of resilience is what parents and teachers can give the kids growing up now, and it will keep working long after any specific tool or platform has been replaced.


The free essays are the foundation. The paid tier is the applied toolkit: biweekly AI briefings, monthly subscriber-driven research, and quarterly guides that give you real skills you can use immediately, plus a growing framework library (and classes coming soon). Upgrade to paid if you want the full Card Catalog. Thank you for being here!


Have you read the Founding Member Report: The State of AI yet?
A comprehensive guide for information navigators who want to understand where AI is actually heading and what it means for how we find, evaluate, and use information in 2026.
Find out more here.


The Art and Science of Customer Education


Introduction

I’ve spent the better part of the last decade studying how people actually learn. The question I keep coming back to is this: when does an experience merely feel educational, and when does it actually help someone understand enough to do something differently?

The difference is easy to miss because good experience design is persuasive. An experience can feel thoughtful, elevated, and full of intention. It can spark curiosity. It can make you want to belong. And still, it can fail at the thing that matters most: helping you understand what to do next.


That was the question I kept thinking about when I visited Nespresso’s flagship in Flatiron. The store is positioned as a destination for discovery: tastings, masterclasses, theatrical design, staff who act more like guides than salespeople. I walked in expecting something close to an interactive museum for coffee. Unfortunately, I walked out having learned very little, having bought nothing, and feeling more loyal to the machine I already owned.

Here is the thing, though. I was exactly the kind of customer this store should convert because:

  1. I drink coffee every day

  2. I care about convenience more than craft

  3. I’m a sucker for good design

  4. My coffee machine had recently broken

What’s more, Nespresso’s own strategic challenge, documented in an IMD case study, is attracting younger consumers to replenish an aging customer base. I was standing in their store, credit card effectively out, waiting to be taught why their product was the right one. The flagship had one job.

What the store got right

The space itself is stunning: calm, beautifully lit, and designed with a care you can feel the second you walk in. It reminded me of the Harry Potter store in New York: that same sense that someone had designed not just a retail environment, but a world. And that instinct is sound. Falk and Dierking’s research on museum learning shows that physical context directly shapes what people notice, engage with, and retain. A well-designed space creates the kind of openness that makes people willing to explore rather than retreat.

Nespresso carries that instinct further than most. Within a few feet of the entrance, there is a self-serve coffee station. A friendly staff member greeted me, explained the blend, and demonstrated the standard machine. Before anyone tried to sell me anything, I was already holding a cup and using the product. That is smart design. It is also, whether Nespresso knows the term or not, a first step in building self-efficacy: give someone an early experience of success before asking them to do anything hard.

This is the art of customer education. It makes a person curious enough to step closer and creates the feeling that this product might be for someone like me.

But art only opens the door, and what happened next showed me exactly why that isn’t enough.

The latte machine

Right next to the standard machine was a more advanced one for lattes. Now, I love lattes, and the thought of making one at home every morning on a beautifully designed machine was exactly the aspiration Nespresso was hoping to sell. I had never used a latte machine before, but in that moment, I wanted to learn.

All images are from Google Maps because...I forgot to take pictures

There was no one to ask. The staff member had moved on, and nothing else picked up the gap. No signage. No visual guide. No “start here” prompt. I pressed a few buttons, could not figure it out, and defaulted to the simpler machine.

In learning science, there is a concept called a teachable moment: a narrow window where someone is aware of a gap in their own understanding and motivated to close it. The latte machine was a textbook case. I had already cleared the hardest threshold. The store did not need to teach me everything about milk frothing. It just needed to help me across one small bridge, from curiosity to a first successful attempt. A laminated three-step card would have worked. A QR code to a thirty-second video would have worked. Instead, the window closed; I shrugged and moved on.

That is the thing about teachable moments: they do not wait. The gap between “I want to try this” and “OK, I guess I cannot” is seconds, not minutes.

The aroma station

The latte machine was a failure of support at the point of need. What happened next was a different kind of failure: a missing pathway from discovery to action.

Near the machines was a perfume-style aroma station where you could smell different coffee profiles by pressing a pump. It was clever, theatrical, and I loved it. I found a profile that felt like exactly my taste.

I looked up to figure out what to do next. There was a wall of capsules at the very back of the store, but nothing connecting the scent I had just discovered to any specific product on that wall. No sample pack. No card with the blend name. No small sign saying: if you liked this, start here. I was standing there holding a preference I had learned thirty seconds ago, with no way to act on it.

The store helped me discover a taste but gave me no pathway from that discovery to a decision. Desire without understanding or confidence is just a nice memory.

What this visit taught me

These two moments are different failures, but they point to the same structural gap. In both cases, the store created a compelling experience and then quietly asked the customer to bridge the hard part alone.

That gap is what clarified something I have been thinking about for a while. Customer education is not one job. It is four, and they are sequential.

First, experience. The customer interacts with the product in a low-stakes, inviting way. Nespresso does this beautifully.

Second, understanding. When the customer hits something unfamiliar, support appears at the moment of need, in the format that fits the moment. This is where the latte machine failed.

Third, confidence. The customer starts to feel, in some small but real way: I can use this. I can tell the difference. I can make a good choice. Bandura called this self-efficacy, and it is built through successful attempts, not through exposure. The aroma station created exposure without any path to a successful attempt.

Fourth, commitment. Once the customer has enough confidence, the path from interest to decision should be clear.

You cannot reliably skip from experience to commitment without building understanding and confidence in between. But that is exactly what most customer education tries to do. It creates a compelling first encounter and then hopes the customer will figure out the rest.

Why this matters beyond coffee

That pattern shows up constantly in digital products, and especially in AI. A company creates a magical first output: an onboarding flow that feels polished or a demo that makes the system look effortless. The user sees the promise immediately.

But what comes next remains unclear: how to repeat the outcome, how to make a good choice, how to trust the product enough to change their actual workflow. The experience is impressive, but the user is still not capable.

That is the gap between the art and the science of customer education. The art creates openness. It gets someone to step closer. But the science is what carries them through: the support and sequencing that turn interest into capability.

Nespresso invested deeply in the art and left the science almost entirely to chance.

The companies that get this right will not be the ones with the most polished demos or the prettiest flagship stores. They will be the ones that understand a harder truth: delight opens the door, but capability is what closes the sale.

Akanksha is a learning engineer who has spent the last decade working across classrooms, EdTech, and AI-enabled product teams. She writes AWU Field Notes about learning science, customer education, and what it actually takes to help people go from confused to capable.



The Accursèd Alphabetical Clock


The Accursèd Alphabetical Clock. “This clock displays the current time alphabetically.” Totally deranged…I love it.


Haversine Distance


If you have two GPS coordinates and want the distance between them, it is tempting to treat latitude and longitude like ordinary x/y coordinates and use the Euclidean distance formula.

That shortcut breaks down once the Earth’s curvature matters. Latitude and longitude are positions on the surface of an approximately spherical planet, so the shortest surface path between two points is not a straight line on a flat plane. It is an arc along the sphere.

The haversine formula is a practical way to compute that arc length. Given two lat/lon pairs, it tells you the great-circle distance between them: the shortest distance along the surface of a sphere.

What the haversine formula measures

Imagine two points on the Earth:

  • Point 1: latitude \(\phi_1\), longitude \(\lambda_1\)
  • Point 2: latitude \(\phi_2\), longitude \(\lambda_2\)

The shortest path constrained to the surface of a sphere lies on a great circle, which is any circle whose center matches the sphere’s center. The equator is a great circle. So is the circle cut by the plane through London, New York, and the Earth’s center.


If we can find the central angle \(\theta\) between the two points, measured from the center of the Earth, then the surface distance is just arc length:

\[d = r\theta\]

where \(r\) is the radius of the sphere and \(\theta\) is measured in radians.

This is the standard arc length formula. A full circle has circumference \(2\pi r\) and angle \(2\pi\) radians, so an angle of \(\theta\) covers the fraction \(\theta / 2\pi\) of the circle. The corresponding fraction of the circumference is:

\[d = \frac{\theta}{2\pi} \cdot 2\pi r = r\theta\]

So the problem reduces to: given two lat/lon pairs, how do we compute \(\theta\)?

From spherical geometry to haversine

The spherical law of cosines gives the central angle between two points on a sphere:

\[\cos(\theta) = \sin(\phi_1)\sin(\phi_2) + \cos(\phi_1)\cos(\phi_2)\cos(\lambda_2 - \lambda_1)\]

This is correct, but in practice it can lose precision for very small distances, where \(\theta\) is close to zero and \(\cos(\theta)\) is close to 1.
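To make the precision issue concrete, here is a small sketch in JavaScript (the coordinate pair is just an illustration: two points roughly a meter apart on the equator):

```javascript
// Two points on the equator roughly 1 meter apart
// (about 0.000009 degrees of longitude at the equator).
const toRad = (deg) => deg * Math.PI / 180;

const phi1 = toRad(0);
const phi2 = toRad(0);
const dLambda = toRad(0.000009);

// Spherical law of cosines: nearly all of the significant digits of
// cos(theta) are spent on the leading 0.99999..., leaving acos very
// little precision to work with.
const cosTheta =
  Math.sin(phi1) * Math.sin(phi2) +
  Math.cos(phi1) * Math.cos(phi2) * Math.cos(dLambda);

console.log(cosTheta); // within about 1e-14 of 1
```

At this separation, \(1 - \cos(\theta)\) is near the edge of double precision, which is exactly the regime where the haversine form helps.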

The haversine form rewrites the same relationship in a numerically friendlier way.

The haversine function is defined as:

\[\operatorname{hav}(x) = \sin^2\left(\frac{x}{2}\right)\]

To get from the spherical law of cosines to the haversine form, we use the cosine difference identity and the half-angle relationships below:

\[\cos(a - b) = \cos(a)\cos(b) + \sin(a)\sin(b)\] \[\cos(x) = 1 - 2\sin^2\left(\frac{x}{2}\right)\] \[\operatorname{hav}(x) = \sin^2\left(\frac{x}{2}\right) = \frac{1 - \cos(x)}{2}\]

From the last identity, we can rearrange to get:

\[1 - \cos(x) = 2\operatorname{hav}(x)\]

Let:

\[\Delta \phi = \phi_2 - \phi_1\] \[\Delta \lambda = \lambda_2 - \lambda_1\]

First apply the cosine difference identity to \(\Delta \phi\):

\[\cos(\Delta \phi) = \cos(\phi_1)\cos(\phi_2) + \sin(\phi_1)\sin(\phi_2)\]

Rewrite the spherical law of cosines using \(\Delta \lambda\):

\[\cos(\theta) = \sin(\phi_1)\sin(\phi_2) + \cos(\phi_1)\cos(\phi_2)\cos(\Delta \lambda)\]

Now solve the previous identity for \(\sin(\phi_1)\sin(\phi_2)\):

\[\sin(\phi_1)\sin(\phi_2) = \cos(\Delta \phi) - \cos(\phi_1)\cos(\phi_2)\]

Substitute that into the spherical law of cosines:

\[\cos(\theta) = \cos(\Delta \phi) - \cos(\phi_1)\cos(\phi_2) + \cos(\phi_1)\cos(\phi_2)\cos(\Delta \lambda)\]

Now factor out \(\cos(\phi_1)\cos(\phi_2)\):

\[\cos(\theta) = \cos(\Delta \phi) - \cos(\phi_1)\cos(\phi_2)\left(1 - \cos(\Delta \lambda)\right)\]

Next convert everything into haversines by using \(1 - \cos(x) = 2\operatorname{hav}(x)\):

\[1 - \cos(\theta) = 1 - \cos(\Delta \phi) + \cos(\phi_1)\cos(\phi_2)\left(1 - \cos(\Delta \lambda)\right)\] \[2\operatorname{hav}(\theta) = 2\operatorname{hav}(\Delta \phi) + 2\cos(\phi_1)\cos(\phi_2)\operatorname{hav}(\Delta \lambda)\]

Divide both sides by 2:

\[\operatorname{hav}(\theta) = \operatorname{hav}(\phi_2 - \phi_1) + \cos(\phi_1)\cos(\phi_2)\operatorname{hav}(\lambda_2 - \lambda_1)\]

Now replace each haversine term with \(\operatorname{hav}(x) = \sin^2\left(\frac{x}{2}\right)\):

\[\operatorname{hav}(\theta) = \sin^2\left(\frac{\theta}{2}\right)\] \[\operatorname{hav}(\Delta \phi) = \sin^2\left(\frac{\Delta \phi}{2}\right)\] \[\operatorname{hav}(\Delta \lambda) = \sin^2\left(\frac{\Delta \lambda}{2}\right)\]

So:

\[\sin^2\left(\frac{\theta}{2}\right) = \sin^2\left(\frac{\Delta \phi}{2}\right) + \cos(\phi_1)\cos(\phi_2)\sin^2\left(\frac{\Delta \lambda}{2}\right)\]

In code, we usually name the right-hand side \(a\):

\[a = \sin^2\left(\frac{\Delta \phi}{2}\right) + \cos(\phi_1)\cos(\phi_2)\sin^2\left(\frac{\Delta \lambda}{2}\right)\]

Then:

\[\sin^2\left(\frac{\theta}{2}\right) = a\]

Since the central angle on a sphere satisfies \(0 \le \theta \le \pi\), we have \(0 \le \theta/2 \le \pi/2\), so the sine is nonnegative. Taking square roots gives:

\[\sin\left(\frac{\theta}{2}\right) = \sqrt{a}\]

Apply inverse sine to recover the central angle:

\[\theta = 2\arcsin(\sqrt{a})\]

Many implementations use the equivalent and numerically stable form:

\[c = 2\operatorname{atan2}(\sqrt{a}, \sqrt{1-a})\] \[d = rc\]

where \(c = \theta\) is the central angle in radians.

Why half-angles appear

The half-angle terms are not arbitrary. They come from the identity:

\[\operatorname{hav}(x) = \frac{1 - \cos(x)}{2}\]

This matters because for small angles, directly subtracting from 1 can amplify floating-point error. Writing the expression in terms of \(\sin^2\left(\frac{x}{2}\right)\) behaves better numerically.

That is one reason the haversine formula became the standard practical choice for many navigation and mapping problems.

Coordinate units matter

Latitude and longitude are usually stored in degrees, but JavaScript, C, Python, and most math libraries expect trigonometric inputs in radians.

So before applying the formula:

\[\text{radians} = \text{degrees} \cdot \frac{\pi}{180}\]

If you forget this conversion, the result will be wrong by a large factor.

A practical JavaScript implementation

Here is a compact implementation:

const EARTH_RADIUS_METERS = 6371008.8;

function toRadians(degrees) {
  return degrees * Math.PI / 180;
}

function haversineDistance(
  lat1,
  lon1,
  lat2,
  lon2,
  radius = EARTH_RADIUS_METERS
) {
  const phi1 = toRadians(lat1);
  const phi2 = toRadians(lat2);
  const dPhi = toRadians(lat2 - lat1);
  const dLambda = toRadians(lon2 - lon1);

  const sinHalfDPhi = Math.sin(dPhi / 2);
  const sinHalfDLambda = Math.sin(dLambda / 2);

  const a =
    sinHalfDPhi * sinHalfDPhi +
    Math.cos(phi1) * Math.cos(phi2) *
    sinHalfDLambda * sinHalfDLambda;

  const c = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));

  return radius * c;
}

This returns the distance in meters when using the default Earth radius \(r = 6371008.8\) meters.

Example:

const london = { lat: 51.5074, lon: -0.1278 };
const newYork = { lat: 40.7128, lon: -74.0060 };

const distanceMeters = haversineDistance(
  london.lat,
  london.lon,
  newYork.lat,
  newYork.lon
);

console.log(distanceMeters / 1000); // about 5570 km

Interpreting the terms

The formula has a clean geometric interpretation:

  • \(\sin^2\left(\frac{\Delta \phi}{2}\right)\) captures north-south separation
  • \(\sin^2\left(\frac{\Delta \lambda}{2}\right)\) captures east-west separation
  • \(\cos(\phi_1)\cos(\phi_2)\) scales longitude separation by latitude

That last term is important. Lines of longitude converge toward the poles, so one degree of longitude does not represent a fixed physical distance everywhere on Earth. Near the equator it spans a large distance. Near the poles it shrinks dramatically.
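A quick sketch of that scaling (the function name is mine; the radius matches the one used in the implementation above):

```javascript
// East-west meters spanned by 1 degree of longitude at a given latitude,
// on a sphere with the same mean radius as the implementation above.
const R = 6371008.8;
const toRad = (deg) => deg * Math.PI / 180;

const metersPerDegreeLon = (latDeg) => toRad(1) * R * Math.cos(toRad(latDeg));

console.log(metersPerDegreeLon(0));  // ~111,195 m at the equator
console.log(metersPerDegreeLon(60)); // ~55,597 m: half, since cos(60 deg) = 0.5
console.log(metersPerDegreeLon(89)); // ~1,941 m close to the pole
```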

Why not use plain Euclidean distance?

Suppose two points differ by 1 degree of longitude.

At the equator, that corresponds to roughly 111 km. Near latitude 60 degrees, it is roughly half that. Near the poles, it approaches zero. A flat 2D distance formula ignores this geometry entirely, so errors become larger as distances grow or as you move away from the equator.

Projecting to a local plane

For small regions, a common approach is to approximate the Earth’s surface as flat and work in a local Cartesian coordinate system. The idea is to convert latitude and longitude offsets into metric distances at the reference latitude of the region.

Given a reference point \((\phi_0, \lambda_0)\) and a nearby point \((\phi, \lambda)\), the offsets in meters are approximately:

\[\Delta x = (\lambda - \lambda_0) \cdot \cos(\phi_0) \cdot M\] \[\Delta y = (\phi - \phi_0) \cdot M\]

where \(M \approx 111{,}320\) meters per degree (one degree of arc along a great circle). The \(\cos(\phi_0)\) factor accounts for the fact that longitude lines converge toward the poles: the east-west distance per degree of longitude shrinks as latitude increases.

The Euclidean distance in the local plane is then:

\[d \approx \sqrt{\Delta x^2 + \Delta y^2}\]

This approximation is called the equirectangular projection (or plate carrée projection). It is fast and simple, but the error grows with distance from the reference point and with latitude. The flat-earth assumption breaks down once the angular separation between the points is large enough that curvature matters.
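As a sketch, the projection can be coded in the same style as the haversine implementation above (the function name and the choice of the first point as the reference are my own):

```javascript
// Equirectangular approximation, following the delta-x / delta-y formulas above.
const METERS_PER_DEGREE = 111320; // M: meters per degree of great-circle arc

function toRadians(degrees) {
  return degrees * Math.PI / 180;
}

function equirectangularDistance(lat1, lon1, lat2, lon2) {
  // Use the first point as the reference (phi_0, lambda_0).
  const dx = (lon2 - lon1) * Math.cos(toRadians(lat1)) * METERS_PER_DEGREE;
  const dy = (lat2 - lat1) * METERS_PER_DEGREE;
  return Math.sqrt(dx * dx + dy * dy);
}

console.log(equirectangularDistance(0, 0, 0.01, 0)); // ~1,113 m
```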

As a rough guideline:

Separation  | Typical equirectangular error
< 10 km     | < 0.01 %
~ 100 km    | ~ 0.3 %
~ 1000 km   | ~ 3 % or more

Beyond a few tens of kilometers, or near the poles where the \(\cos(\phi_0)\) factor changes rapidly over the region, haversine is the safer default.

Accuracy and limitations

The haversine formula assumes the Earth is a perfect sphere. It is not. The real Earth is better modeled as an oblate spheroid, slightly flattened at the poles. The polar radius is about 6357 km and the equatorial radius is about 6378 km, a difference of roughly 21 km (0.3 %).

Haversine uses a mean spherical radius of approximately 6371 km. The mismatch between sphere and spheroid introduces a systematic error that varies with the direction of travel.

The worst-case error of haversine against the WGS84 ellipsoid is roughly 0.5 %, which corresponds to about 5 meters per kilometer of distance. The error is largest for long paths that run diagonally between equatorial and polar latitudes, and much smaller for paths along the equator or along a meridian, where a single effective radius close to the mean fits the path well.

As a practical table:

Distance  | Max haversine error (approx.)
1 km      | ~ 5 m
10 km     | ~ 50 m
100 km    | ~ 500 m
1000 km   | ~ 5 km

These are worst-case estimates. Typical real-world paths have errors closer to half of these figures.

For comparison, the equirectangular approximation over the same 1000 km would accumulate errors of 30 km or more. Haversine is therefore a large improvement over flat-Earth approximations, and accurate enough for most practical navigation and mapping work.
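To see the comparison concretely, here is a standalone sketch that inlines both formulas (the equirectangular variant and its reference-point choice are my own; the London and New York coordinates come from the example above):

```javascript
const R = 6371008.8;
const toRad = (deg) => deg * Math.PI / 180;

// Great-circle distance via the haversine formula derived above.
function haversine(lat1, lon1, lat2, lon2) {
  const phi1 = toRad(lat1);
  const phi2 = toRad(lat2);
  const dPhi = toRad(lat2 - lat1);
  const dLam = toRad(lon2 - lon1);
  const a =
    Math.sin(dPhi / 2) ** 2 +
    Math.cos(phi1) * Math.cos(phi2) * Math.sin(dLam / 2) ** 2;
  return R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

// Flat-plane approximation using the first point as reference.
function equirect(lat1, lon1, lat2, lon2) {
  const M = 111320; // meters per degree of arc
  const dx = (lon2 - lon1) * Math.cos(toRad(lat1)) * M;
  const dy = (lat2 - lat1) * M;
  return Math.sqrt(dx * dx + dy * dy);
}

const h = haversine(51.5074, -0.1278, 40.7128, -74.0060);
const e = equirect(51.5074, -0.1278, 40.7128, -74.0060);

console.log(h / 1000); // ~5570 km
console.log(e / 1000); // ~5260 km: several percent short at this range
```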

Where haversine is not enough:

  • High-precision surveying, cadastral mapping, or scientific geodesy require ellipsoidal methods such as Vincenty’s formulae or Karney’s geodesic algorithm (used in GeographicLib).
  • For routing over roads, the dominant error usually comes from the fact that roads do not follow great-circle arcs, not from the spherical assumption.

So the key question is not “Is haversine perfect?” but “Is spherical great-circle distance the right approximation for this system?”

Numerical stability notes

The atan2 form is preferred:

\[c = 2\operatorname{atan2}(\sqrt{a}, \sqrt{1-a})\]

rather than:

\[c = 2\arcsin(\sqrt{a})\]

Both are mathematically equivalent, but atan2 tends to behave better near the edges of floating-point precision.

Also note that due to rounding, a can occasionally drift slightly above 1 for nearly antipodal points. In defensive production code, it is reasonable to clamp:

const safeA = Math.min(1, Math.max(0, a));

before evaluating the final angle.
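A small sketch of why the clamp matters (the drifted value of a is constructed by hand to mimic rounding in the nearly antipodal case):

```javascript
// A value of "a" that has drifted just above 1 through rounding
// (stored as 1.0000000000000002 in double precision).
const a = 1 + 3e-16;

// Without clamping, 1 - a is a tiny negative number, its square root is
// NaN, and the whole distance comes out as NaN.
const unsafe = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
console.log(unsafe); // NaN

// With clamping, the nearly antipodal case resolves to an angle of pi,
// i.e. half the circumference once multiplied by the radius.
const safeA = Math.min(1, Math.max(0, a));
const safe = 2 * Math.atan2(Math.sqrt(safeA), Math.sqrt(1 - safeA));
console.log(safe); // Math.PI
```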


How Engineers Kick-Started the Scientific Method



In 1627, a year after the death of the philosopher and statesman Francis Bacon, a short, evocative tale of his was published. The New Atlantis describes how a ship blown off course arrives at an unknown island called Bensalem. At its heart stands Salomon’s House, an institution devoted to “the knowledge of causes, and secret motions of things” and to “the effecting of all things possible.” The novel captured Bacon’s vision of a science built on skepticism and empiricism and his belief that understanding and creating were one and the same pursuit.

No mere scholar’s study filled with curiosities, Salomon’s House had deep-sunk caves for refrigeration, towering structures for astronomy, sound-houses for acoustics, engine-houses, and optical perspective-houses. Its inhabitants bore titles that still sound futuristic: Merchants of Light, Pioneers, Compilers, and Interpreters of Nature.

Engraved title page of “The Advancement and Proficience of Learning,” with ship and globes. Public domain.

Bacon didn’t conjure his story from nothing. Engineers he likely had met or observed firsthand gave him reason to believe such an institution could actually exist. Two in particular stand out: the Dutch engineer Cornelis Drebbel and the French engineer Salomon de Caus. Their bold creations suggested that disciplined making and testing could transform what we know.

Engineers show the way

Drebbel came to England around 1604 at the invitation of King James I. His audacious inventions quickly drew notice. By the early 1620s, he unveiled a contraption that bordered on fantasy: a boat that could dive beneath the Thames and resurface hours later, ferrying passengers from Westminster to Greenwich. Contemporary descriptions mention tubes reaching the surface to supply air, while later accounts claim Drebbel had found chemical means to replenish it. He refined the underwater craft through iterative builds, each informed by test dives and adjustments. His other creations included a perpetual-motion device driven by heat and air-pressure changes, a mercury regulator for egg incubation, and advanced microscopes.

De Caus, who arrived in England around 1611, created ingenious fountains that transformed royal gardens into animated spectacles. Visitors marveled as statues moved and birds sang in water-driven automatons, while hidden pipes and pumps powered elaborate fountains and mythic scenes. In 1615, de Caus published The Reasons for Moving Forces, an illustrated manual on water- and air-driven devices like spouts, hydraulic organs, and mechanical figures. What set him apart was scale and spectacle: He pressed ancient physical principles into the service of courtly theater.

Drebbel’s airtight submersibles and methodical trials echo in the motion studies and environmental chambers of Salomon’s House. De Caus’s melodic fountains and hidden mechanisms parallel its acoustic trials and optical illusions. From such hands-on workshops, Bacon drew the lesson that trustworthy knowledge comes from working within material constraints, through gritty making and testing. On the island of Bensalem, he imagines an entire society organized around it.

Beyond inspiring Bacon’s fiction, figures like Drebbel and de Caus honed his emerging philosophy. In 1620, Bacon published Novum Organum, which critiqued traditional philosophical methods and advocated a fresh way to investigate nature. He pointed to printing, gunpowder, and the compass as practical inventions that had transformed the world far more than abstract debates ever could. Nature reveals its secrets, Bacon argued, when probed through ingenious tools and stringent tests. Novum Organum laid out the rationale, while New Atlantis gave it a vivid setting.

A final legacy to science

Engraved title page of Bacon’s Novum Organum, with ships between two pillars. Public domain.

That devotion to inquiry followed Bacon to the roadside one day in March 1626. In a biting late-winter chill, he halted his carriage for an impromptu trial. He bought a hen and helped pack its gutted body with fresh snow to test whether freezing alone could prevent decay. Unfortunately, the cold seeped through Bacon’s own body, and within weeks pneumonia claimed him. Bacon’s life ended with an experiment—and set in motion a larger one. In 1660, a group of London thinkers hailed Bacon as their inspiration in founding the Royal Society. Their motto, Nullius in verba (“take no one’s word for it”), committed them to evidence over authority, and their ambition was nothing less than to create a Salomon’s House for England.

The Royal Society and its successors realized fragments of Bacon’s dream, institutionalizing experimental inquiry. Over the following centuries, though, a distorting story took root: Scientists discover nature’s truths, and the rest is just engineering. Nineteenth-century “men of science” pressed for greater recognition and invented the title of “scientist,” creating a new professional hierarchy. Across the Atlantic, U.S. engineers adopted the rigorous science-based curricula of French and German technical schools and recast engineering as “applied science” to gain institutional legitimacy.

We still call engineering “applied science,” a label that retrofits and reverses history. Alongside it stands “technology,” a catchall word that obscures as much as it describes. And we speak of “development” as if ideas cascade neatly from theory to practice. But creation and comprehension have been partners from the start. Yes, theory does equip engineers with tools to push for further insights. But knowing often follows making, arising from things that someone made work.

Bacon’s imaginary academy offered only fleeting glimpses of its inventions and methods. Yet he had seen the real thing: engineers like Drebbel and de Caus who tested, erred, iterated, and pushed their contraptions past the edge of known theory. From his observations of those muddy, noisy endeavors, Bacon forged his blueprint for organized inquiry. Later generations of scientists would reduce Bacon’s ideas to the clean, orderly “scientific method.” But in the process, they lost sight of its inventive roots.


How the Wealthy Game Disability Laws for Ivy League Gains


Student accommodations in elite lecture halls are reshaping the ways that privilege manifests in America. At Stanford, 38% of students are registered as disabled. At Harvard and Brown, more than 20% of students are registered as disabled. Better testing and greater comfort talking about disability have helped spur this rise in ways that reflect social and medical progress, but this statistical anomaly is not just about medicine; it reflects a two-tiered system where we “accommodate” the elite but often abandon the poor and those who need our support most.


The proliferation of accommodation plans, known as 504 plans (after a section of federal law that prohibits discrimination based on disability), has made even the most academically rigorous universities more welcoming to students with disabilities. Those 504 accommodations include extended test time, note-taking services, and special testing rooms, but many students have reported that they also confer priority housing, the ability to live in a single, and meal plan benefits. The most common diagnoses in these 504 plans are ADHD, neurodivergence, and mental health conditions including severe anxiety.

There’s a real tension at the heart of this data: on one hand, critics say that students are gaming the system. No formal evaluation is required for a 504 accommodation, only a documented physical or mental impairment from a doctor’s note. Parents submit a request to a school administrator, and most schools get to decide what types of materials they request, according to the Department of Education. On the other hand, advocates say that the rise is a sign of progress. They say that the underlying mental and physical disabilities were always there for students, but only now are medical professionals, schools, and parents doing a better job of actually diagnosing the disabilities.

At the University of Chicago, the number of students registered as disabled has more than tripled over the past eight years; at UC Berkeley, it has nearly quintupled over the past 15 years. Only 14% of Americans under 35 have a disability, but at schools like Amherst, 34% of students report a disability.

Who Really Benefits from the ADA?

What is clear is that wealthy families tend to benefit most from this system. The rates of students claiming disabilities seem to be far higher in places with higher median household incomes. In Weston, Connecticut, where the median household income is $220,000, nearly 1 in 5 students claims a disability. Just 30 minutes north in Danbury, Connecticut, where the median household income is $83,000, the rate is eight times lower.

Still, the system remains open to abuse by those who know how to game it. The Varsity Blues college-admissions scandal showed that there are wealthy parents who are willing to pay unscrupulous doctors to provide disability diagnoses to their non-disabled children, securing them extra time on standardized tests. Studies have found that students exaggerate symptoms when they go in for these tests, making it hard for doctors to evaluate the children.

The wealthiest students across America claim the most disabilities.

Students in every ZIP code are dealing with anxiety, stress and depression as academic competition grows ever more cutthroat. But the sharp disparity in accommodations raises the question of whether families in moneyed communities are taking advantage of the system, or whether they simply have the means to address a problem that less affluent families cannot. While experts say that known cases of outright fraud are rare, wealthy parents who want to give their children every advantage in life are spending tens of thousands of dollars on tests and finding doctors who may offer the neuropsychological diagnosis they think their child needs.

The Cleveland Metropolitan School District, one of the poorest in the country, had a 504 rate of less than 1 percent. One mother in Montgomery County, Maryland, transferred her son, who has ADHD and a reading disability, from a public high school to a private one that charges $45,000 per year in tuition. When the boy arrived, the staff at the new school told his mother about ACT accommodations she had not known about. Her son scored a 33 after taking the exam over multiple days, and is now considering applying to Ivy League schools.

Buying Time and Abandoning the Meritocracy

Many companies are now making millions off this side-door cottage industry that both gives students more time on tests and helps them do better once they are enrolled in elite universities. “Get ACT Extra Time,” reads one blunt web advertisement from the Cognitive Assessment Group. Dr. Wilfred van Gorp, who runs the company, says he assesses 24 patients a month and charges $6,000 per patient (amounting to $1.73M per year). About 70 percent of the patients he sees leave with a diagnosis. But Dr. van Gorp has a checkered past himself. He once testified that the Genovese crime boss Vincent Gigante was mentally impaired, though years later Mr. Gigante admitted that he had feigned his mental illness. Dr. van Gorp says that he was tricked.

The shift began in 2008 when Congress amended the Americans with Disabilities Act (ADA) to restore the law’s original intent. The government broadened the definition of disability, effectively expanding the number of people the law covered. In response to the 2008 amendments, the Association on Higher Education and Disability (AHEAD), an organization of disability-services staff, released guidance urging universities to give greater weight to students’ own accounts of how their disability affected them, rather than relying solely on a medical diagnosis. Schools began relaxing their requirements. A 2013 analysis of disability offices at 200 postsecondary institutions found that most “required little” from a student besides a doctor’s note in order to grant accommodations for ADHD.

Source: The Times
