
The College Board’s New Method for Raising AP Scores


by John Moscatiello

The College Board has released a preliminary explanation of what I have called the Great Recalibration of AP Exams. The report confirms that the central thesis of my article was correct: hundreds of thousands of AP scores have been intentionally “recalibrated” upward since 2022. While I avoided sensational headlines about score “inflation,” I called the increase in scores a “radical transformation” of the AP program. But I had no idea just how radical the transformation actually was until the College Board released this report. 

Apparently, the College Board has not just been raising AP scores since 2022; it has completely reinvented the methodology it uses to assign them. And this methodology has been designed to achieve a very specific result: it ensures that virtually all AP Exams have “the same 60%–80% success rates” (i.e., AP scores of 3 or higher). The methodology produces only this predetermined outcome. For most AP subjects, that means the scores stay the same, but for 14 subjects (including many of the most popular), it produces massive increases in scores. Whatever this methodology is, it never results in lower AP scores.

The new methodology, which the College Board calls “Evidence Based Standard Setting (EBSS),” was nowhere to be found on the College Board’s website until very recently, even though the report claims it has been used for the last three AP Exam seasons. As recently as May of this year, the pre-2022 scoring methodology was presented on the College Board’s website to explain how AP scores are assigned. In other words, the College Board’s public explanation of AP scoring has been inaccurate for the past three years. This is important because that page is virtually the only public source for this information, apart from some presentations at AP conferences and events.

How EBSS Works

This new EBSS methodology has added layers of complexity to an already opaque process. I am neither a psychometrician nor a statistician, so I am not qualified to comment on the 2013 study cited in the College Board report as the basis for this framework. But the version of EBSS methodology in the College Board’s report seems to connect all kinds of disparate data into a single framework. The report does not explain how all this data forms a coherent methodology, so we can only assess these individual data points on their own terms. 

The College Board’s report insists that the EBSS process “is especially well-suited for ensuring that AP standards and scores are not tugged higher by the well-documented increases in college grades over the past 30 years.” That seems like good news: the “inflation” of college grades is not causing an “inflation” of AP scores. But then this table appears, explicitly comparing college grades to AP scores with no explanation of how AP scores are not being influenced by higher college grades.

Then the College Board presents two more tables explicitly comparing AP scores to college grades. I don’t understand. Are college grades used to help determine AP Exam scores or not? How is college grade “inflation” not “inflating” AP scores? 

The EBSS methodology goes beyond comparing AP scores and college grades. Incredibly, the methodology uses 10th-grade PSAT scores to help compare college history classes to the AP U.S. History Exam. Besides the College Board’s own warning “not to ‘overuse’ test results” beyond their specific purpose, the connection between 10th-grade PSAT scores and college course performance is tenuous at best. And look at the column on the left: college grades are listed with AP scores with no explanation of how AP scores “are not tugged higher” by college grades.

The EBSS methodology sometimes incorporates data that isn’t really data. Everyone who has been to high school and college knows that high school students spend more time in the classroom than college students do. Do we really need a whole bar graph to tell us this? This has been true for the entire history of the Advanced Placement program, so why would scores be increasing now? This graph also reminds us just how different college courses and AP courses actually are: the instructional hours are different, the pacing is different, the assignments are different. How does this graph help account for a 24% increase in AP scores of 3 or higher in a single year?

Finally, there is an awkwardly written claim that seems to imply that the College Board is mining data from its AP Classroom platform to help establish AP scores. The report praises AP Classroom for providing “more granular and targeted student performance data that is now available within a very short operational window for analysts to utilize for identifying student performance at basic, moderate, and exceptional levels.” The College Board has conceded that some materials in AP Classroom are not well aligned to real AP Exams. AP teachers have been instructed not to use the personal progress checks (PPCs) in AP Classroom to inform classroom grades, but the College Board is using them (or other questions?) to inform actual AP scores? Hopefully, a future report can clarify whether student performance in AP Classroom is in any way influencing how actual AP scores are determined.

“Easier” Rubrics

The report includes a section about “easier” AP History rubrics. Apparently, there were so many scores clustered at the lowest end of the rubric that they had to lower the bar to create a wider distribution: “If all points on the rubric are equally difficult to obtain, the scoring process will not generate as much data about students at the novice and intermediate levels of performance as it will about the most advanced students.” This is a perfectly valid reason for changing the rubrics, but it raises a legitimate question about why so many students were clustered at the lowest end of the scoring scale in the first place.

In this section, we also learn that the “complexity” point on AP History rubrics “provided no measurement value” because it “was rarely used by graders.” This of course raises questions about the similar “sophistication” point on AP English rubrics, which only about 8% of students earn. By this reasoning, the sophistication point should be simplified too. Is the future of Advanced Placement one in which complexity is less complex and sophistication less sophisticated?

The Future of AP Scores

The College Board should be commended for releasing this report. It provides much more insight into the process than has been previously revealed. But AP teachers have rightly been confused and frustrated by the lack of transparency until this point. Why are we learning about this methodology for the first time three years into the process? 

And why has the process been dragged out over several years? The College Board presented data in 2021 that showed the need to raise AP English Language scores, yet that exam has not been recalibrated. The report also confirmed that five more AP subjects (AP English Language, AP Environmental Science, AP Human Geography, AP Latin, and AP Physics 1) will be recalibrated according to the EBSS methodology. We do not know when they will be adjusted, but at least we have clarity about what to expect in the coming years.

The future of AP scores is now becoming clearer. By 2025, the Advanced Placement program will look very different than it did just a few years ago. We now know that nearly all AP Exams will be digital in 2025. We know the answer to Chester Finn’s question “Are AP Exams Getting Easier?” The answer is yes. Between 2022 and 2026, approximately 1 million more AP Exams will receive scores of 3 or higher as a result of the College Board’s new method for raising AP scores. As a teacher who wants students to succeed and earn college credits, I welcome the change. As an observer of the standardized testing space for the past two decades, I am amazed that it has taken three years for us to learn anything about this process.


Follow us on Facebook or X for the latest updates on AP Exams.

John Moscatiello

John Moscatiello is the founder of Marco Learning. He has been a teacher, tutor, and author since 2002. Over the course of his career, John has taught more than 4,000 students, trained hundreds of teachers, written content for 13 test preparation books, and worked as an educational consultant in more than 20 countries around the world.

The post The College Board’s New Method for Raising AP Scores appeared first on Marco Learning.

Read the whole story
mrmarchant
2 hours ago
reply
Share this story
Delete

Reality emerges


Abstract digital artwork with multicoloured nodes and connecting lines on a brown background resembling a network or graph.

Particles are nature’s smallest constituents, but that doesn’t mean they’re fundamental. So what is the Universe made of?

- by Felix Flicker

Read on Aeon


AI didn't delete your database, you did


Last week, a tweet went viral showing a guy claiming that a Cursor/Claude agent deleted his company's production database. We watched from the sidelines as he tried to get a confession from the agent: "Why did you delete it when you were told never to perform this action?" Then he tried to parse the answer to either learn from his mistake or warn us about the dangers of AI agents.

I have a question too: why do you have an API endpoint that deletes your entire production database? His post rambled on about false marketing in AI, bad customer support, and so on. What was missing was accountability.

I'm not one to blindly defend AI; I always err on the side of caution. But I also know you can't blame a tool for your own mistakes.

In 2010, I worked with a company that had a very manual deployment process. We used SVN for version control. To deploy, we had to copy trunk, the equivalent of the master branch, into a release folder labeled with a release date. Then we made a second copy of that release and called it "current." That way, pulling the current folder always gave you the latest release.

One day, while deploying, I accidentally copied trunk twice. To fix it via the CLI, I edited my previous command to delete the duplicate. Then I continued the deployment without any issues... or so I thought. Turns out, I hadn't deleted the duplicate copy at all. I had edited the wrong command and deleted trunk instead. Later that day, another developer was confused when he couldn't find it.

All hell broke loose. Managers scrambled, meetings were called. By the time the news reached my team, the lead developer had already run a command to revert the deletion. He checked the logs, saw that I was responsible, and my next task was to write a script to automate our deployment process so this kind of mistake couldn't happen again. Before the day was over, we had a more robust system in place. One that eventually grew into a full CI/CD pipeline.

Automation helps eliminate the silly mistakes that come with manual, repetitive work. We could have easily gone around asking "Why didn't SVN prevent us from deleting trunk?" But the real problem was our manual process. Unlike machines, we can't repeat a task exactly the same way every single day. We are bound to slip up eventually.
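The script that replaced our manual process only had to encode a couple of rules to make that mistake impossible. Here's a minimal sketch of that kind of guardrail, using hypothetical repository paths rather than the company's actual pipeline:

```python
# Sketch of a guarded deploy step (hypothetical repo layout, not the
# actual pipeline): build the copy commands for a release, and refuse
# any delete that targets trunk itself.
from datetime import date

TRUNK = "svn://repo/trunk"

def plan_release(release_date=None):
    """Return the svn commands for one release: trunk -> dated folder -> current."""
    stamp = (release_date or date.today()).isoformat()
    release = f"svn://repo/releases/{stamp}"
    return [
        ["svn", "copy", TRUNK, release, "-m", f"release {stamp}"],
        ["svn", "copy", release, "svn://repo/current", "-m", "update current"],
    ]

def safe_delete(path):
    """Allow deleting a duplicate release copy, but never trunk."""
    if path.rstrip("/") == TRUNK:
        raise ValueError("refusing to delete trunk")
    return ["svn", "delete", path, "-m", f"remove {path}"]
```

The point isn't the specific commands; it's that the script, unlike a human editing a previous CLI command, applies the trunk check every single time.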

With AI generating large swaths of code, we get the illusion of that same security. But automation means doing the same thing the same way every time. AI is more like me copying and pasting branches: it's bound to make mistakes, and it's not equipped to explain why it did what it did. The terms we use, like "thinking" and "reasoning," may look like reflection from an intelligent agent. But these are marketing terms slapped on top of AI. In reality, the models are still just generating tokens.

Now, back to the main problem this guy faced. Why does a public-facing API that can delete all your production databases even exist? If the AI hadn't called that endpoint, someone else eventually would have. It's like putting a self-destruct button on your car's dashboard. You have every reason not to press it, because you like your car and it takes you from point A to point B. But a motivated toddler who wiggles out of his car seat will hit that big red button the moment he sees it. You can't then interrogate the child about his reasoning. Mine would have answered simply: "I did it because I did it."
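Removing the button entirely is the real fix, but when a destructive operation genuinely must exist, the usual mitigation is to make it impossible to trigger casually. A minimal sketch of that idea, with hypothetical names rather than anything from the actual service in the tweet:

```python
# Sketch of guarding a destructive operation (hypothetical names): the
# handler refuses to run in production and demands an explicit
# confirmation phrase everywhere else.
def drop_database(env, confirm=""):
    """Delete all data -- only outside production, and only when the
    caller supplies the exact confirmation phrase."""
    if env == "production":
        raise PermissionError("destructive endpoints are disabled in production")
    if confirm != "DELETE ALL DATA":
        raise ValueError("missing confirmation phrase")
    return "database dropped"
```

An agent (or a toddler, or a tired developer) that calls this casually gets an error instead of an empty database.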

I suspect a large part of this company's application was vibe-coded. The software architects used AI to spec the product from AI-generated descriptions provided by the product team. The developers used AI to write the code. The reviewers used AI to approve it. Now, when a bug appears, the only option is to interrogate yet another AI for answers, probably not even running on the same GPU that generated the original code. You can't blame the GPU!

The simple solution is to know what you're deploying to production. The more realistic one: if you're going to use AI extensively, build a process where competent developers use it as a tool to augment their work, not as a way to avoid accountability. And please, don't let your CEO or CTO write the code.


Influential study touting ChatGPT in education retracted over red flags


A study that claimed OpenAI’s ChatGPT can positively impact student learning has been retracted nearly one year after publication. The journal publisher, Springer Nature, cited “discrepancies” in the analysis and a lack of confidence in the conclusions—but not before the paper racked up hundreds of citations and made the rounds on social media.

“The paper's authors made some very attention-grabbing claims about the benefits of ChatGPT on learning outcomes,” said Ben Williamson, a senior lecturer at the Centre for Research in Digital Education and the Edinburgh Futures Institute at the University of Edinburgh in Scotland, in an email to Ars. “It was treated by many on social media as one of the first pieces of hard, gold standard evidence that ChatGPT, and generative AI more broadly, benefits learners.”

The retracted paper attempted to quantify “the effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking” by analyzing results from 51 previous research studies. Its meta-analysis calculated the effect size between various studies’ experimental groups that used ChatGPT in education and control groups that did not use the AI chatbot.
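For context, the effect size such a meta-analysis pools is typically a standardized mean difference such as Cohen's d: the gap between the group means divided by their pooled standard deviation. A minimal sketch with illustrative numbers (not the study's actual data):

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference between a treatment group (e.g. one
    that used ChatGPT) and a control group, using the pooled SD."""
    pooled_var = ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    return (mean_t - mean_c) / math.sqrt(pooled_var)

# Illustrative: treatment mean 80 vs. control mean 75, both SD 10, n=30 each
# -> d = 0.5, a "medium" effect by the usual rule of thumb.
```

Pooling dozens of such d values from heterogeneous studies is exactly where a meta-analysis can go wrong, which is what the cited "discrepancies in the analysis" are about.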





Another death in the AI-in-education family


Well here we are, still reeling from the emotional devastation of Dan Meyer’s poignant eulogy for the recently deceased Khanmigo, only to have yet another tragic death in the AI-in-education family. Please join me, won’t you, in offering condolences on the recent passing of ChatGPT’s “Study Mode,” and may its memory be a blessing.

What’s that, you don’t remember Study Mode? Tragedy on top of tragedy. You see, one year ago almost to the day, a relentless wave of negative press hit OpenAI as journalists discovered that students were using chatbots en masse to avoid effortful thinking, aka “to cheat.” As such, the company quickly announced a new product feature within ChatGPT that would ensure it would be used solely for educational good rather than evil: Study Mode. And OpenAI promised us that when Study Mode was activated, ChatGPT would “be engaging and interactive, and to help students learn something—not just finish something.”

There were, of course, some obvious challenges with Study Mode, as people quickly observed. For one thing…

…and for another…

Still, flaws notwithstanding, over the last year we could at least say that Study Mode…existed…as something that students seeking cognitive repentance might activate to curb their wicked cheating ways. That is to say, if they clicked the little “+” in the prompt box, there they could find Study Mode, waiting for activation such that—and here again I quote from OpenAI’s press release—students would “think critically about their learning.”

Alas, critical thinking is dead, and so too is Study Mode for all intents and purposes. Two weeks ago, I debated James Donovan, OpenAI’s Head of Learning & Cognitive Outcomes Research, regarding the role of chatbots in education. As our conversation kicked off, moderator Alex Grodd expressed some befuddlement related to his inability to find Study Mode, and Donovan confirmed it is no longer a tool that ChatGPT users can directly access in the “vanilla model” (that is, normal ChatGPT). Instead, ChatGPT will now allegedly “detect” when students are trying to study and then automatically refrain from providing answers. To which I say:

Movie gif. Mike Myers as Doctor Evil in Austin Powers looks a bit disappointed with pouty lips as he says, “Right…”

Donovan did suggest that for “B2B” users of ChatGPT in education—which apparently means countries such as Estonia that have partnered with OpenAI to embed ChatGPT in their education systems (shudder)—Study Mode will remain an option. Cool, that’s nice for Estonian kids, but not really relevant to the 400 million students worldwide who are using vanilla ChatGPT every day.1 Donovan, a smart and engaging guy, thus ended up arguing—based on secret OpenAI data—that students really are “cognitively augmenting” their education. You can judge for yourself how persuasive that is:

Study Mode was always a ruse, of course, a PR exercise masquerading as pedagogical safeguard, but while it’s admittedly fun to dance upon its grave, something bigger is happening at OpenAI regarding its education posture. Remember last year when Leah Belsky, OpenAI’s VP of Education, promised us an “education moonshot”? I think she’s still employed there, but the entire “moonshot division” at OpenAI appears to have run up the curtain to join the choir invisible as the company desperately tries to figure out how to generate revenue in advance of its suddenly somewhat dicey IPO prospects.

To wit: a few weeks ago Denise Dresser, OpenAI’s Chief Revenue Officer and de facto COO at present, issued an org-wide memo that leaked—and boy, it’s hard to see how education fits into what they have planned going forward. Time for a brief fisking:

As we start Q2, I want to begin where we always should: with our customers. I have been spending time with leaders across our largest enterprises, most influential startups, and key venture firms. The message is clear. People are excited about what we are building, and they want a deeper view into our roadmap so they can plan with confidence and stay ahead of the market.

Enterprises, startups, and venture firms—but no mention of talking to students, who still comprise the majority of ChatGPT users (never forget this!). But students are not paying customers, and this memo is crystal clear about who really matters to OpenAI right now.

Enterprises buy business outcomes….They pay for higher revenue per employee, faster cycle times, lower support costs, and better execution.

Yep, this’ll map neatly to the culture of schools.

Our compute advantage sets us up to deliver continuous leaps in capability….Every step forward in compute lets us train stronger models, serve more demand, and lower the cost per unit of intelligence.

Oh yeah, teachers will love love love the idea of calculating the “per unit cost of intelligence.” Sign ‘em up.

The market has moved from prompts to agents. That shift is a massive opportunity for us.

Customers want systems that can reason, use tools, operate across workflows, and perform reliably inside real business environments.

Oh you don’t say! RIP, “prompt engineering,” long live AI agents, I guess. Does this mean we no longer need to provide professional development to teachers to improve their “AI literacy”?

I’ll remove the tongue from my cheek now, because I do believe this corporate reorientation “from prompts to agents” is a big deal, and I plan to write more about it soon. For now, though, I’ll just observe that while businesses may be lining up to have digital agentic hamsters scurrying about their databases, it’s not clear that normal humans share in that excitement. As teacher Stephen Fitzpatrick notes, “AI is being built for coders,” which is cool for them I suppose…but what about everyone else? Over to Elizabeth Lopatto in The Verge:

LLMs are, at best, an enterprise technology that may make certain kinds of data organization easier, or coding faster. This has almost nothing to do with most people’s lives. Dinking around with code is a hobby many tech people enjoy and one the rest of us simply don’t care about. Making it easier to write code doesn’t change that I don’t want to write code. I have other hobbies!

Me too, Liz, me too. And look, while I’m intellectually curious about AI, there is no part of me yearning to expose my private, personal data to AI agents. Not only do I not trust the technology, I’m not even sure what I’d want a coding agent to do. John Herrman with the Intelligencer recently ran into the same problem, with hilarious results:

Unfortunately, you must now confront the problem at the heart of every AI deployment, personal or corporate, fun or fatal, lark-driven or editorially minded: What is all this automation for?

This is a recurring theme when you try out new AI tools. You recognize that there’s a lot that might be done with them, but not much comes to you. You see this in the rise of AI coding tools, which you find extraordinarily impressive as you use them to … make yourself another … news reader? Notes app? Personal website, again?

You also dimly comprehend that in trying to understand your daily habits as a series of workflows with an eye on automation, you’re going through a similar set of motions as countless thousands of companies across the economy, some of whom see nothing but opportunity in AI — to cut costs and people, or to invest and grow — while others, fearing competition and obsolescence, rush to adopt AI without knowing what problems they need to solve, much less which ones the technology can handle. You identify on an emotional level with the doomed firms buying compute they don’t really know how to use.

Mercifully, humans are not firms, and students are not enterprises. And strangely enough, I’ll be (somewhat) relieved if OpenAI pivots to “B2B,” because it should mean less education malpractice from Belsky and team. As I note in the clip below, such harms are largely invisible in the moment, but eventually they will be made apparent. The good news is that concerned parents, students, educators, journalists, and others are observing this malpractice first-hand, and they do not like what they see.2

And they—we—are organizing to stop it.


1

To be completely fair—cue Miracle Max voice—Study Mode is only mostly dead. I say that because when I reached out to OpenAI’s press team to confirm its demise, the company informed me it can still be activated by typing “/” in the prompt box. Weirdly, that seems true for the web-version of ChatGPT but not the mobile app. Of course it’s a moot point because no one knows about this option in the first place, and OpenAI’s press person did not respond to my further inquiries about why Study Mode is confined to this sad digital purgatory.

2

My thanks to Jodi Carreon for supplying this video snippet—and she’s one of the organizers helping to stem the EdTech tide in schools. More details here.


Editing my LLM assisted Articles


Last year, I used AI to help me write articles. As I've mentioned before, it's convenient because it saves time. But the problem comes when you try to quote those articles back: whatever you think you wrote is not what's in there. I always cringe when I read them.

As I've said before, I'm rewriting those articles so that they capture my voice, and so that I can actually quote the real thoughts I had in mind at the time of writing. I will show you exactly what the article looked like before and after: from the prompt, to the final post, to the new edit.


Prompt: Feb 4th, 2025 on DeepSeek

I am writing a blog post, help me refine it . It should be a fun read that helps convince readers that building useless tools is part of the journey and career growth.

Programmers don't use calculators, they build them and abandon them 3 quarters completed.

The start of a project is always exciting. You have an brilliant idea, you have the skills to build it, and you have a blind spot. This blind spot propels you into a journey of excitement, setting up the project, deciding on frameworks, paying for AWS servers. You even get to test some of your ideas in a prototype version of your project.

As you keep building the project, and resolving the low hanging fruits, you realize that it takes more than the CRUD part to bring your project to life. This is where we start making excuses. The clearer the blind spot gets, the more we lose interest. Resistance is the secret to creating a collection of abandoned projects.

Early in my career, i found myself with a brand new life altering idea several times a week. My weekends were spent in front of a laptop, typing away on my IDE and testing an idea. I have to admit I often get distracted by creating meta code, like a ORM that should be ultra efficient, or a html parser that will allow me to use custom html tags that automatically render into components in real time. At the end, I have a robust framework with 3 quarters of an application that is getting less and less exciting.

But this is not a warning call to tell you your idea sucks and shouldn't work on it. It's a confession to tell you that those useless projects are exactly how developers hone their skills. We are tricked into wanting to change the world by our own mind. But the result is that we end up learning new marketable skills. I wrote about the importance of kids playing before, it helps them hone physical skills that you can't learn from a book. You have to climb the monkey bar at the park to develop the strength of your arms. In that same way, we need to build 3 quarters of a calculator to learn how to properly compartmentalize your util functions


DeepSeek "thought" for 38 seconds, and gave me a draft titled: "Why Building Useless Tools is the Secret Sauce of Developer Growth".

First Draft:

Title: Why Building Useless Tools is the Secret Sauce of Developer Growth

Hook:
Programmers don’t use calculators—they build them, abandon them three-quarters finished, and then accidentally invent the next big thing while trying to fix a typo in the README.


1. The Siren Song of a New Project

Every developer knows the rush: you’re struck by a “life-altering” idea (your 14th this week), and suddenly, you’re setting up repositories, debating frameworks, and buying AWS servers at 2 AM. The blind spot? You’re convinced this time, you’ll finish. Spoiler: You won’t. But that’s okay.

“The first 90% of the project takes 10% of the time. The remaining 10% takes another lifetime.”
—Ancient Developer Proverb (probably).


2. The Blind Spot: A Fuel for Creative Chaos

That initial blind spot isn’t a flaw—it’s rocket fuel. It lets you ignore the boring bits (user auth, error handling, documentation) and dive headfirst into the fun: prototyping wild features, inventing a “revolutionary” ORM named LazyLlama, or crafting custom HTML tags that’ll “change frontend development forever.” Sure, your enthusiasm fades faster than a free-tier server under load, but along the way, you’ve accidentally learned:
- How to optimize database queries (to make LazyLlama slightly less slow).
- Why shadow DOM exists (after your custom <sparkle-button> tag broke 14 legacy sites).


3. Resistance is Inevitable (and That’s the Point)

As the blind spot clears, reality hits: your app needs more than CRUD and caffeine to survive. Suddenly, you’re procrastinating by rewriting config files or alphabetizing your npm dependencies. This resistance isn’t failure—it’s a teacher. Every abandoned project is a masterclass in:
- Scope creep: “Wait, should the calculator also solve quantum physics?”
- Tooling: “I spent 3 days building a CI/CD pipeline… for a todo app.”
- Prioritization: “The login page can wait—let’s add animated emojis first.”


4. Playtime for Grown-Ups: How Useless Projects Hone Skills

Kids don’t climb monkey bars to “build upper body strength”—they do it because it’s fun. Likewise, developers don’t build half-finished tools to pad their résumés. We do it because playing with code is joyful. And just like kids, we walk away stronger:
- That abandoned Twitter-for-Cats app taught you websockets.
- The “ultra-efficient” ORM? Now you understand query optimization.
- The calculator left at 75%? You’ve mastered state management… and humility.


5. Embrace the Graveyard of Unfinished Ideas

Your GitHub is a museum of “almosts,” and that’s beautiful. Every skeleton in your code closet taught you something. Remember:
- Instagram started as a check-in app.
- Slack was a gaming company’s side hustle.
- Your Blockchain-Based Pet Rock project? It taught you smart contracts—and that not every idea needs to see daylight.


6. Keep Building (and Abandoning)

The next time guilt whispers, “You never finish anything,” laugh in its face. You’re not failing—you’re iterating. The world needs developers who’ve climbed the mountain of “useless” projects, survived the valleys of tech debt, and lived to code another day.

So go ahead: build that calculator. Abandon it at 75%. Then use what you learned to accidentally create something actually world-changing.

Final Thought:
The only thing better than a finished project? A dozen unfinished ones that turned you into a coding wizard.


Call to Action:
What’s your favorite abandoned project? Tag it #UselessButBrilliant and confess your coding sins. We’ve all got a LazyLlama in our closets.


I made some edits, asked it to remove the headings, and this is what I got:

Title: Why Building Useless Tools is the Secret Sauce of Developer Growth

Programmers don’t use calculators—they build them, abandon them three-quarters finished, and then accidentally invent the next big thing while trying to fix a typo in the README. Every developer knows the rush: you’re struck by a “life-altering” idea (your 14th this week), and suddenly, you’re setting up repositories, debating frameworks, and buying AWS servers at 2 AM. The blind spot? You’re convinced this time, you’ll finish. Spoiler: You won’t. But that’s okay.

That initial blind spot isn’t a flaw—it’s rocket fuel. It lets you ignore the boring bits (user auth, error handling, documentation) and dive headfirst into the fun: prototyping wild features, inventing a “revolutionary” ORM named LazyLlama, or crafting custom HTML tags that’ll “change frontend development forever.” Sure, your enthusiasm fades faster than a free-tier server under load, but along the way, you’ve accidentally learned how to optimize database queries (to make LazyLlama slightly less slow) or why shadow DOM exists (after your custom <sparkle-button> tag broke 14 legacy sites).

As the blind spot clears, reality hits: your app needs more than CRUD and caffeine to survive. Suddenly, you’re procrastinating by rewriting config files or alphabetizing your npm dependencies. This resistance isn’t failure—it’s a teacher. Every abandoned project is a masterclass in scope creep (“Wait, should the calculator also solve quantum physics?”), tooling (“I spent 3 days building a CI/CD pipeline… for a todo app”), and prioritization (“The login page can wait—let’s add animated emojis first”).

Kids don’t climb monkey bars to “build upper body strength”—they do it because it’s fun. Likewise, developers don’t build half-finished tools to pad their résumés. We do it because playing with code is joyful. And just like kids, we walk away stronger. That abandoned Twitter-for-Cats app taught you websockets. The “ultra-efficient” ORM? Now you understand query optimization. The calculator left at 75%? You’ve mastered state management… and humility.

Your GitHub is a museum of “almosts,” and that’s beautiful. Every skeleton in your code closet taught you something. Instagram started as a check-in app. Slack was a gaming company’s side hustle. Your Blockchain-Based Pet Rock project? It taught you smart contracts—and that not every idea needs to see daylight.

The next time guilt whispers, “You never finish anything,” laugh in its face. You’re not failing—you’re iterating. The world needs developers who’ve climbed the mountain of “useless” projects, survived the valleys of tech debt, and lived to code another day. So go ahead: build that calculator. Abandon it at 75%. Then use what you learned to accidentally create something actually world-changing.

The only thing better than a finished project? A dozen unfinished ones that turned you into a coding wizard.


I've retitled it, removed the headings from the draft, edited it down, and published this version:

Article Published on March 6th, 2025:

Why Developers Build Unfinished Calculators

  • It's the only way to learn

Every developer knows the rush. You're driving when you're suddenly struck by a “life-altering” idea (your 14th this week). At the next red light, you record a voice memo, avoiding eye contact with what clearly looks like a cop’s car. At 2 AM, you wake abruptly, remembering the recording. Now you’re setting up repositories, debating frameworks, and buying AWS servers in the middle of the night. The blind spot? You’re convinced this time, you’ll finish.

Spoiler: You won’t. But that’s okay.

Programmers don’t use calculators. They build them, abandon them three-quarters finished, and then accidentally invent the next big thing while trying to fix a typo in the README.

That initial blind spot isn’t a flaw. It’s rocket fuel. It lets you ignore the boring bits (user auth, error handling, documentation) and dive headfirst into the fun: prototyping wild features, inventing a “revolutionary” ORM named LazyLlama, or crafting custom HTML tags that’ll “change frontend development forever.” Sure, your enthusiasm fades faster than a free-tier server under load, but along the way, you’ve accidentally learned how to optimize database queries (to make LazyLlama slightly less slow) or why shadow DOM exists (after your custom <sparkle-button> tag broke the browser's rendering engine).

As the blind spot clears, reality hits: your app needs more than CRUD and caffeine to survive. Suddenly, you’re procrastinating by rewriting config files or alphabetizing your npm dependencies. This resistance isn’t failure. It’s a teacher. Every abandoned project is a masterclass in scope creep (“Wait, should the calculator also solve quantum physics?”), tooling (“I spent 3 days building a CI/CD pipeline… for a todo app”), and prioritization (“The login page can wait—let’s add animated emojis first”).

Kids don’t climb monkey bars to “build upper body strength.” They do it because it’s fun. Likewise, developers don’t build half-finished tools to pad their résumés. We do it because playing with code is joyful. And just like kids, we walk away stronger. That abandoned Twitter-for-Cats app taught you websockets. The “ultra-efficient” ORM? Now you understand query optimization. The calculator left at 75%? You’ve mastered state management… and humility.

Your GitHub is a museum of “almosts,” and that’s beautiful. Every skeleton in your code closet taught you something. Instagram started as a check-in app. Slack was a gaming company’s side hustle. Your Blockchain-Based Pet Rock project? It taught you smart contracts and that not every idea needs to see daylight.

The next time guilt whispers, “You never finish anything,” laugh in its face. “You’ve got the wrong fellow,” you answer. You’re not failing, you’re iterating. The world needs developers who’ve climbed the mountain of “useless” projects, survived the valleys of tech debt, and lived to code another day. So go ahead: build that calculator. Abandon it at 75%. Then use what you learned to accidentally create something actually world-changing.

The only thing better than a finished project? A dozen unfinished ones that turned you into a coding wizard.


It sounds very much like any LLM's output, and I couldn't stand reading it. At the time, I was trying to save time under my heavy schedule of writing every other day for a whole year. But this is what I ended up with. If you read it, it does capture the idea I was trying to share.

As far as being functional goes, it did exactly what it was supposed to do. But it wasn't my human experience with the subject. In my new edit, I've removed the things that don't sound like me, the phrasings that feel awkward to me. I'm happy with the result. It's not a banger, but it captures my sentiment on why developers build calculators.

Read the edited article here (May 1st, 2026)
