
AI Enables You To Do What You Already Can Do

AI is a talent multiplier, not a substitute.
mrmarchant · 25 minutes ago

You paid for it, you should be comfortable in it


A friend of mine bought a Tesla Roadster back in the early 2010s. At the time, spotting a Tesla on the road was a rare event. Maybe even occasion enough to stop and take a picture. I never got the chance to photograph one, let alone drive one, until I met this new friend recently. This was my chance to experience the car firsthand.

We walked to the parking structure to see it. As soon as he opened the door, something looked... off. On the outside, it was a pristine, six-figure roadster. But the inside looked completely custom. Not "custom" in the sense of a professional shop install, but more like the driver himself grabbed a hammer and chisel and made it his own.

First, the driver's seat had been altered. It was much lower than usual and didn't match the passenger seat. My friend stands 6'7", and the Roadster is a tiny car. He physically couldn't fit, so he modified the seat rails to lower it. But that fix created a new problem: the door armrest now dug into his hip. So, he took a file to the interior panel, shaved it down, and 3D printed a smaller, ergonomic armrest. He even 3D printed a cup holder for the passenger side so his coffee was within reach.

To me, the idea of taking a Dremel or a file to a $100,000+ car was unimaginable. You'd have to be crazy to do it.

He caught the look on my face and shrugged. "Hey, it's my car. I paid for it. I intend to be comfortable in it."

I had never thought of it like that, and the sentiment stuck with me. Recently, when I read an article by Kent Walters about filing down the corners of his MacBook, those same feelings resurfaced. My work MacBook has edges so sharp that I've often felt like I was slicing my wrist on the chassis. I treated this as a design flaw I had to endure. But not Kent. He treated it as an obstacle to be removed: he literally filed down the corners of his laptop to ensure the machine he uses every day was comfortable.

I may not have the guts to file down my work-issued MacBook, but I'm no stranger to customization... in software. I modify my tools constantly. I spend days tweaking my IDE, remapping keyboard shortcuts, and writing custom scripts until the software is unrecognizable to anyone else on my team. I don't think twice about rewriting a config file to make the tool fit my brain.
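As a sketch of the kind of throwaway personal tooling I mean (the editor, config path, file format, and key chords below are all invented for illustration; no real product is assumed), this is my software equivalent of taking a file to an armrest:

```python
#!/usr/bin/env python3
"""One-off personal script: rewrite an editor's (hypothetical) JSON
keybindings file so the default chords I find awkward become the ones
my hands actually expect."""
import json
from pathlib import Path

# Hypothetical config location and my preferred remappings.
CONFIG = Path.home() / ".config" / "myeditor" / "keybindings.json"
REMAP = {
    "ctrl+shift+p": "ctrl+p",   # command palette on fewer keys
    "ctrl+k ctrl+c": "ctrl+/",  # toggle comment with one chord
}

def remap_bindings(bindings: list[dict]) -> list[dict]:
    """Return a copy of the bindings with any chord found in REMAP
    replaced by my preferred one; untouched bindings pass through."""
    return [
        {**b, "key": REMAP.get(b.get("key"), b.get("key"))}
        for b in bindings
    ]

if __name__ == "__main__" and CONFIG.exists():
    bindings = json.loads(CONFIG.read_text())
    CONFIG.write_text(json.dumps(remap_bindings(bindings), indent=2))
```

Nothing here is clever; the point is that it takes ten minutes, serves exactly one user, and no one else on the team ever needs to understand it.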

When I was a kid, I always had a screwdriver around, fixing devices that weren't really broken. On the home computer, I modified everything. I once deleted all the .ini files to improve performance. It didn't work, but it led to a fruitful career.

But somehow, when it comes to expensive hardware now, I freeze. I treat the physical object as a museum piece to be preserved. I bought a docking station to banish the laptop to a shelf, using an external mouse and keyboard to avoid touching the sharp chassis. I built a complex workaround to accommodate the tool, rather than performing the simple, brutal act of modifying the tool to accommodate me.

We treat our physical tools as if they are on loan from the manufacturer.

You'll see a musician buy a vintage guitar but refuse to adjust the action, terrified of ruining the "collector's value." Meanwhile, the working guitarist has sanded down the neck and covered it in stickers because it feels better in their hand. The software engineer accepts the default keybindings to avoid "bad habits," while the power user creates a layout that doubles their speed.

If you own a tool, whether it's a car, a computer, or a line of code, you own the right to change it. The manufacturer designed it for the "average" user, but you are a specific human with specific needs.

Remember grandma's couch in the living room? It had that plastic cover on it. It was so uncomfortable, but no one dared to remove it. The plastic was there to preserve the sofa. No one got to enjoy it; instead, everyone accommodated the couch just to preserve a value that no one ever benefited from. Don't let the perceived value of an object stop you from making it truly yours. A tool with battle scars is a tool that is loved.

mrmarchant · 37 minutes ago

To teach in the time of ChatGPT is to know pain


I’ve been teaching college Earth science courses as a part-time faculty member for a long time now, all while juggling other jobs. I started because it was enjoyable; no one gets into this line of work for the famously poor pay or complete lack of job security. Working with students is just one of those genuinely fulfilling experiences, addictive enough that they ought to warn people about it.

But thanks to generative AI, it has become mostly miserable, at least in certain settings.

For the last few years, I’ve been exclusively teaching asynchronous online courses, meaning recorded videos rather than live sessions. These have always been a bit more challenging than face-to-face classes, where you have a greater ability to keep the students on track. If a student doesn’t have to show up in a room for an hour at a scheduled time and no one can see their involuntary facial expressions when they don’t understand something, the probability increases greatly that they’ll just… fall off.




mrmarchant · 55 minutes ago

An illustrated guide to resisting "AI is inevitable" in education


1. Ask the AI-in-education enthusiast to clarify their premise.

(Slide created by Jane Rosenzweig, Director of Harvard College Writing Center)

2. Ask the AI-in-education enthusiast if they are familiar with recent research indicating that generative AI leads to widespread “cognitive surrender.”

Shot:

“Conceptually distinct from cognitive offloading, which involves strategically outsourcing a discrete task to an external tool (e.g., using a calculator), cognitive surrender represents a deeper abdication of critical evaluation, where the user relinquishes cognitive control and adopts the AI’s judgment as their own….

Across our studies, we observe that when System 3 [i.e., AI] was available, people readily engaged it and frequently adopted its answers. This shift reflects a reallocation of cognitive control rather than mere effort saving. System 3’s fluent, confident outputs are treated as epistemically authoritative, lowering the threshold for scrutiny and attenuating the metacognitive signals that would ordinarily route a response to deliberation. In the case of cognitive surrender, there is a shift in the locus of control, with an external system (System 3) occupying the default position.”

(Paper here https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6097646, with my emphasis)

Chaser:

(Paper https://arxiv.org/abs/2505.01106, BlueSky thread here)

3. If you feel the need to pile on with research, consider citing this recent report from Stanford showing the lack of strong causal evidence to support the use of AI in education.

“Research on how AI impacts K-12 students and educators is still extremely limited.

“As of October 2025 the AI Hub for Education Research Repository contained over 800 academic papers relevant to AI in K-12 education. Our review found that only a small subset (20 papers) produce strong causal evidence. Causal evidence provides the strongest basis for estimating how a tool impacts students and educators. The current causal research is still very limited: we did not identify any high-quality causal studies in K-12 settings in the U.S. for students and very few for teachers.

Students

  • Immediate gains with access: AI tools significantly improve student performance on math practice, programming projects, and writing tasks while students have active access to the technology.

  • Short-term boost, uncertain transfer: AI improves performance with access but when assessed independently without AI support, effects are mixed.

  • Easier doesn’t mean better: AI tools can alleviate students’ cognitive burden and foster positive experiences in learning, but can be at the expense of deeper thinking.

  • Pedagogical design matters: Tools designed with pedagogical guardrails (such as AI chatbots for tutoring that provide step-by-step reasoning instead of direct answers) show more promise than general purpose AI tools.”

(source, my emphasis added re lack of causal evidence)

4. Ask the AI-in-education enthusiast if they are familiar with any of the recent efforts led by students pushing back hard on the intrusion of AI into their education.

a. In Pennsylvania

“Through its countless new programs and AI-centered events, Penn has positioned AI as an inescapable future that we all must accept in order to achieve success. There is no doubt that AI is part of the current occupational landscape, and we will certainly encounter it long after we graduate. Nevertheless, we attend this institution to develop hard skills, question the world around us, solve problems, produce new ideas, and the ability to think for ourselves. With the University forcing AI into our learning every chance it gets, do we end up gaining knowledge or cheat codes?

“The irony is that as Penn pours endless money and energy into AI advancement in its attempt to get ahead, the University is only quickening its own demise. AI cannot coexist with education — it can only degrade it. As technology advances and workers are replaced by machines, schools are some of the only places we have left to explore and wrestle with human thought. With our own university leading the charge, AI is now corrupting those few sacred spaces and leaving us with nowhere to engage in true scholarship.”

(Source, with my emphasis added)

b. In Colorado

“In an online survey conducted by Flynn Zook, a CU Denver student, nearly 300 respondents weighed in on the agreement. Fewer than 10 expressed clear support, while a small number said they were undecided. Many respondents raised concerns about environmental impact, intellectual property and the potential use of tuition dollars to fund the initiative.” (Story link)

c. In Ohio

(Source)

5. Politely point out that Sal Khan, perhaps the most prominent advocate for the capacity of AI to “revolutionize” education, has recently changed his tune.

“‘For a lot of students, it was a non-event,’ Khan told me recently about his eponymous chatbot, Khanmigo. ‘They just didn’t use it much.’”

“Kristen Musall, a geometry teacher at Hobart High, gave Khanmigo a try when it first rolled out. Musall appreciated its encouraging, teacher-like tone, but she found that students didn’t really care for the bot…. Musall no longer uses Khanmigo in her class. She says there’s been more enthusiasm for the product among administrators than teachers in her school.

“Kristen DiCerbo, the organization’s chief learning officer, said AI can only respond to students based on what they ask. And it turns out, she said, ‘Students aren’t great at asking questions well.’”

(From Chalkbeat’s Matt Barnum)

6. Direct the AI-in-education enthusiast to the PureGenius website to see if they get the joke.

(source; feel the future)

7. Ask the AI-in-education enthusiast if they are familiar with the broader pushback against the intrusion of education technology into schools led by educators and parents.

“McPherson Middle School, about an hour’s drive from Wichita, is at the forefront of a new tech backlash spreading in education: Chromebook remorse.

“Schools in North Carolina, Virginia, Maryland and Michigan that once bought devices for each student are now re-evaluating heavy classroom technology use.

“Now children’s groups and educators concerned about screen time are turning their attention to school-issued laptops and learning apps. Parents are flocking to support efforts, like Schools Beyond Screens and the Distraction-Free Schools Policy Project, to vet and limit school tech.

“Sarah Garcia, also 13, said spending less time online had prompted students to talk more. ‘Since we don’t have our Chromebooks in front of our face,’ she said, ‘most people now interact with their, like, peers and stuff.’”

(source)

8. Gently remind the AI-in-education enthusiast that we have evidence in our own lifetime that highly addictive products marketed to children that cause serious harm are something we can address through policy and norms.

(source)

9. If the AI-in-education enthusiast has the audacity to cite f***ing AlphaSchool as a counterexample and “proof of what’s possible,” liberally reference any or all of the myriad reasons this is one of the most embarrassing possible arguments they could make.

“Former Alpha School employees told me that the company’s increasing reliance on generative AI in every aspect of its operation, as well as the constant monitoring and tracking of every student’s mouse movements, is making students anxious and does not always provide the quality of education Alpha School advertises to parents.

“‘All educational content is obsolete. Every textbook, every lesson plan, every test, all of it is obsolete because gen AI is going to be able to deliver a personalized lesson just for you,’ Joe Liemandt, Alpha School’s ‘principal’ and the founder of Trilogy, the company that owns many of the apps used by Alpha School, said in a podcast interview published last year.

“‘When a student requires help with additional questions, the chatbot fails to identify which specific question is being addressed,’ an internal Alpha School document outlining issues with AlphaRead says. ‘Accuracy of the content provided by the [AI] tutor is a concern. There are instances where it not only delivers incorrect answers but also provides convincing yet flawed justifications. Despite raising multiple queries about a particular answer, the chatbot erroneously confirmed an incorrect option as correct.’”

(source)

From teacher Michael Pershan:

“Alpha is not trying to provide the best, most ambitious math or ELA education possible according to conventional understandings of that term. If they were, they’d keep studying ELA/math in the afternoon. Instead, their goal is to minimize the time spent on core academics while maximizing skills.

“This is unusual! This is not what most schools are trying to do!

“What’s most novel about Alpha School and Math Academy is their fundamental orientation towards K-12 schooling. The goal, quite expressly, is to minimize it and move on. Move on to what?”

(source)

From teacher Dylan Kane:

[Dylan explains how AlphaSchool measures its “success” by demonstrating how much faster its students improve on the NWEA MAP, a mediocre-to-shitty assessment of student learning, compared to the national average. In 2024–2025, using the mediocre-to-shitty ed-tech product iXL, AlphaSchool students outgained the national average by 2.6x. Dylan reports how students are doing this year using AI: 2.5x on math and 2.8x on reading.]

“Again, let’s take these numbers at face value for a minute. Last year, the program was mostly iXL + a culture laser-focused on motivating students. There was little to no generative AI involved. This year, Alpha School has overhauled their academics, released their own platform, and incorporated generative AI throughout. They are finally doing what they say they are doing: AI-driven schooling. And the results are…more or less the same?

“This is completely fascinating to me. 100 million dollars, tons of hype on the internet, grand claims about the future of education. And the results haven’t budged from bribing kids to try hard on iXL?”

(source)

10. If all else fails, try appealing to the poetry of human existence. But don’t hold your breath.

(Source)


Our days are indeed precious on this earth. Today, The New York Times published a long story about what happened to my father due to his reliance on AI for medical guidance. I am very grateful to reporter Teddy Rosenbluth for sharing his story with the world, and to all of you for your enduring support.



mrmarchant · 56 minutes ago

Quoting Bryan Cantrill


The problem is that LLMs inherently lack the virtue of laziness. Work costs nothing to an LLM. LLMs do not feel a need to optimize for their own (or anyone's) future time, and will happily dump more and more onto a layercake of garbage. Left unchecked, LLMs will make systems larger, not better — appealing to perverse vanity metrics, perhaps, but at the cost of everything that matters.

As such, LLMs highlight how essential our human laziness is: our finite time forces us to develop crisp abstractions in part because we don't want to waste our (human!) time on the consequences of clunky ones.

Bryan Cantrill, The peril of laziness lost

Tags: bryan-cantrill, ai, llms, ai-assisted-programming, generative-ai

mrmarchant · 9 hours ago

That’s a Skill Issue


I quipped on BlueSky:

It’s interesting how AI proponents are often like "skill issue" when the LLM doesn't work like someone expects.

Whereas when human-centered UX people see someone using it wrong, they're like "skill issue on us, the people who made this"

This is top of mind because I’ve been working with Jan Miksovsky on his project Web Origami and he exemplified this to me recently.

I was working with some part of Origami and I was “holding it wrong”. I kept apologizing for my misunderstanding and misuse. And Jan’s posture as tool-maker — rather than being like “Yeah, that’s a skill issue on your part, but you’ll get there” — was one of introspection. He took the time to consider that perhaps the technology he was building was not properly aligning with my expectations as a user (or with human-centered factors more generally). And he graciously explained that perspective to me, making me feel — well, not like an idiot.

My inability to find the results others claim with AI often has me saying either 1) “these claims are obviously BS”, or 2) “I guess it’s a skill issue on my part”.

And it kinda sucks to be saying (2) to yourself all the time, regardless of the technology.

A tech-centered approach treats the technology as a fixed point: if you don’t get what you want, you’re not using it right. The burden is entirely on you, the user, to learn the technology’s language.

Whereas a human-centered approach flips that: the technology exists to serve people as they actually are, not as we wish them to be. Confusion can be seen as a design failure, not a user failure.

What’s interesting is that I think a lot of __insert technology here__ advocates would likely claim they’re “human-centered”. But when the response to failure is “learn the tech better”, it introduces a skill ceiling, which naturally creates a priesthood of people who are “in-the-know” about the right incantations to make a technology work.

I’ve used AI as an example in this post, but it’s not really about AI specifically. This seems to be generally applicable, AI is just the current flavor.

I don’t have a big takeaway here. Just reflecting.

I love human-centered technology and technologists.



mrmarchant · 9 hours ago