
AI Can't Fix Student Engagement

[Image: Discouraged boy with his head down on a textbook. ©Karola G via Canva.com]

Powerful generative AI tools are suddenly everywhere, embedded in many of the learning platforms students use daily. For students, they offer an irresistible shortcut: why write the essay, solve the complex math problem, or read the chapter when a chatbot can do it for you in seconds? Schools across the U.S. are scrambling to adapt. Blue books are back. So too are in-class exams and No. 2 pencils. Running student work through anti-AI checkers is standard practice. These are all pragmatic strategies by harried educators who, along with families, are on the front lines, mediating the next tidal wave of technological innovation for their students.

But these practical solutions miss a major underlying issue. The majority of American students are disengaged at school — a trend that began long before generative AI arrived. According to the U.S. census, only one in three students are highly engaged in school, a number that has been stubbornly consistent over the last decade. And while 65% of parents believe their 10th graders love school, only 26% of students actually say they do.

[Image: Graph comparing how students feel about school vs. how their parents think they feel]

AI didn’t create this crisis, but it raises the stakes considerably. AI chatbots promise to reduce the “friction” of learning by teaming up with the student 24/7. But this friction isn’t a flaw that needs to be engineered away; it’s the whole point. The effort of working something out, of sitting with a challenge and finding a way through, is an essential part of the learning process. It’s what keeps students engaged, and engagement is both a prerequisite for real learning and a predictor of outcomes that reach far beyond the classroom, including higher graduation rates and life aspirations, and lower rates of depression and substance use disorder. In a world saturated with AI, the capacity to learn — to cultivate genuine curiosity, push through difficulty, and develop independent thinking — is the essential human skill. And it’s one we can still help students build.

The good news is that student engagement isn’t a mystery, and parents and teachers have more influence over it than they realize. When students have the agency and freedom to follow their own curiosity, engagement follows naturally. The key is knowing how to help kids get there.


The Four Modes of Student Engagement

Academics agree that a combination of several elements shape student engagement:

  • What students do (e.g., showing up, turning in homework)

  • What students think (e.g., making connections between classroom learning and experiences out of school)

  • What students feel (e.g., showing interest in what they are learning and enjoying school)

  • Whether students take initiative (e.g., proactively finding ways to make learning more interesting, such as asking to write a paper on a topic they love versus the one that is assigned; this, in particular, is an essential skill in an AI-infused world)

Because much of this is internal, it can be hard to see. So teachers and parents often rely on external behavior and outcomes as their gauges. But grades and attendance only tell part of the story — and they lead well-meaning parents to encourage compliance rather than real engagement.

That’s where a clearer framework helps. In our research for our recent book, The Disengaged Teen, we identified four distinct modes of student engagement and disengagement in school: Passenger, Achiever, Resister, and Explorer. These modes give teachers and parents language and a deeper understanding of where their students get stuck, and offer practical tools to help reignite their motivation.

  • In Passenger mode, students are coasting — doing the bare minimum. Parents of kids in Passenger mode often get a one-word response from their kids when they ask about school: “boring.” Students in Passenger mode may rush through homework and barely study for exams, yet some still get straight As. For these students, school can feel too easy, offering little challenge or excitement. Their coping strategy is to check out and focus on friends, gaming, sports — anything more interesting. For students stuck here, AI is an easy shortcut to finish homework faster and get back to hanging out.

  • Students in Achiever mode are trying to get a gold star on everything — academics, extracurriculars, service, you name it. They are driven and impressive but often exhausted. Fear of failure haunts these students, and many wilt when their performance dips even slightly. A common frustration among students in Achiever mode is, “My teacher didn’t tell me exactly what to do to get an A.” A B+ can trigger alarm, extra studying, late nights, and lost sleep. Achievers are focused on the end goal — the grade — not the learning process, and for many, cheating was already common before AI came along. Now, chatbots make it even easier to power through their pile of work.

  • Students in Resister mode use whatever influence they have to signal — to both teachers and families — that school isn’t working for them. Some actively avoid learning. Others disrupt their learning by derailing lessons and acting out, becoming the “problem child.” But they have something going for them that students in Passenger mode do not: agency. They aren’t taking their lot lying down; they are influencing the flow of instruction, though in a negative way. If given the chance, we found that those in Resister mode can move to Explorer mode — the final and most engaged mode — more quickly than students stuck in Passenger or Achiever mode.

  • The peak of the engagement mountain is Explorer mode, where students develop the willingness and desire to learn new things. Here students’ agency meets their drive. Their involvement runs deep and they find meaning in the effort required to learn. Explorer mode includes the active curiosity Jonathan Haidt calls “Discover Mode.” Students in Explorer mode feel confident enough to take creative risks, generate their own ideas, and solve problems in the classroom. When a student in Explorer mode is asked “How was school?” their answer is not a monosyllabic “fine” but an excited breakdown of how tornadoes work or how they calculated Taylor Swift’s net worth using newly acquired math skills.

A key way to build engagement is to give students some autonomy in the classroom. Across 35 randomized controlled trials in the U.S. and 17 other countries over three decades, when teachers give students opportunities to engage by having a small say in the flow of instruction — such as choosing among homework options, providing feedback at the end of a lesson, or asking questions about their curiosities — learning, achievement, positive self-concept, prosocial behavior, and numerous other benefits increase. To develop initiative, which builds agency, kids need to practice it. For that they need to get into Explorer mode.

In academic terms, this is agentic engagement — the ability and desire to initiate learning, express preferences, investigate interests, solve problems, and persist in the face of challenges. It is the foundation of a meaningful life and the life skills required to navigate an AI-saturated world.

[Image: The four modes of student engagement, plotted on a grid of agency vs. engagement]

The modes of engagement are dynamic: students move around them all the time based on their environments, their “efficacy” — i.e., how successful they think they can be — and their emotions. When students are given the freedom to explore, they often take it, and with it they can develop agency. Parents and educators also play a massive role influencing what mode kids show up in, often without knowing it.


The Case of Kia

Kia, one of the students we interviewed for our book, is a classic example of how agency can pull a student back from the disengagement brink. In elementary school, she was vibrant, reading incessantly and debating Percy Jackson plot points with her dad. But by middle school, she was bored, stifled, and completely checked out — stuck squarely in Passenger mode.

Worried about her profound disengagement, a creative teacher tried an unusual strategy: he invited her to join a learner advisory panel to tell the school board what it really felt like to sit at a desk all day. As Kia put it, her brain flipped from “This is useless, and I hate everything,” to “Hold on, maybe I have a say.”

At home, her father — who had gone straight to work after high school — refused to let her intellect become dormant. He treated her questions as worthy, whether she was asking why water towers are round or for a definition of “pedagogy.” He treated her like a thinker even when school made her feel like a failure.

When her school eventually introduced “studios,” where students design their own projects, Kia leaned into her love of storytelling, creating a podcast on mythology and an escape room about presidential assassinations. That agency rewired her, allowing her to fully enter Explorer mode. Even when she later landed in dry, lecture-heavy college classes, she still thrived. “I learned that you can learn anything. You just have to know how you work and how to teach yourself.”

This shift from compliance to choice, from helplessness to agency, supported by her dad and her teachers, took Kia from Passenger mode to Explorer mode and helped her rediscover the curiosity and drive she had in elementary school.


The Exploration Gap

Our Brookings–Transcend study found that fewer than 4% of students in middle and high school regularly had in-school experiences that supported Explorer mode.

The shift in student engagement during the transition from 5th to 6th grade — when most students in the U.S. enter middle school — is striking. More coasting, less achieving, more resisting, and less exploring all characterize the move from elementary to middle school.

Why does this happen? One key factor is lack of agency and a school system that, for most learners, undermines Explorer mode. At a moment of peak brain development, when young people seek meaning about themselves and the world, many are shuffled through classes like factory workers, pounded with content that feels standardized and irrelevant, and pressured to win a race they don’t want to run. Despite all the energy they expend, they increasingly feel like they have little say in how they spend their days.

Some have proposed that generative AI could unlock students’ motivations and interests. But recent global Brookings Institution research examining the benefits and risks of AI on student learning found that current use — especially open-ended discussion with AI chatbots and AI “friends”— undermines students’ cognitive development, motivation to learn, and, ultimately, engagement with the material.

The solution is not a better algorithm. It’s centering human connection, creativity, and student agency, which parents and teachers are uniquely positioned to do.

Six Actions to Get Kids into Explorer Mode

When kids are in Explorer mode they are motivated to learn, and motivation is crucial. “Biology doesn’t waste energy,” says Mary Helen Immordino-Yang, a psychologist and neuroscientist at the University of Southern California. “We don’t think about things that don’t matter.” Students are motivated by having authentic opportunities to contribute and learn meaningful things.

Families and schools can work together here. When adults at home and educators in school work collaboratively, kids’ outcomes routinely improve. Schools are ten times more likely to improve when this strong collaboration exists. Today, there are six actions families and schools can take to help our children have more Explorer moments.

1. Model the thrill of learning.

“Curiosity is contagious,” writes author Ian Leslie. “So is incuriosity.” It may sound simple, but one of the most powerful ways parents at home can support student engagement is by letting their kids see them having Explorer moments of their own.

John Hattie, a professor from the University of Melbourne, calls this being your child’s “first learner,” namely modeling the thrill of learning in everyday activities. This, much more than parents’ well-intentioned hovering around homework completion, helps students engage and do well in school. When Hattie examined the effects of parental involvement on student achievement across almost two thousand studies covering over two million students around the globe, he found “[w]hen parents see their role as surveillance, such as commanding that homework be completed, the effect size is negative.” In other words, parental nagging and controlling makes things worse, not better.

2. Know your child, know their mode.

Parents and educators will have more success getting their kids and students into Explorer mode if they truly know the child they have in front of them, and tailor their support accordingly. The modes are dynamic, and kids move between them, but when kids get stuck in one mode it can become an identity. The kid in Passenger mode becomes the “lazy kid,” the kid in Achiever mode the “smart kid,” the kid in Resister mode the “problem kid.”

Kids don’t need a label; they each need a slightly different nudge to help move them into Explorer mode. The Engagement Toolkit in our book provides a host of practical strategies unique to each mode. For example, the kid who frequently procrastinates because they’re stuck in Passenger mode may need help developing study and planning skills. The student deep in Achiever mode may need help learning that failure is not the end of the world, something they can develop by taking small risks. Kids in Resister mode often need help developing a pathway out of the rut they are in — what Daphna Oyserman calls a vision of a “future possible self” — as well as a plan to get there.

3. Support ways for young people to make authentic contributions.

Adolescence is a period of profound opportunity as well as vulnerability. Teens ask important questions like “Who am I in the world? What matters to me? Do I matter? What kind of future can I build?” They need actual experiences to get data on the answers and build the muscles of being a respected contributor to a community. Cultural anthropologists call the process of gaining that status “earned prestige.” Too often young people default to social media to seek this status. Families and schools can both counteract that by giving kids opportunities for real-life contributions. When families rely on students to help get dinner made, bedrooms cleaned, the dog walked, or a meal delivered to an elderly neighbor, it helps young people know they matter for more than their latest test grade.

At school, when kids engage in experiential projects like “What happened in that abandoned factory?” or “Who lies beneath a headstone marked with only a single letter?” — as they did in Amenia, New York — they do work that matters in their communities. Connecting learning experiences to real life, from asking questions about how their local economy works to their community’s cultural norms, gives students opportunities to make meaning of their assignments while learning about themselves and their place in the world.

4. Never take away extracurriculars because of poor academic performance.

Too often, schools make participation in extracurricular activities contingent on good grades. A nationally representative survey found that nearly 60 percent of students earning all As participate in arts-related extracurriculars, compared to just over 30 percent of students with Cs and Ds. A similar divide exists in sports. The logic may seem sound — a child struggling with algebra doesn’t need more time on the basketball court. But this approach is misguided.

Struggling students need something to be excited about: a place to explore, connect, and shine. When students discover their “spark,” as the late youth advocate Peter Benson called it, that passion can sustain them through life’s ups and downs and boost engagement in school. If that spark is athletics or theater or music and that gets taken away because of academic performance, most kids in Passenger or Resister mode will not suddenly shift to Explorer mode so they can participate again — they’ll become even more disengaged. To foster more Explorer moments, schools should make extracurricular participation contingent on attendance and positive behavior — not grades.

5. Help students manage technology.

Another key is helping students escape the seductive power of tech distractions. Parents can set limits for children’s technology use at home, which can help ensure they are well rested enough to tap into their curiosity and creativity. They can advocate that their students’ school and community limit cell phone and social media use, as Jonathan Haidt recommends in The Anxious Generation. Parents can also check their own tech use at home. Children learn from what we do, more than what we say.

6. Hold a workshop or book group on the four modes of engagement.

You can’t fix what you can’t see. Only a third of 10th graders report having opportunities to develop their own ideas, compared to 69% of parents who think they do. A key aim of our book is to make the invisible — learning and engagement — visible, so we can develop strategies to improve it.

According to our research, teachers and parents want language and tools to talk about learning without creating friction. The four modes can unlock the conversations we need to have with our children and students. Parents can suggest this as a topic for a parents’ evening, a discussion with educators, or back-to-school night. So, whether it’s a book club (we have a free study guide) or a webinar, workshop, keynote, or focus group, sparking dialogue about student engagement is a great first step to boosting it.

Conclusion

Despite its powerful role in learning, student engagement is rarely at the center of education discussions. Instead, grades and attendance dominate the conversation. Our hyper-focus on outcomes over inputs is a mistake. Instead of fixating on the leaves of the tree — test scores and grades — we need to tend to the roots, the invisible network of curiosity and motivation that will keep the tree growing.

In an increasingly AI-saturated world, Passenger mode is a seductive trap — a path where thinking is outsourced and agency atrophies. More than ever, we must prioritize student engagement and treat Explorer mode not as a nice-to-have, but as an essential life skill. That means giving kids the agency to follow their genuine curiosity, take creative risks, and find meaning in their own learning. Every student is capable of Explorer mode; we just have to help them get there.

After Babel is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.


I Don’t Believe This Finding That AI Is Saving Teachers Six Weeks per Year


EdTechnical is currently announcing the winners of their forecasting competition, where entrants made predictions about five different questions in education. I was a judge for the “Teaching Profession” track which asked:

By the end of 2028, what percentage of non-interpersonal teacher activities (lesson planning, grading, and parent communication) will teachers routinely delegate to AI systems?

My own prediction here would have landed somewhere quite a bit south of 50%, mostly because I think the interpersonal and non-interpersonal tasks of teaching are pretty tough to disentangle. Wess Trabelsi won with a prediction of 65%. (Congrats, Wess. I liked your entry and described it to the other judges as “a wild ride.”)

[Image: Gallup headline: “Unlocking Six Weeks a Year With AI”]

Nearly every entrant, including Wess, relied on a survey from the Walton Family Foundation and Gallup that surveyed teachers on their AI use and time saved. The big finding of the report:

Teachers who use AI weekly save 5.9 hours per week — the equivalent of six weeks per school year. Currently, about three in 10 teachers are using AI at least weekly, with more frequent users experiencing greater time savings.
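The “six weeks” equivalence is simple arithmetic, worth making explicit. The 5.9 hours and the 37.4 contracted weeks are the report’s own figures; everything else below is just multiplication:

```python
# Gallup's reported figures (Walton Family Foundation / Gallup survey)
hours_saved_per_week = 5.9
contracted_weeks_per_year = 37.4  # average, per the report's methods section

# Annual hours saved by a weekly AI user
total_hours = hours_saved_per_week * contracted_weeks_per_year
print(round(total_hours, 1))  # 220.7

# Calling ~220 hours "six weeks" implies a work week of about:
implied_week = total_hours / 6
print(round(implied_week, 1))  # 36.8
```

In other words, the headline converts roughly 220 self-reported hours into “six weeks” by assuming a work week of about 37 hours.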

I want to offer a few reasons why I suspect that finding is wrong. In fact, if it is anywhere close to right, if this technology (which I regard as “neat”) can effectively shave off six weeks of teaching work (which I regard as some of the most taxing that exists), then I have drastically misunderstood AI or teaching or both.

Here is why I’m skeptical of the finding.

People aren’t great at self-reporting the time they save with AI.

The Gallup data is entirely self-reported. Teachers were asked how often they used AI and how much time they saved on different teaching tasks. Meanwhile, the AI research firm METR went beyond self-reports with a similar study of software developers. They randomized a set of tasks between “AI allowed” and “AI disallowed” groups. They asked for self-reports of completion time, but they also measured actual completion time.

[Image: Graph: “Experts and Study Participants Misjudge AI Speedup”]

The developers believed their AI-assisted tasks had taken them 20% less time, when in reality AI had cost them 19% more time. So I’m happy the AI-using teachers feel like AI has shaved off six weeks of their work, but the METR study should make us question self-reported data of this sort.

These sample sizes are quite small.

[Image: Graph showing how frequently teachers use AI tools for various teaching tasks]

The most common response to the question, “How often do you use AI tools to do this teaching task?” was “never” for every teaching task surveyed. Only 4% of the 2,232 surveyed teachers described using AI weekly to “analyze patterns in students’ learning,” for example. That amounts to 90 teachers whose responses were averaged across other categories and then “multiplied by the number of contracted weeks per year (37.4, on average)” to get the time savings expressed as weeks. “Margins of error for subgroups are higher,” says the methods section, which I believe. Unfortunately, those margins were then multiplied by 37.4.
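To make the amplification concrete, here is a rough sketch of how a subgroup margin of error scales when annualized. The standard deviation is my assumption, since the report doesn’t publish subgroup spreads; only the scaling logic matters:

```python
import math

# The report says 4% of 2,232 teachers use AI weekly for this task,
# i.e., roughly 90 respondents behind that subgroup estimate.
n_sub = 90

# Hypothetical spread of self-reported weekly time savings, in hours.
# Gallup doesn't publish subgroup standard deviations, so this value
# is purely illustrative.
assumed_sd_hours = 4.0

# Approximate 95% margin of error on the subgroup's mean (hours/week)
moe_weekly = 1.96 * assumed_sd_hours / math.sqrt(n_sub)
print(round(moe_weekly, 2))  # 0.83

# Annualizing the estimate multiplies the uncertainty along with it
moe_annual = moe_weekly * 37.4
print(round(moe_annual, 1))  # 30.9
```

Under that made-up spread, the annualized figure carries an uncertainty of roughly ±31 hours — most of a work week — before any self-report bias is even considered.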

Okay also: saving time is not per se good!

Students seem to be saving tons of time these days using AI. Yet plenty of adults are more worried than excited about those savings, worried that students are saving time not doing work that they should do—not writing their early drafts of papers, not struggling to remember which solution method is most appropriate, not committing knowledge to memory.

We should worry similarly about teaching. Are we happy or sad that teachers are outsourcing their tasks to AI? Which tasks? It’s true that teachers can save time giving feedback on student writing by asking an AI to do it. If saving time were our only objective, we could save even more time by simply not assigning essays at all. Clearly, the more stuff AI can do, the more we need to wrestle with the question, “What stuff should AI do? Are there costs besides time that we should consider?”

Thanks for reading! I write a new email about teaching, technology, and math on special Wednesdays. Throw your email in the box. 👇

Featured Comments

Michelle Kerr on last week’s teaching dilemma:

The tricky part isn’t explaining [whether the answer is “6-n” or “n-6”]. It’s getting them to stop and think about it before they answer. This is also the case with whether the slope is positive or negative or whether (x+3) has a zero at 3 or -3. It’s always about getting them to stop and take a beat before they answer instead of just mentally flipping a coin. The hard part is reminding them that they have the knowledge already if they just take a second to stop and tap into it.

Ryan Muller believes the MOOC comparison doesn’t account for AI’s vastly different technological power:

The scaling that has unrelentingly continued to extend its capability on ambiguous and long running cognitive tasks hits a brick wall against... teaching?

Just so I’m not misunderstood: yep, that’s exactly what I think. Humans have—to date!—needed other humans to help them do hard things they’d otherwise rather not do. So you can file teachers alongside therapists and gym trainers in professions where the impact of AI is going to be limited compared to the rest of the economy.

Upcoming Presentation

  • March 13. This month I’ll be giving a keynote address at California’s annual math conference in Bakersfield, CA. We’ll be talking about why and how we can restore creativity to math, a discipline frequently perceived as a creative wasteland.

Odds & Ends

¶ Congrats to Amplify’s math curriculum team for earning an all-green rating from EdReports for Amplify Desmos Math, a curriculum that is near and dear to my heart.

¶ You simply have to hand it to Pam Burdman sometimes. Perhaps you recall the thunderclouds above UC San Diego’s math results recently. The Wall Street Journal’s editorial board called it “A Math Horror Show,” attributing increased enrollment in remedial math to “increased admissions of students from ‘under-resourced high schools.’” (UC San Diego made the same attribution.) But Pam used the Wayback Machine and found another factor contributing to that increased enrollment, one which I haven’t seen reported anywhere else.

¶ Rick Hess interviewed me for Education Week. One Q&A:

RH: What’s the one big thing a school or system leader should know when it comes to AI and schooling?

DM: The introduction of generative AI has not changed the fundamentals of your work: getting absentee students back to school; making sure kids feel supported and known; communicating results and challenges with parents; creating a positive working environment for teachers; and keeping kids safe. For all of its power in the world outside of schools, generative AI has not transformed the reality of any of those challenges and may, in the case of student mental health, exacerbate them. The work is still the work.

¶ I love the lines John Warner quoted from How College Works.

The fundamental problem of higher education is no longer the availability of content, but rather the availability of motivation.

Human contact, especially face to face, seems to have an unusual influence on what students choose to do, on the directions their careers take, and on their experience of college. It has leverage, producing positive results far beyond the effort put into it.

If human contact is a lever, we can think of technology as one kind of fulcrum. We see different results for kids depending on where we put that fulcrum and what kind of force we apply to the lever.

¶ A Michael Pershan tweet: “Personalized learning has been the dream of edtech for the last idk 100 years, since at least programmed radio instruction. The mistake boosters keep making is thinking the problems are technical and solvable...the bigger issue is that fundamentally most people do not want this.”

Michael Pershan is correct: “most people do not want this.” Downthread he writes:

Most kids are not sitting around wishing that they could just hunker down with a tutor. They would rather be in class with their friends.

You can try to convince people to want something they do not want. You can yell into the wind. You can swim against the river. But the most prominent boosters of personalized learning do not seem to engage meaningfully with the nature of the wind or the river, with the reality that most people are not like them, that most people do not want this.

¶ Related: Ben Kornell ran an interesting webinar on the state of AI tutors. Notably, Khan Academy’s Kirsten DiCerbo said, “I will tell you, we see more ‘IDK IDK,’ more passive kinds of interaction than we would like.” That checks out. Andrea Pasinetti, CEO of Kira, offered a useful summary of the “micro-interactions that happen between students and tutors that can pull students out of the illusion of tutoring.”

Any latency. A question of two to three seconds delay between when a student asks a question and when a student gets a high-quality answer can cause distraction, it can cause a student to move on, it can cause a student to lose their own context in terms of what they’re learning and why they’re asking a particular question. An incorrect answer. A duplicative answer. An answer that maybe is not leveled properly in terms of reading level or concept difficulty or an answer that presumes knowledge that the student doesn’t have can all have a very negative impact on engagement and ultimately learning outcomes.

¶ I gave a talk at UC Davis last week and loved seeing the Better Call Saul-style advertisements for legal advice for (alleged but let’s be real) AI plagiarists on the bulletin boards.

[Image: A poster that says “Suspected of AI Plagiarism” and offers advisory services for those students]

When Technology Replaces Teaching


I was on the Teachers’ Aid podcast talking about mini whiteboards with some thoughtful folks. You can listen here.

I recently cut out student-facing technology from my teaching. Students do not use their Chromebooks at all in my class. I wrote about a bunch of the details a few weeks ago. Today I have a broader reflection on technology and teaching.

A Mental Model

Here’s a mental model implied by a lot of discourse I see about classroom technology.

The teacher makes decisions about what technology to use. The technology maybe helps students learn, maybe not, and maybe has some negative side effects on attention, etc. In this mental model, the focus is the arrow on the right. Education technology made all sorts of promises in the last ten years. Most haven’t panned out. Screens were supposed to help students learn but didn’t live up to that promise. I’ve written about some of that here, and also here.

In this model, the problems with technology are solvable. Maybe we need better education technology. Maybe we need to use it differently. That was my approach in my classroom before I went tech-free. We spent most of our time doing math with paper and pencil or whiteboard and marker. I used technology for a few specific purposes, and felt like I was getting some of the benefits without the drawbacks. I could fiddle with the technology I chose, gradually technology got better, and maybe eventually we’ll reach the promised land of tech-driven learning utopia.

Here’s a different mental model.

In this model, the technology has those same effects on students. But it also has an effect on the teacher. Technology isn’t neutral. When I have students pull out Chromebooks, technology changes my behavior.

When Technology Replaces Teaching

Never trust anything that can think for itself if you can’t see where it keeps its brain. - Arthur Weasley

Go talk to some regular students and regular parents about regular schools. You’ll often hear one theme: some of the teachers don’t teach.

Day after day, students show up to class, the teacher says, “open your Chromebooks and start lesson 3,” and the teacher watches the students work, or something along those lines.

Arthur Weasley warned us about this in Harry Potter and the Chamber of Secrets. He was talking about Ginny spilling all of her secrets to an enchanted diary, but it’s the same idea. When an object seems smart, humans tend to trust it to make decisions for us. We live in the era of “artificial intelligence.” We are constantly told that the singularity is near, that the computers are smart enough to take our jobs. So it makes perfect sense that teachers, when students open up those Chromebooks, kind of go on autopilot. The job shifts: rather than driving instruction, the teacher’s job is to assign the lesson through some software, and then to supervise the students and remind them to keep working.

I think teachers stop teaching for a few different reasons. It’s in part a product of some vaguely progressive ideas that are in the water. Teachers should be a guide on the side, not a sage on the stage. Students should be doing the cognitive heavy lifting. With that mindset, it’s easy for the teacher to step back and let the computers do the work. This also happens because we can’t see where the computer or app or whatever keeps its brain, and we assume these things are smarter than they are.

And to be clear, there have always been some teachers who don’t teach. Before classroom technology it happened with packets.

My hypothesis is that classroom technology encourages this type of teaching (or, more accurately, not teaching). I’m not immune to this. I could feel the pull when I had students get out their Chromebooks. Teaching is tiring. I get students started, I go to my desk, and I watch little icons move through the lesson or percentages tick upward as students work. It seems like learning is happening. Maybe I feel like I need to monitor those dashboards, or make sure students are on the right website. There’s this gravity pulling me to stare at my screen as students stare at theirs. That gravity is powerful. That’s the biggest thing I learned from my tech-free experiment. I am not immune to the pull. Technology is my gateway drug to not teaching.

I surveyed my students after the tech-free experiment. One question was about how technology helps students learn. A student’s response:

Because it’s AI and it’s smart.

That type of thinking is everywhere right now. We assume technology is smart. We assume we should defer to the decision-making of the machines. I reject that. I am keeping technology out of my classroom in part for practical reasons: attention fragmentation, logistical headaches, and more. But I am also keeping technology out because of a moral conviction I’ve arrived at: technology is not a neutral tool. Technology changes teacher behavior. I want that out of my classroom as much as possible.

To Summarize

Here’s my summary of the education technology landscape right now:

There are already some interesting use cases for AI to reduce teacher workload. I’m sure more are coming. That’s cool! On the student side, there has never been a better time to be a self-motivated learner. If you want to learn something and you can manage your own time and effort, the current technological resources are fantastic. One reason we invented school is that not all young people are self-motivated. Many of those young people benefit from a regular routine, going somewhere with a consistent schedule and a bunch of peers of more or less the same age who learn the same things. In that context, classroom technology is at best a supplement to the human-to-human interaction that drives learning. In the classroom context, technology also affects teacher decisions and can convince teachers they don’t need to teach, or just make it easy to sit back and do a bit less. I’m writing this from personal experience — I’ve been that teacher!

Look, I read all the same headlines everyone else does. We’re on the cusp of a world-changing transformation driven by AI. Maybe! But edtech is not there. Call me when it happens. I will look at all the new technology that comes out. I’ll take it seriously. In the meantime, looking at what I have access to today, it’s not for me.

1

One practical note: reminding students to keep working is an unfortunate but inevitable side effect of using student-facing technology in the classroom. It’s much easier for students to “hide” behind a screen and look like they’re busy without doing much of anything, or to find any number of ways to distract themselves from doing the academic work in front of them. That means a lot of reminders, and a lot of teacher energy consumed with reminding students and not thinking about student learning.




The one science reform we can all agree on, but we're too cowardly to do

photo cred: my dad

If you ever want a good laugh, ask an academic to explain what they get paid to do, and who pays them to do it.

In STEM fields, it works like this: the university pays you to teach, but unless you’re at a liberal arts college, you don’t actually get promoted or recognized for your teaching. Instead, you get promoted and recognized for your research, which the university does not generally pay you for. You have to ask someone else to provide that part of your salary, and in the US, that someone else is usually the federal government. If you’re lucky—and these days, very lucky—you get a chunk of money to grow your bacteria or smash your electrons together or whatever, you write up your results for publication, and this is where the monkey business really begins.

In most disciplines, the next step is sending your paper to a peer-reviewed journal, where it gets evaluated by an editor and (if the editor sees some promise in it) a few reviewers. These people are academics just like you, and they generally do not get paid for their time. Editors maybe get a small stipend and a bit of professional cred, while reviewers get nothing but the warm fuzzies of doing “service to the field”, or the cold thrill of tanking other people’s papers.

If you’re lucky again, your paper gets accepted by the journal, which now owns the copyright to your work. They do not pay you for this! If anything, you pay them an “article processing charge” for the privilege of no longer owning the rights to your paper. This is considered a great honor.

The journals then paywall your work, sell the access back to you and your colleagues, and pocket the profit. Universities cover these subscriptions and fees by charging the government “indirect costs” on every grant—money that doesn’t go to the research itself, but to all the things that support the research, like keeping the lights on, cleaning the toilets, and accessing the journals that the researchers need to read.

Nothing about this system makes sense, which is why I think we should build a new one. In the meantime, though, we should also fix the old one. But that’s hard, for two reasons. First, many people are invested in things working exactly the way they do now, so every stupid idea has a constituency behind it. Second, our current administration seems to believe in policy by bloodletting: if something isn’t working, just slice it open at random. Thanks to these haphazard cuts and cancellations, we now have a system that is both dysfunctional and anemic.

I see a way to solve both problems at once. We can satisfy both the scientists and the scalpel-wielding politicians by ridding ourselves of the one constituency that should not exist. Of all the crazy parts of our crazy system, the craziest part is where taxpayers pay for the research, then pay private companies to publish it, and then pay again so scientists can read it. We may not agree on much, but we can all agree on this: it is time, finally and forever, to get rid of for-profit scientific publishers.

MOMMY, WHERE DO SCAMS COME FROM?

The writer G.K. Chesterton once said that before you knock anything down, you ought to know how it got there in the first place. So before we show for-profit publishers the pointy end of a pitchfork, we ought to know where they came from and why they persist.

It used to be a huge pain to produce a physical journal—someone had to operate the printing presses, lick the stamps, and mail the copies all over the world. Unsurprisingly, academics didn’t care much about doing those things. When government money started flowing into universities post-World War II and the number of articles exploded, private companies were like, “Hey, why don’t we take these journals off your hands—you keep doing the scientific stuff and we’ll handle all the boring stuff.” And the academics were like “Sounds good, we’re sure this won’t have any unforeseen consequences.”

Those companies knew they had a captive audience, so they bought up as many journals as they could. Journal articles aren’t interchangeable commodities like corn or soybeans—if your science supplier starts gouging you, you can’t just switch to a new one. Adding to this lock-in effect, publishing in “high-impact” journals became the key to success in science, which meant if you wanted to move up, your university had to pay up. So, even as the internet made it much cheaper to produce a journal, publishers made it much more expensive to subscribe to one.

Robert Maxwell, one of the architects of the for-profit scientific publishing scheme. When he later went into debt, he plundered hundreds of millions of pounds from his employees’ pension funds. You may be familiar with his daughter and lieutenant Ghislaine Maxwell, who went on to have a successful career in child trafficking. (source)

The people running this scam had no illusions about it, even if they hoped that other people did. Here’s how one CEO described it:

You have no idea how profitable these journals are once you stop doing anything. When you’re building a journal, you spend time getting good editorial boards, you treat them well, you give them dinners. [...] [and then] we stop doing all that stuff and then the cash just pours out and you wouldn’t believe how wonderful it is.

So here’s the report we can make to Mr. Chesterton: for-profit scientific publishers arose to solve the problem of producing physical journals. The internet mostly solved that problem. Now the publishers are the problem. These days, Springer Nature, Elsevier, Wiley, and the like are basically giant operations that proofread, format, and store PDFs. That’s not nothing, but it’s pretty close to nothing.

No one knows how much publishers make in return for providing these modest services, but we can guess. In 2017, the Association of Research Libraries surveyed its 123 member institutions and found they were paying a collective $1 billion in journal subscriptions every year. The ARL covers some of the biggest universities, but not nearly all of them, so let’s guess that number accounts for half of all university subscription spending. In 2023, the federal government estimated it paid nearly $380 million in article processing charges alone, and those are separate from subscriptions. So it wouldn’t be crazy if American universities were paying something like $2.5 billion to publishers every year, with the majority of that ultimately coming from taxpayers.
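The back-of-envelope arithmetic behind that guess can be sketched directly. This is a rough estimate only: the assumption that ARL members account for half of all university subscription spending is the author's guess, and the constants below simply restate the figures cited in the text.

```python
# Back-of-envelope estimate of annual US spending on scientific publishers,
# using the figures cited above. All numbers are rough guesses, not data.

ARL_SUBSCRIPTIONS = 1.0e9    # ARL members' collective journal subscriptions (2017 survey)
ARL_SHARE_OF_TOTAL = 0.5     # assume ARL members are ~half of all university spending
FEDERAL_APCS = 0.38e9        # federal article processing charges (2023 estimate)

# Scale the ARL figure up to all universities, then add APCs on top,
# since those charges are separate from subscriptions.
total_subscriptions = ARL_SUBSCRIPTIONS / ARL_SHARE_OF_TOTAL   # ~$2.0 billion
total = total_subscriptions + FEDERAL_APCS                     # ~$2.38 billion

print(f"Estimated annual spending: ${total / 1e9:.2f} billion")
```

Rounded generously, that lands in the neighborhood of the "something like $2.5 billion" figure above.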

(By the way, the estimated profit margins for commercial scientific publishers are around 40%, which is higher than Microsoft.)

To put those costs in perspective: if the federal government cut out the publishers, it would probably save more money every year than it has “saved” in its recent attempts to cut off scientific funding to universities. It’s unclear how much money will ultimately be clawed back, as grants continue to get frozen, unfrozen, litigated, and negotiated. But right now, it seems like ~$1.4 billion in promised science funding is simply not going to be paid out. We could save more than that every year if we just stopped writing checks to John Wiley & Sons.

PUNK ROCK SCIENCE

How can such a scam continue to exist? In large part, it’s because of a computer hacker from Kazakhstan.

The political scientist James C. Scott once wrote that many systems only “work” because people disobey them. For instance, the Soviet Union attempted to impose agricultural regulations so strict that people would have starved if they followed the letter of the law. Instead, citizens grew and traded food in secret. This made it look like the regulations were successful, when in fact they were a sham.1

Something similar is happening right now in science, except Russia is on the opposite side of the story this time. In the early 2010s, a Kazakhstani computer programmer named Alexandra Elbakyan started downloading articles en masse and posting them publicly on a website called SciHub. The publishers sued her, so she’s hiding out in Russia, which protects her from extradition. As you can see in the map below, millions of people now use SciHub to access scientific articles, including lots of people who seem to work at universities:

This data is ten years old, so I would expect these numbers to be higher today. (source)

Why would researchers resort to piracy when they have legitimate access themselves? Maybe because journals’ interfaces are so clunky and annoying that it’s faster to go straight to SciHub. Or maybe it’s because those researchers don’t actually have access. Universities are always trying to save money by canceling journal subscriptions, so academics often have to rely on bootleg copies. Either way, SciHub seems to be our modern-day version of those Soviet secret gardens: for-profit publishing only “works” because people find ways to circumvent it.

Alexandra Elbakyan, “Pirate Queen of Science” (source)

In a punk rock kind of way, it’s kinda cool that so many American scientists can only do their work thanks to a database maintained by a Russia-backed fugitive. But it ought to be a huge embarrassment to the US government.2

Instead, for some reason, the government insists on siding with publishers against citizens. Sixteen years ago, the US had its own Elbakyan. His name was Aaron Swartz. He downloaded millions of paywalled journal articles using a connection at MIT, possibly intending to share them publicly. Government agents arrested him, charged him with wire fraud, and intended to fine him $1 million and imprison him for 35 years. Instead, he killed himself. He was 26.

Swartz in 2011, two years before his death (source)

THE FOREST FIRE IS OVERDUE

Scientists have tried to take on the middlemen themselves. They’ve founded open-access journals. They’ve published preprints. They’ve tried alternative ways of evaluating research. A few high-profile professors have publicly and dramatically sworn off all “luxury” outlets, and less-famous folks have followed suit: in 2012, over 10,000 researchers signed a pledge not to publish in any journals owned by Elsevier.

None of this has worked. The biggest for-profit publishers continue making more money year after year. “Diamond” open access journals—that is, publications that don’t charge authors or readers—only account for ~10% of all articles.3 Four years after that massive pledge, 38% of signers had broken their promise and published in an Elsevier journal.4

These efforts have fizzled because this isn’t a problem that can be solved by any individual, or even many individuals. Academia is so cutthroat that anyone who righteously gives up an advantage will be outcompeted by someone who has fewer scruples. What we have here is a collective action problem.

Fortunately, we have an organization that exists for the express purpose of solving collective action problems. It’s called the government. And as luck would have it, they’re also the one paying most of the bills!

So the solution here is straightforward: every government grant should stipulate that the research it supports can’t be published in a for-profit journal. That’s it! If the public paid for it, it shouldn’t be paywalled.

The Biden administration tried to do this, but they did it in a stupid way. They mandated that NIH-funded research papers have to be “open access”, which sounds like a solution, but it’s actually a psyop. By replacing subscription fees with “article processing charges”, publishers can simply make authors pay for writing instead of making readers pay for reading. The companies can keep skimming money off the system, and best of all, they get to call the result “open access”.

These fees can be wild. When my PhD advisor and I published one of our papers together, the journal charged us an “open access” fee of $12,000. This arrangement is a tiny bit better than the alternative, because at least everybody can read our paper now, including people who aren’t affiliated with a university. But those fees still have to come from somewhere, and whether you charge writers or readers, you’re ultimately charging the same account—namely, the US government.5

The Trump administration somehow found a way to make a stupid policy even stupider. They sped up the timeline while also firing a bunch of NIH staffers—exactly the people who would make sure that government-sponsored publications are, in fact, publicly accessible. And you need someone to check on that, because researchers are notoriously bad about this kind of stuff. They’re already required to upload the results of clinical trials to a public database, but more than half the time they just...don’t.

To do this right, you cannot allow the rent-seekers to rebrand. You have to cut them out entirely. I don’t think this will fix everything that’s wrong with science; it will merely fix the wrongest thing. Nonprofit journals still charge fees, but at least the money goes to organizations that ostensibly care about science, rather than going to CEOs who make $17 million a year. And almost every journal, for-profit or not, uses the same failed system of peer review. The biggest benefit of shaking things up, then, would be allowing different approaches to have a chance at life, the same way an occasional forest fire clears away the dead wood, opens up the pinecones, and gives seedlings a shot at the sunlight.

Science philanthropies should adopt the same policy, and some of them already have. The Navigation Fund, which oversees billions of dollars in scientific funding, no longer bankrolls journal publications at all. Its director reports that the experiment has been a great success:

Our researchers began designing experiments differently from the start. They became more creative and collaborative. The goal shifted from telling polished stories to uncovering useful truths. All results had value, such as failed attempts, abandoned inquiries, or untested ideas, which we frequently release through Arcadia’s Icebox. The bar for utility went up, as proxies like impact factors disappeared.

Sounds good to me!

CATCH THE TIGER

Fifteen years ago, the open science movement was all about abolishing for-profit journals—that’s what open science meant. It seemed like every speech would end with “ELSEVIER DELENDA EST”.

Now people barely bring it up at all.6 It’s like a tiger has escaped the zoo and it’s gulping down schoolchildren, but when people suggest zoo improvements, all the agenda items are like, “We should add another Dippin’ Dots kiosk”. If you bring up the loose tiger, everyone gets annoyed at you, like “Of course, no one likes the tiger”.

I think two things happened. First, we got cynical about cyberspace. In the 1990s and 2000s, we really thought the internet would solve most of our problems. When those problems persisted despite all of us getting broadband, we shifted to thinking that the internet was, in fact, causing the problems. And so it became cringe to think the internet could ever be a force for good. In 1995, for-profit publishers were going to be “the internet’s first victim”; in 2015, they were “the business the internet could not kill”.

Second, when the replication crisis hit in the early 2010s, the open science movement got a new villain—namely, naughty researchers. The fakers, the fraudsters, the over-claimers: those are the real bad boys of science. It’s no longer cool to hate international publishing conglomerates. Now it’s cool to hate your colleagues.

Both of these shifts were a shame. The internet utopians were right that the web would eliminate the need for journals, but they were wrong to think that would be enough. The replication police were right to call out scientific malfeasance, but they were wrong to forget our old foes. The for-profit publishers are just as bad as they ever were, and while the internet has made them more vulnerable than ever, now we know they won’t go unless they’re pushed.

If we want better science, we should catch the tiger. Not only because it’s bad for the tiger to be loose, but because it’s bad for us to look the other way. If you allow an outrageous scam to go unchecked, if you participate in it, normalize it—then what won’t you do? Why not also goose your stats a bit? Why not publish some junk research? Look around: no one cares!

There are so many problems with our current way of doing things, and most of those problems are complicated and difficult to solve. This one isn’t. Let’s heave this succubus off our scientific system and end this scam once and for all. After that, Dippin’ Dots all around.

Experimental History opposes the tiger and supports ice cream, in that order

1

Seeing Like a State, 203-204, 310

2

For anyone who is all-in on “America First”: may I also mention that three of the largest publishers—Springer Nature, Elsevier, and Taylor and Francis—are all British-owned. A curious choice of companies to subsidize!

3

Don’t get me started on this “diamond open access” designation. If it costs money to publish or to read, it’s not open access, period. “Oh, you’d like your car to come with a steering wheel and brakes? You’ll need our ‘diamond’ package.”

4

I assume this number is much higher now. At the time, Elsevier controlled 16% of the market, so most people could continue publishing in their usual journals without breaking their pledge. I started graduate school in 2016, and I never heard anyone mention avoiding Elsevier journals at all.

5

The NIH has announced vague plans to cap these charges, which is kind of like saying, “I’ll let you scam me, but just don’t go crazy about it.”

6

For example, the current strategic plan of the Center for Open Science doesn’t mention for-profit journals at all.


Is AI Already Killing People by Accident?


This post first appeared on Marcus on AI, and is reposted with the permission of the author.

The writer Tyler Austin Harper (of The Atlantic, etc.) sent me a thread this morning, asking whether a mistargeting yesterday (February 28) that killed nearly 150 school children in Iran could have been the result of AI.

I can give only two intellectually honest answers:

The first is: I have no idea what happened yesterday, and probably I will never know. Secretary of Defense Pete Hegseth has made a very heavy bet on AI in the military, and it’s doubtful that he will be entirely forthcoming about this or other incidents to come. Targeting errors aren’t new. We may get little detail about AI’s role or non-role in incidents in the future.

Then again, as Harper notes, maybe this particular one is not a coincidence.

The second is: We can certainly expect incidents of this type, and more of them. Generative AI continues to have serious problems with reasoning and with visual cognition, as a series of studies by Anh Totti Nguyen, for example, has shown.

And of course generative AI still has problems with common sense, as an endless stream of examples has shown, including one circulating on X today.

Meanwhile, unless and until the military does actual empirical studies on collateral damage, we won’t really know whether AI is helping or hurting. Mistargeting isn’t new, but using unreliable AI on vibes is fraught with peril.

(More broadly, the military use of AI needs to be a granular question. We might find, for example, that AI helps with logistics and planning but makes more errors in targeting. Mileage may vary according to the task, and may be worse in unfamiliar situations, given the inherent tendencies of generative AI.)

There is a second problem, a moral problem, that goes beyond the technical. The technical problem is that current AI simply isn’t reliable; mistakes will absolutely be made. Some will cost lives, some will cost many lives. Some may lead to further escalation (a mass killing of school children could well do that); in the worst case, a series of escalations triggered by AI-triggered mistakes could lead to a nuclear war. Given the current status in the Middle East, this concern is not merely academic.

The moral problem is that militaries may well wish to use AI to cloak moral responsibility. One can, for example, use an AI tool to select targets and blame the AI. It is important to realize that real choices are made at the front end by those who use the AI. How many civilian casualties are acceptable? What error rate is permissible? AI can follow a set of criteria (with more or less precision depending on the quality of the algorithms and data), but humans set those criteria. In my own view, the biggest problem with the algorithms targeting Gaza was not necessarily the algorithms per se (about which not much may be public) but the decision to tolerate a large number of civilian casualties as part of the targeting.

By analogy, if one rolled dice (a physical instantiation of a very simple algorithm) to pick targets, one would not blame the dice for the deaths, but those who chose to leave life or death to chance in the first place.

We should absolutely want any AI that is used for war to be as precise and reliable as possible, minimizing casualties, but we should also never forget that those who use them are responsible for decisions about how many casualties are acceptable. And it should be incumbent on them to understand the limitations and inaccuracies of the algorithms they choose.

Whether or not current AI algorithms are precise (probably they aren’t), and whether or not humans are involved in the specific selection of targets, those who use military algorithms bear responsibility for the outcomes they produce.

The race to shove AI into everything is grossly premature, because the tech fundamentally lacks reliability.

Meanwhile, the chance that we will get straight answers is probably close to zero.

Altman, for his part, doesn’t seem to care, having signed a contract entirely full of holes, despite making noises about red lines.

I don’t want to say “this sucks,” but this situation really and truly sucks. Many people, perhaps thousands, maybe more, will die, needlessly.

Gary Marcus

Gary Marcus (@garymarcus), scientist, best-selling author, and entrepreneur, is deeply concerned about current AI, but really hoping we might do better. He spoke to the U.S. Senate on May 16, 2023 and is the co-author of the award-winning book Rebooting AI, as well as host of the new podcast Humans versus Machines.


The Solution to the Male Loneliness Epidemic Is for Men to Bust Science Myths with Each Other


Men, guys, dudes, rejoice! After much research and testing, we have found the cure to the cursed male loneliness epidemic that is sweeping our country and our op-ed sections. We know you feel isolated. We know you can’t talk about your emotions. We know you’re looking for male role models in all the wrong YouTube algorithms. But fear not. We have found the solution to all your problems: doing outlandish science projects to prove or disprove commonplace myths.

Men these days are reverting to masculine ideals from yesteryear. They think real men have to be strong, tough, and misogynistic. Listen, boys, you don’t need big muscles, you don’t need creatine powder, and you certainly don’t need to get surgery to gain an extra few inches of height because you’d rather have metal implants in your legs than be 5′4″. All you really need is a curious mind, a pure heart, and military-level access to high-powered explosives. And also a seemingly endless supply of crash-test dummies.

Where do men typically make friends? The gym. School. Work. These places can be great for building connections, but they can also reinforce harmful ideas about masculinity (just think of how much is shrugged off as “locker-room talk”). Doing bizarre and often comical science experiments with your friends is a way to avoid that toxic environment, and instead introduce men to a different kind of toxic environment, where, for example, they measure how long it would take a balloon filled with poison to spread its noxious air.

“But,” we hear you ask, “how do I know if any of my dude friends will even want to solve science mysteries with me?” Good news, they all do. We did multiple studies where we gathered men together and asked them things like, “So, how many times do you think you can fold a sheet of paper?” or “Do you think you’d be able to find a needle in a haystack?” and every time, each of the men wanted to try to do the thing immediately. Seriously. We had to pull some of the men away from the haystacks because they got so into it. Men are absolutely itching to solve little puzzles and then tell everyone about how they solved a little puzzle.

We hear your concerns about the manosphere. We hear your concerns about protein-powder consumption. We hear your concerns about men spending time doing mouth exercises to improve their jawlines. We cannot hear anything else because we have noise-canceling headphones on our ears while we try to see if we can light a match with a gun. This is one of the many experiments that male friends can do together instead of watching Andrew Tate videos or posting derogatory things on Sydney Sweeney’s Instagram stories.

Men, we know you feel lost. The world is full of unknowns, like Who am I? What is my purpose? How many Mentos would I need to drop into a bottle of Coke to bust open a door? Doing weird science experiments with your friends can answer at least one, if not more of these questions. And even when we finally get to the bottom of every science quandary there is, hope is still not lost. There’s always Jackass.
