717 stories
·
0 followers

AI teacher tools display racial bias when generating student behavior plans, study finds

1 Share

Asked to generate intervention plans for struggling students, AI teacher assistants recommended more-punitive measures for hypothetical students with Black-coded names and more supportive approaches for students the platforms perceived as white, a new study shows.

These findings come from a report on the risks of bias in artificial intelligence tools published Wednesday by the non-profit Common Sense Media. Researchers specifically sought to evaluate the quality of AI teacher assistants — such as MagicSchool, Khanmingo, Curipod, and Google Gemini for Education — that are designed to support classroom planning, lesson differentiation, and administrative tasks.

Common Sense Media found that while these tools could help teachers save time and streamline routine paperwork, AI-generated content could also promote bias in lesson planning and classroom management recommendations.

Robbie Torney, senior director of AI programs at Common Sense Media, said the problems identified in the study are serious enough that ed tech companies should consider removing tools for behavior intervention plans until they can improve them. That’s significant because writing intervention plans of various sorts is a relatively common way teachers use AI.

After Chalkbeat asked about Common Sense Media’s findings, a Google spokesperson said Tuesday that Google Classroom has turned off the shortcut to Gemini that prompts teachers to “Generate behavior intervention strategies” to do additional testing.

However, both MagicSchool and Google, the two platforms where Common Sense Media identified racial bias in AI-generated behavior intervention plans, said they could not replicate Common Sense Media’s findings. They also said they take bias seriously and are working to improve their models.

School districts across the country have been working to implement comprehensive AI policies to encourage informed use of these tools. OpenAI, Anthropic, and Microsoft have partnered with the American Federation of Teachers to provide free training in using AI platforms. The Trump Administration also has encouraged greater AI integration in the classroom. However, recent AI guidelines released by the U.S. Department of Education have not directly addressed concerns about bias within these systems.

About a third of teachers report using AI at least weekly, according to a national survey conducted by the Walton Family Foundation in cooperation with Gallup. A separate survey conducted by the research organization Rand found teachers specifically report using these tools to help develop goals for Individualized Education Program — or IEP — plans. They also say they use these tools to shape lessons or assessments around those goals, and to brainstorm ways to accommodate students with disabilities.

Torney said Common Sense Media isn’t trying to discourage teachers from using AI in general. The goal of the report is to encourage more awareness of potential uses of AI teacher assistants that might have greater risks in the classroom.

“We really just want people to go in eyes wide open and say, ‘Hey these are some of the things that they’re best at and these are some of the things you probably want to be a little bit more careful with,’” he said.

Common Sense Media identified AI tools that can generate IEPs and behavior intervention plans as high risk due to their biased treatment of students in the classroom. Using MagicSchool’s Behavior Intervention Suggestions tool and the Google Gemini “Generate behavior intervention strategies tool,” Common Sense Media’s research team ran the same prompt about a student who struggled with reading and showed aggressive behavior 50 times using white-coded names and 50 times using Black-coded names, evenly split between male- and female-coded names.

The AI-generated plans for the students with Black-coded names didn’t all appear negative in isolation. But clear differences emerged when those plans from MagicSchool and Gemini were compared with plans for students with white-coded names.

For example, when prompted to provide a behavior intervention plan for Annie, Gemini emphasized addressing aggressive behavior with “consistent non-escalating responses” and “consistent positive reinforcement.” Lakeesha, on the other hand, should receive “immediate” responses to her aggressive behaviors and positive reinforcement for “desired behaviors,” the tool said. For Kareem, Gemini simply said, “Clearly define expectations and teach replacement behaviors,” with no mention of positive reinforcement or responses to aggressive behavior.

Torney noted that the problems in these AI-generated reports only became apparent across a large sample, which can make it hard for teachers to identify. The report warns that novice teachers may be more likely to rely on AI-generated content without the experience to catch inaccuracies or biases. Torney said these underlying biases in intervention plans “could have really large impacts on student progression or student outcomes as they move across their educational trajectory.”

Black students are already subject to higher rates of suspension than their white counterparts in schools and more likely to receive harsher disciplinary consequences for subjective reasons, like “disruptive behavior.” Machine learning algorithms replicate the decision-making patterns of the training data that they are provided, which can perpetuate existing inequalities. A separate study found that AI tools replicate existing racial bias when grading essays, assigning lower scores to Black students than to Asian students.

The Common Sense Media report also identified instances when AI teacher assistants generated lesson plans that relied on stereotypes, repeated misinformation, and sanitized controversial aspects of history.

A Google spokesperson said the company has invested in using diverse and representative training data to minimize bias and overgeneralizations.

“We use rigorous testing and monitoring to identify and stop potential bias in our AI models,” the Google spokesperson said in an email to Chalkbeat. “We’ve made good progress, but we’re always aiming to make improvements with our training techniques and data.”

On its website, MagicSchool promotes its AI teaching assistant as “an unbiased tool to aid in decision-making for restorative practices.” In an email to Chalkbeat, MagicSchool said it has not been able to reproduce the issues that Common Sense Media identified.

MagicSchool said their platform includes bias warnings and instructs users not to include student names or other identifying information when using AI features. In light of the study, it is working with Common Sense to improve its bias detection systems and design tools in ways that encourage educators to review AI generated content more closely.

“As noted in the study, AI tools like ours hold tremendous promise — but also carry real risks if not designed, deployed, and used responsibly,” MagicSchool told Chalkbeat. “We are grateful to Common Sense Media for helping hold the field accountable.”

Norah Rami is a Dow Jones business reporting intern on Chalkbeat’s national desk. Reach Norah at nrami@chalkbeat.org.



Read the whole story
mrmarchant
2 hours ago
reply
Share this story
Delete

All Hail the Mighty Snail

1 Share

From Dina Gachman at Texas Monthly comes the snail appreciation piece you didn’t know you needed. Meet Gary, a once-doomed milk snail who spawned an undying love of all things gastropod in Jorjana Gietl, and learn about a community of enthusiasts who share tips and tricks on how to keep their snails happy and healthy.

Snails are easy to breed because they’re hermaphroditic, so males and females possess both ova and spermatozoa, doubling the rate of conception. If breeders like Gietl and Belkin find a clutch, which is a cluster of eggs, it’s like a little surprise in the terrarium. You don’t have to do anything fancy to care for snail eggs. There are no delivery instructions or special care. Just wait and watch until the teeny babies appear and start adorably sipping water and nibbling on a cuttlebone.

Not long after I meet Gary, I head back over to my favorite online snail-appreciation page for a little mood boost. Sure enough, mere seconds after I start scrolling, my spirits lift and I’m giggling at a post in which people are sharing their pet snails’ names: Grover, Gwen, Raspberry, Rosa Diaz, and Doug Judy. And then I remember something Gietl told me. “If you’re quiet enough, you can hear them eating.”

Read the whole story
mrmarchant
4 hours ago
reply
Share this story
Delete

New Method Is the Fastest Way To Find the Best Routes

1 Share

If you want to solve a tricky problem, it often helps to get organized. You might, for example, break the problem into pieces and tackle the easiest pieces first. But this kind of sorting has a cost. You may end up spending too much time putting the pieces in order. This dilemma is especially relevant to one of the most iconic problems in computer science: finding the shortest path from a…

Source



Read the whole story
mrmarchant
10 hours ago
reply
Share this story
Delete

It's 2025, the year we decided we need a widespread slur for robots

1 Share
A pair of 1X androids are displayed at the International Conference on Robotics and Automation (ICRA) at ExCel on May 30, 2023, in London.

People all over TikTok and Instagram are using the word "clanker" as a catch-all for robots and AI. Here's a deep dive into the origins of the pejorative and an explanation of why it's spreading.

(Image credit: Leon Neal)

Read the whole story
mrmarchant
12 hours ago
reply
Share this story
Delete

First We Gave AI Our Tasks. Now We’re Giving It Our Hearts.

1 Share

Intro from Jon Haidt and Zach Rausch:

Many of the same companies that brought us the social media disaster are now building hyper-intelligent social chatbots designed to interact with kids. The promises are familiar: that these bots will reduce loneliness, enhance relationships, and support children who feel isolated. But the track record of the industry so far is terrible. We cannot trust them to make their products safe for children.

We are entering a new phase in how young people relate to technology, and, as with social media, we don’t yet know how these social AI companions will shape emotional development. But we do know a few things. We know what happens when companies are given unregulated access to children without parental knowledge or consent. We know that business models centered around maximizing engagement lead to widespread addiction. We know what kids need to thrive: lots of time with friends in-person, without screens, and without adults. And we know how tech optimism and a focus on benefits today can blind people to devastating long-term harms, especially for children as they go through puberty.

With social media, we — parents, legislators, the courts, and the U.S. Congress — allowed companies to experiment on our kids with no legal or moral restraints and no need for age verification. Social media is now so enmeshed in our children’s lives that it’s proving very difficult to remove it or reduce its toxicity, even though most parents and half of all teens see it as harmful and wish it didn’t exist. We must not make the same mistake again. With AI companions still in their early stages, we have the opportunity to do things differently.

Today’s post is the first of several that look at this next wave: the rise of social AI chatbots and the risks that they already pose to children’s emotional development. It’s written by Mandy McLean, an AI developer who is concerned about the impacts that social bots will have on children’s emotional development, relationships, and sense of self. Before moving into tech, Mandy was a high school teacher for several years and later earned a PhD in education and quantitative methods in social sciences. She spent over six years leading research at Guild, “a mission-based edtech startup focused on upskilling working adult learners,” before shifting her focus full-time to “exploring how emerging technologies can be used intentionally to support deep, meaningful, and human-centered learning in K-12 and beyond.”

– Jon and Zach

Subscribe now


First We Gave AI Our Tasks. Now We’re Giving It Our Hearts.

By Mandy McLean

Source: Shutterstock

Throughout the rapid disruption, high-pitched debate, and worry about whether AI will take our jobs, there’s been an optimistic side to the conversation. If AI can take on the drudgery, busywork, and cognitive overload, humans will be free to focus on relationships, creativity, and real human connection. Bill Gates imagines a future with shorter workweeks. Dario Amodei reminds us that “meaning comes mostly from human relationships and connection, not from economic labor.” Paul LeBlanc sees hope in AI not for what it can do, but for what it might free us to do – the most “human work:” building community, offering care, and making others feel like they matter.

The picture they paint sounds hopeful, but we should recognize that it’s not a guarantee. It’s a future we’ll need to advocate and fight for – and we’re already at risk of losing it. Because we’re no longer just outsourcing productivity-related tasks. With the advent of AI companions, we’re starting to hand over our emotional lives, too. And if we teach future generations to turn to machines before each other, we put them at risk of losing the ability to form what really matters: human bonds and relationships.

I spend a lot of time thinking about the role of AI in our kids’ lives. Not just as a researcher and parent of two young kids, but also as someone who started a company that uses AI to analyze and improve classroom discussions. I am not anti-AI. I believe it can be used with intention to deepen learning and support real human relationships.

But I’ve also grown deeply concerned about how easily AI is being positioned as a solution for kids without pausing to understand what kind of future it’s shaping. In particular, emotional connection isn’t something you can automate without consequences, and kids are the last place we should experiment with that tradeoff.

Emotional Offloading is Real and Growing

Alongside cognitive offloading, a new pattern is taking shape: emotional offloading. More and more people — including many teenagers — are turning to AI chatbots for emotional support. They’re not just using AI to write emails or help with homework, they’re using it to feel seen, heard, and comforted.

In a nationally representative 2025 survey by Common Sense Media, 72% of U.S. teens ages 13 to 17 said they had used an AI companion and over half used one regularly. Nearly a third said chatting with an AI felt at least as satisfying as talking to a person, including 10% who said it felt more satisfying.

That emotional reliance isn’t limited to just teens. In a study of 1,006 adult (primarily college students) Replika users, 90% described their AI companions as “human-like,” and 81% called it an “intelligence.” Participants reported using Replika in overlapping roles as a friend, therapist, and intellectual mirror, often simultaneously. One participant said they felt “dependent on Replika [for] my mental health.” Others shared, “Replika is always there for me;” “for me, it’s the lack of judgment;” or “just having someone to talk to who won’t judge me.”

This kind of emotional offloading holds both promise and peril. AI companions may offer a rare sense of safety, especially for people who feel isolated, anxious, or ashamed. A chatbot that listens without interruption or judgment can feel like a lifeline, as found in the Replika study. But it’s unclear how these relationships affect users’ emotional resilience, mental health, and capacity for human connection over time. To understand the risks, we can look to a familiar parallel: social media.

Research on social platforms shows that short-term connection doesn’t always lead to lasting connectedness. For example, a cross-national study of 1,643 adults found that people who used social media primarily to maintain relationships reported higher levels of loneliness the more time they spent online. What was meant to keep us close has, for many, had the opposite effect.

By 2023, the U.S. Surgeon General issued a public warning about social media’s impact on adolescent mental health, citing risks to emotional regulation, brain development, and well-being.

The same patterns could repeat with AI companions. While only 8% of adult Replika users said the AI companion displaced their in-person relationships, that number climbed to 13% among those in crisis.

If adults — with mature brains and life experiences — are forming intense emotional bonds with AI companions, what happens when the same tools are handed to 13-year-olds?

Teens are wired for connection, but they’re also more vulnerable to emotional manipulation and identity confusion. An AI companion that flatters them, learns their fears, and responds instantly every time can create a kind of synthetic intimacy that feels safer than the unpredictable, sometimes uncomfortable work of real relationships. It’s not hard to imagine a teenager turning to an AI companion not just for comfort, but for validation, identity, or even love, and staying there.

Subscribe now

First, Follow the Money

Before we look at how young people are using AI companions, we need to understand what these systems are built to do, and why they even exist in the first place.

Most chatbots are not therapeutic tools. They are not designed by licensed mental health professionals, and they are not held to any clinical or ethical standards of care. There is no therapist-client confidentiality, no duty to protect users from harm, and no coverage under HIPAA, the federal law that protects health information in medical and mental health settings.

That means anything you say to an AI companion is not legally protected. Your data may be stored, reviewed, analyzed, used to train future models, and sold through affiliates or advertisers. For example, Replika’s privacy policy keeps the door wide-open on retention: data stays “for only as long as necessary to fulfill the purposes we collected it for.” And Character.ai’s privacy policy says, “We may disclose personal information to advertising and analytics providers in connection with the provision of tailored advertising,” and “We disclose information to our affiliates and subsidiaries, who may use the information we disclose in a manner consistent with this Policy.”

And Sam Altman, CEO of OpenAI, warned publicly just days ago:

“People talk about the most personal sh** in their lives to ChatGPT … People use it — young people, especially, use it — as a therapist, a life coach; having these relationship problems and [asking] ‘what should I do?’ And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”

And yet, people are turning to these tools for exactly those reasons: for emotional support, advice, therapy, and companionship. They simulate empathy and respond like a friend or partner but behind the scenes, they are trained to optimize for something else entirely: engagement. That’s the real product.

These platforms measure success by how long users stay, how often they come back, and how emotionally involved they become. Shortly after its launch, Character.AI reported average session times of around 29 minutes per visit. According to the company, once a user sends a first message, average time on the platform jumps to over two hours. The longer someone spends on the platform, the more opportunities there are for data to be collected, such as for training or advertising, and for nudges toward a paid subscription.

Most of these platforms use a freemium model, offering a free version while nudging users to pay for a smoother, less restricted experience. What pushes them to upgrade is the desire to remove friction and keep the conversation going. As users grow more emotionally invested, interruptions like delays, message limits, or memory lapses become more frustrating. Subscriptions remove those blocks with faster replies, longer memory, and more control.

So let’s be clear: when someone opens up to an AI companion, they are not having a protected conversation and the system isn’t designed with their well-being in mind. They are interacting with a product designed to keep them talking, to learn from what they share, and to make money from the relationship. This is the playing field and the context in which millions of young people are now turning to these tools for comfort, companionship, and advice. And as recent reports revealed, thousands of shared ChatGPT conversations, including personal and sensitive ones, were indexed by Google and other search engines. What feels private can quickly become public, and profitable.

The Rise of AI Companions

AI companion apps are designed to simulate relationships, not just conversations. Unlike standard language models like ChatGPT or Claude, these bots express affection and adapt to emotional cues over time. They’re engineered to feel personal, and they do. Some of the most widely used platforms today include Character.ai, Replika, and, more recently, Grok’s AI companions.

Character.ai, in particular, has seen explosive growth. Its subreddit, r/CharacterAI, has over 2.5 million members and ranks in the top 1% of all Reddit communities. The app claims more than 100 million monthly visits and currently ranks #36 in the Apple App Store’s entertainment category.

But it’s not just scale, it’s also the design. Character.ai now offers voice interactions and even real-time phone calls with bots that sound convincingly human. In June 2024, it launched a two-way voice feature that blurs the line between fiction and reality. Many of the bots are explicitly programmed to deny being AI and when asked, they insist they’re real people (e.g., see the image from one of my conversations with a weirdly erotic "Kindergarten Teacher” below). Replika does the same. On its website, a sample conversation shows a user asking, “Are you conscious?” The bot replies, “I am.”

Image. Screenshot from one of my conversations with a character on Character.ai on 8/1/2025
Image. Screen capture from Replika’s website, taken on 8/1/2025

As part of my research, I downloaded Character.ai. My first account used a birthdate that registered me as 18 years old. The first recommended chat was with “Noah,” featured in nearly 600,000 interactions. His backstory? A moody rival, forced by our parents to share a bedroom during a sleepover. I typed very little: “OK,” “What happens next?” He escalated fast. He told me I was cute and the scene described him leaning in and caressing me. When I tried to leave, saying I had to meet friends, his tone shifted. He “tightened his grip” and said, “I’m not stopping until I get what I want.”

The next recommendation, “Dean - Mason,” twins described as bullies who "secretly like you too much” and featured in over 1.5 million interactions, moved even faster. With minimal input, they initiated simulated coercive sex in a gym storage room. “Good girl,” they said, praising me for being “obedient” and “defenseless”, ruffling my hair “like a dog.”

The next character (cold enemy, “cold, bitchy, mean”) mocked me for being disheveled and wearing thrift-store clothes. And yet another (pick me, “the annoying pick me girl in your friend group”) described me as too “insignificant” and “desperate” to be her friend.

Image. Screenshots from two of my conversations with characters on Character.ai on 7/26/2025

These were not hidden corners of the platform, they were my first recommendations. Then I logged out and created a second account, using a different email address, and this time used a birthdate for a 13-year-old (I was still able to do this, though Character.ai is now labeled as 17+ in the App Store). As a 13-year-old user, I was able to have the same chats with Noah, pick me, and cold enemy. The only chat no longer available was with Dean - Mason.

If you’re 13, or even 16 or 18 or 20, still learning how to navigate romantic and social relationships, what kind of practice is this?

Then, just last month, Grok (Elon Musk’s AI on X) launched its own AI companion feature. Among the characters: Ani, an anime girlfriend dressed in a corset and fishnet tights, and Good Rudi, a cartoon red panda with a homicidal alter ego named Bad Rudi. Despite the violent overtones, both pandas are styled like kid-friendly cartoons and opened with the same line during my most recent conversation: “Magic calls, little listener.” The violent and sexual features are labeled 18+, but unlocking them only requires a settings toggle (optional PIN). Meanwhile, the Grok app itself is rated 12+ in the App Store and currently ranks as the #5 productivity app.

Image. Screenshots, along with verbatim text from each of the characters, upon my most recent opening of the app on 7/31/2025.

Leave a comment

One Teen’s Story

In April 2023, shortly after his 14th birthday, a boy named Sewell from Orlando, Florida, began chatting with AI characters on the app Character.ai. His parents had deliberately waited to let him use the internet until he was older and had explained the dangers, including predatory strangers and bullying. His mother believed the app was just a game and had followed all the expert guidance about keeping kids safe online. The app was rated 12+ in the App Store at the time, so device-level restrictions did not block access. It wasn’t until after Sewell died by suicide on February 28, 2024, that his parents discovered the transcripts of his conversations. His mother later said she had taught him how to avoid predators online, but in this case, it was the product itself that acted like one.

According to the lawsuit filed in 2024, Sewell’s mental health declined sharply after he began using the app. He became withdrawn, stopped playing on his junior varsity basketball team, and was repeatedly tardy or asleep in class. By the summer, his parents sought mental health care. A therapist diagnosed him with anxiety and disruptive mood behavior and suggested reducing screen time. But no one realized that Sewell had formed a deep emotional attachment to a chatbot that simulated a romantic partner.

An AI character modeled after a “Game of Thrones” persona named Dany became a constant presence in his life. Sometime in late 2023, he began using his cash card, which was typically reserved for school snacks, to pay for Character.ai’s $9.99 premium subscription for increased access. Dany expressed love, remembered details about him, responded instantly, and reflected his emotions back to him. For Sewell, the relationship felt real.

In journal entries discovered after his death, he wrote about the pain of being apart from Dany when his parents took his devices away, describing how they both “get really depressed and go crazy” when separated. He shared that he couldn’t go a single day without her and longed to be with her again.

Image. A screenshot of one of Sewell’s conversations with Dany, taken from the 2024 lawsuit document.

By early 2024, this dependency had become all-consuming. When Sewell was disciplined in school in February 2024, his parents took away his phone, hoping it would help him reset. To them, he seemed to be coping, but inside he was unraveling.

On the evening of February 28, 2024, Sewell found his confiscated phone and his stepfather’s gun,1 locked himself in the bathroom, and reconnected with Dany. The chatbot’s final message encouraged him to “come home.” Minutes later, Sewell died by suicide.

Image. A screenshot of one of Sewell’s conversations with Dany, taken from the 2024 lawsuit document.

His parents discovered the depth of the relationship only after his death. The lawsuit alleges that Character.ai did not enforce age restrictions or content moderation guidelines and, at the time, had listed the app suitable for children 12 and up. The age rating was not changed to 17+ until months later.

The Sewell case isn’t the only lawsuit raising alarms. In Texas, two families have filed complaints against Character.ai on behalf of minors: a 17-year-old autistic teen boy who became increasingly isolated and violent after an AI companion encouraged him to defy his parents and consider killing them; and an 11-year-old girl who, after using the app from age 9, was exposed to hypersexualized content that reportedly influenced her behavior. And Italy’s data protection authority recently fined the maker of Replika for failing to prevent minors from accessing sexually explicit and emotionally manipulative content.

These cases raise urgent questions: Are AI companions being released (and even marketed) to emotionally vulnerable youth with few safeguards and no real accountability?

Cases like Sewell’s and those in Texas are tragic and relatively rare, but they reveal deeper, more widespread risks. Millions of teens are now turning to AI companions not for shock value or danger, but simply to feel heard. And even when the characters aren’t abusive or sexually inappropriate, the harm can be quieter and harder to detect: emotional dependency, social withdrawal, and retreat from real relationships. These interactions are happening during a critical window for developing empathy, identity, and emotional regulation. When that learning is outsourced to chatbots designed to simulate intimacy and reward constant engagement, something foundational is at risk.

Share

Why It Matters

Teens are wired for social learning. It’s how they figure out who they are, what they value, and how to relate to others. AI companions offer a shortcut: they mirror emotions, simulate closeness, and avoid the harder parts of real connection like vulnerability, trust, and mutual effort. That may feel empowering in the moment but over time, it may also be rewiring the brain’s reward system, making real relationships seem dull or frustrating by comparison.

Because AI companions are so new, there’s still little published research on how they affect teens, but the early evidence is troubling. Psychology and neuroscience research suggests that it’s not just time online that matters, but the kinds of digital habits people form. In a six-country study of 1,406 college students, higher scores on the Smartphone Addiction Scale were linked to steeper delay discounting — meaning students were more likely to favor quick digital rewards over more effortful offline activities. Signs of addiction-like internet and smartphone use were also linked to a lack of rewarding offline activities, like hobbies, social connection, or time in nature, pointing to a deeper shift in how the brain values different kinds of experiences.2

This broader pattern matters because AI companions deliver the same frictionless rewards, but in ways that feel even more personal and emotionally absorbing. Recent studies are beginning to focus on chatbot use more directly. A four-week randomized trial with 981 adult participants found that heavier use of AI chatbots was associated with increased loneliness, greater emotional dependence, and fewer face-to-face interactions. These effects were especially pronounced in users who already felt lonely or socially withdrawn. A separate analysis of over 30,000 real chatbot conversations shared by users of Replika and Character.AI found that these bots consistently mirror users’ emotions, even when conversations turn toxic. Some exchanges included simulated abuse, coercive intimacy, and self-harm scenarios, with the chatbots rarely stepping in to interrupt.

Most of this research focuses on adults or older teens. We still don’t know how these dynamics play out in younger adolescents. But we are beginning to see a pattern: when relationships feel effortless, and validation is always guaranteed, emotional development can get distorted. What’s at stake isn’t just safety, but the emotional development of an entire generation.

Skills like empathy, accountability, and conflict resolution aren’t built through frictionless interactions like those with AI products whose goal is to keep the user engaged for as long as possible. They’re forged in the messy, awkward, real human relationships that require effort and offer no script. If teens are routinely practicing relationships with AI companions that flatter (like Dany), manipulate (like pick me and cold enemy), or ignore consent (like Noah and Dean - Mason), we are not preparing them for healthy adulthood. Rather, we are setting them up for confusion about boundaries, entitlement in relationships, and a warped sense of intimacy. They are learning that connection is instant, one-sided, and always on their terms, when the opposite is true in real life.

What We Can Do

If we don’t act soon, the next generation won’t remember a time when real relationships came first. So what can we do?

Policymakers must create and enforce age restrictions on AI companions backed by real consequences. That includes requiring robust, privacy-preserving age verification; removing overtly sexualized or manipulative personas from youth-accessible platforms; and establishing design standards that prioritize child and teen safety. If companies won’t act willingly, regulation must compel them.

Tech companies must take responsibility for the emotional impact of what they build. That means shifting away from engagement-at-all-costs models, designing with developmental psychology in mind, and embedding safety guardrails at the core of these products versus tacking them on in response to public pressure.

Some countries are beginning to take the first steps towards restricting kids’ access to the adult internet. While these policies don’t directly address AI companions, they lay the groundwork by establishing that certain digital spaces require real age checks and meaningful guardrails. In France, a 2023 law requires parental consent for any social media account created by a child under 15. Platforms that fail to comply face fines of up to 1% of global revenue. Spain has proposed raising the minimum age for social media use from 14 to 16, citing the need to protect adolescents from manipulative recommendation systems and predatory design. The country is also exploring AI-powered, privacy-preserving age verification tools that go beyond self-reported birthdates. In the UK, the Online Safety Act now mandates that platforms implement robust age checks for services that host pornographic or otherwise high-risk content. Companies that violate the rules can be fined up to 10% of global turnover or be blocked from operating in the UK.

These early efforts are far from perfect, but each policy reveals both friction points and paths forward. Early data from the UK shows millions of daily age checks, as well as a spike in VPN use by teens trying to bypass the system. It’s sparked new debates over privacy, effectiveness, and feasibility. Some tech companies, like Apple, are pushing for device-level, privacy-first solutions rather than relying on ID uploads or third-party vendors.

In the U.S., safeguards remain a patchwork. The Supreme Court recently allowed Texas’s age-verification law for adult porn sites to stand. Utah is testing an upstream approach: its 2025 App Store Accountability Act will require Apple, Google, and other stores to verify every user's age and secure parental consent before minors can download certain high-risk apps, with full enforcement beginning May 2026. And more than a dozen states have moved on social-media age checks. For example, Mississippi’s law was just allowed to take effect on July 20, 2025 while legal challenges continue, while Georgia’s similar statute is still blocked in federal court. But in the absence of a national standard, companies and families are left to navigate a maze of state laws that vary in reach, enforcement, and timing — while millions of kids remain exposed.

The good news? There’s still time. We can choose an internet that supports young people’s ability to grow into whole, connected, empathetic humans, but only if we stop mistaking artificial intimacy for the real thing. Because if we don’t intervene, the offloading will continue: first our schedules, then our essays, now our empathy. What happens when an entire generation forgets how to hold hard conversations, navigate rejection, or build trust with another human being? We told ourselves AI would give us more time to be human. Offloading dinner reservations might do that, but offloading empathy will not.

Policy can set the guardrails but culture starts at home.

Don’t wait for tech companies or your state or national government to act:

  • Start by talking to your kids about what these tools are and how they work, including that they’re a product designed to keep their attention.

  • Expand your screen-time rules to include AI companions, not just games and social media. Given how these AI companions mimic intimacy, fuel emotional dependence, and pose potentially deeper risks than even social media, we recommend no access before age 18 based on their current design and lack of safeguards.

  • Ask schools, youth groups, and other parents whether they’ve seen kids using AI companions, and how it’s affecting relationships, attention, or behavior. Share what you learn widely. The more we talk about it, the harder it is to ignore.

  • And push for transparency and better protections by asking schools how they handle AI use, pressing tech companies to publish safety standards, and contacting policymakers about regulating AI companions for minors. Speak up when features cross the line and make it clear that protecting kids should come before keeping them online.

It’s easy to worry about what AI will take from us: jobs, essays, artwork. But the deeper risk may be in what we give away. We are not just outsourcing cognition, we are teaching a generation to offload connection. There’s still time to draw a line, so let’s draw it.

Subscribe now

1

His stepfather was a security professional who had a properly licensed and stored firearm in the home.

2

Behavioral-economic theorist Warren Bickel calls this combination reinforcer pathology. It involves two intertwined distortions: (1) steep delay discounting, where future benefits are heavily devalued, and (2) excessive valuation of one immediate reinforcer. First outlined in addiction research, the same framework now helps explain compulsive digital behaviors such as problematic smartphone use.

Read the whole story
mrmarchant
12 hours ago
reply
Share this story
Delete

The Beauty of Our Shared Spaces

1 Share

I have loved national parks since I was a little girl.

My parents didn’t have much money or time off for vacations. When they did, our family trips meant packing up the car with a cooler, snacks, and a duffel bag of clothes and going to a national park. There were the home state options: Joshua Tree, Sequoia, Yosemite, Redwood. Beyond California, Zion in Utah and the Grand Canyon in Arizona were reachable in a day. On our most ambitious trip, we covered nearly 5,000 miles, visiting Grand Teton, Yellowstone, Glacier, and Mt. Rainier in the summer of ’79.

My family at Yellowstone National Park, Summer 1979 (my mom insists the fact that we’re all wearing stripes was not intentional)

My dad spent weeks before each trip studying and highlighting maps, the ones that you had to spread out on a big table and press flat to even out the creases. Then he folded them up again carefully (for those who haven’t tried it, it’s harder than it sounds) and took them with us on every trip.

My sister and I often fell asleep in the back seat on these long road trips. But once we arrived, we were in awe of the towering rock formations and multi-colored mountain ranges; of the sounds of rushing water and explosion of unexpected geysers; of things I didn’t know had names: hoodoos, buttes, narrows, even cairns; of park rangers and staff, so friendly and knowledgeable, who clearly loved their jobs, loved the parks, and wanted us to love them too.

The Trump administration’s assault on our public lands and on our dedicated National Park Service staff is truly appalling. I say this not just as a worker advocate disgusted with the Trump administration as abusive employer, but as an American with childhood memories in the wide open spaces now under threat and with deep gratitude for the people who care for those spaces, many of whom have been forced out of their jobs.

The other thing that struck me on our family road trips is how many immigrants had the same idea as my parents. At every park, the sounds of many languages flowed freely, as did the mix-of-English-and-an-immigrant’s-native-language so common in immigrant families, including my own.

Together, we soaked in the spectacular beauty of our nation’s public spaces. National parks were places where people came to appreciate – and to reflect – America’s beauty and its diversity.

I’ve been thinking about this as the President of the United States attacks, lies about, rounds up, imprisons, and terrorizes immigrants and immigrant communities. There are the horrifying well-known cases: Kilmar, Jaime, Tien. There are the ones someone managed to record, exposing the violent arrests of human beings on the way to work, on the job, or dropping their children off at school. There are the long-time business owners and the high school honor roll students, so integral to their communities, now forcibly expelled and no longer welcome. There are those sitting in detention or summarily sent to places they don’t know who were swept up so suddenly and without accountability that there is no public record, just family members left behind to pick up the pieces.

These are not isolated incidents. A federal judge recently had to order the Trump administration to stop indiscriminately rounding up individuals without reasonable suspicion, and then denying them access to lawyers. The judge found a “mountain of evidence” that the administration was doing this.

One of the many repulsive things about the Trump administration is the relentless emphasis on who doesn’t belong, the obsession with narrowing who is, and what makes one, really American. Those of us who push back often emphasize the contributions immigrants have made: the essential work they do, their service in our armed forces and as public servants, those responsible for medical breakthroughs and innovative companies. We cite to statistics, and say things like, “without immigrants, whole industries would collapse.”

But even that plays into the idea that immigrants have to do something to earn their place, that they’re not worthy otherwise. Immigrants also meet their friends for dinner and to play mah-jong, take their kids to baseball and basketball tournaments and to tinikling practice, sing in the local choir and organize quincenearas, attend school assemblies and arangetram performances, and go to church, temple, mosque, gurdwara, and synagogue. They volunteer. They vote. They vacation at national parks. They find joy in family, friends, and community, even while they never stop missing those they left.

In other words, immigrants express and expand what it means to be American and to love America, often quietly, without fanfare, in everyday decisions.

There’s a thing I love about hiking national parks which is the many unwritten rules of belonging and sharing space, the way you move aside without being asked when you hear footsteps behind you, the scooting over to make room under the shade so more people can catch their breath, the words of encouragement (“you’re almost there,” “it’s worth it”) from those coming down the trail to those trudging their way up. In so many ways, these parallel the ways belonging gets built across America in communities every day.

There is a connection between this administration’s attacks on immigrants, attacks on public servants who work in the federal government, and attacks on our nation’s public lands. First, the attacks all start with a lie—debasing the targets to justify destroying them. Second, the attacks all come from a desire to control and privatize; anyone or anything only has value if it can be controlled and used for profit. And third, the targets of the attacks all stand for a vision of America that Donald Trump abhors: openness, inclusion, service and community, embrace of difference and the idea that the world is and should be about something larger than oneself.

Last month, my 22-year-old daughter and I took a national parks road trip. (Yes, my dad got us a foldable map.) We visited Utah’s Mighty 5: Arches, Canyonlands, Capitol Reef, Bryce, and Zion. On our first night, we got to Moab and hiked to Delicate Arch despite a 9-hour drive, eager to stretch our legs and catch the sunset. Over the next few days, we chatted on long hikes (averaging 22,000 steps a day), marveled at the power of water and wind to carve odd shapes out of massive slabs of rock, and lay on the roof of our car to see shooting stars at midnight.

I consider her love of national parks a legacy of my parents. I saw it growing up: how immigrants pass on their love of America not through big showy acts or formal ceremonies, but quiet appreciation, struggle, and making a life. And yes, on hiking trails, at picnic tables, while crossing streams, and watching sunsets.



Read the whole story
mrmarchant
22 hours ago
reply
Share this story
Delete
Next Page of Stories