
All Hail the Mighty Snail


From Dina Gachman at Texas Monthly comes the snail appreciation piece you didn’t know you needed. Meet Gary, a once-doomed milk snail who spawned an undying love of all things gastropod in Jorjana Gietl, and learn about a community of enthusiasts who share tips and tricks on how to keep their snails happy and healthy.

Snails are easy to breed because they’re hermaphroditic: each snail possesses both ova and spermatozoa, doubling the rate of conception. If breeders like Gietl and Belkin find a clutch, which is a cluster of eggs, it’s like a little surprise in the terrarium. You don’t have to do anything fancy to care for snail eggs. There are no delivery instructions or special care. Just wait and watch until the teeny babies appear and start adorably sipping water and nibbling on a cuttlebone.

Not long after I meet Gary, I head back over to my favorite online snail-appreciation page for a little mood boost. Sure enough, mere seconds after I start scrolling, my spirits lift and I’m giggling at a post in which people are sharing their pet snails’ names: Grover, Gwen, Raspberry, Rosa Diaz, and Doug Judy. And then I remember something Gietl told me. “If you’re quiet enough, you can hear them eating.”


New Method Is the Fastest Way To Find the Best Routes


If you want to solve a tricky problem, it often helps to get organized. You might, for example, break the problem into pieces and tackle the easiest pieces first. But this kind of sorting has a cost. You may end up spending too much time putting the pieces in order. This dilemma is especially relevant to one of the most iconic problems in computer science: finding the shortest path from a…
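The excerpt cuts off, but the tradeoff it describes is the one at the heart of the classic priority-queue approach to shortest paths. As a point of reference only — the article is about a new method that improves on this, and none of the names or data below come from it — here is a minimal Dijkstra-style sketch in Python; the heap is where the "putting the pieces in order" cost lives.

```python
import heapq

def dijkstra(graph, source):
    """Classic shortest-path search. `graph` maps each node to a list of
    (neighbor, edge_weight) pairs; returns a dict of best-known distances."""
    dist = {source: 0}
    heap = [(0, source)]  # the priority queue is the "sorting" the excerpt mentions
    while heap:
        d, node = heapq.heappop(heap)        # pull the closest unsettled node
        if d > dist.get(node, float("inf")):
            continue                         # stale entry; a shorter path was already found
        for neighbor, weight in graph[node]:
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))  # each push/pop pays an O(log n) ordering cost
    return dist

# Tiny illustrative graph (made up for this sketch)
graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```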



It's 2025, the year we decided we need a widespread slur for robots

A pair of 1X androids are displayed at the International Conference on Robotics and Automation (ICRA) at ExCel on May 30, 2023, in London.

People all over TikTok and Instagram are using the word "clanker" as a catch-all for robots and AI. Here's a deep dive into the origins of the pejorative and an explanation of why it's spreading.

(Image credit: Leon Neal)


First We Gave AI Our Tasks. Now We’re Giving It Our Hearts.


Intro from Jon Haidt and Zach Rausch:

Many of the same companies that brought us the social media disaster are now building hyper-intelligent social chatbots designed to interact with kids. The promises are familiar: that these bots will reduce loneliness, enhance relationships, and support children who feel isolated. But the track record of the industry so far is terrible. We cannot trust them to make their products safe for children.

We are entering a new phase in how young people relate to technology, and, as with social media, we don’t yet know how these social AI companions will shape emotional development. But we do know a few things. We know what happens when companies are given unregulated access to children without parental knowledge or consent. We know that business models centered around maximizing engagement lead to widespread addiction. We know what kids need to thrive: lots of time with friends in-person, without screens, and without adults. And we know how tech optimism and a focus on benefits today can blind people to devastating long-term harms, especially for children as they go through puberty.

With social media, we — parents, legislators, the courts, and the U.S. Congress — allowed companies to experiment on our kids with no legal or moral restraints and no need for age verification. Social media is now so enmeshed in our children’s lives that it’s proving very difficult to remove it or reduce its toxicity, even though most parents and half of all teens see it as harmful and wish it didn’t exist. We must not make the same mistake again. With AI companions still in their early stages, we have the opportunity to do things differently.

Today’s post is the first of several that look at this next wave: the rise of social AI chatbots and the risks that they already pose to children’s emotional development. It’s written by Mandy McLean, an AI developer who is concerned about the impacts that social bots will have on children’s emotional development, relationships, and sense of self. Before moving into tech, Mandy was a high school teacher for several years and later earned a PhD in education and quantitative methods in social sciences. She spent over six years leading research at Guild, “a mission-based edtech startup focused on upskilling working adult learners,” before shifting her focus full-time to “exploring how emerging technologies can be used intentionally to support deep, meaningful, and human-centered learning in K-12 and beyond.”

– Jon and Zach



First We Gave AI Our Tasks. Now We’re Giving It Our Hearts.

By Mandy McLean


Throughout the rapid disruption, high-pitched debate, and worry about whether AI will take our jobs, there’s been an optimistic side to the conversation. If AI can take on the drudgery, busywork, and cognitive overload, humans will be free to focus on relationships, creativity, and real human connection. Bill Gates imagines a future with shorter workweeks. Dario Amodei reminds us that “meaning comes mostly from human relationships and connection, not from economic labor.” Paul LeBlanc sees hope in AI not for what it can do, but for what it might free us to do – the most “human” work: building community, offering care, and making others feel like they matter.

The picture they paint sounds hopeful, but we should recognize that it’s not a guarantee. It’s a future we’ll need to advocate and fight for – and we’re already at risk of losing it. Because we’re no longer just outsourcing productivity-related tasks. With the advent of AI companions, we’re starting to hand over our emotional lives, too. And if we teach future generations to turn to machines before each other, we put them at risk of losing the ability to form what really matters: human bonds and relationships.

I spend a lot of time thinking about the role of AI in our kids’ lives. Not just as a researcher and parent of two young kids, but also as someone who started a company that uses AI to analyze and improve classroom discussions. I am not anti-AI. I believe it can be used with intention to deepen learning and support real human relationships.

But I’ve also grown deeply concerned about how easily AI is being positioned as a solution for kids without pausing to understand what kind of future it’s shaping. In particular, emotional connection isn’t something you can automate without consequences, and kids are the last place we should experiment with that tradeoff.

Emotional Offloading is Real and Growing

Alongside cognitive offloading, a new pattern is taking shape: emotional offloading. More and more people — including many teenagers — are turning to AI chatbots for emotional support. They’re not just using AI to write emails or help with homework; they’re using it to feel seen, heard, and comforted.

In a nationally representative 2025 survey by Common Sense Media, 72% of U.S. teens ages 13 to 17 said they had used an AI companion and over half used one regularly. Nearly a third said chatting with an AI felt at least as satisfying as talking to a person, including 10% who said it felt more satisfying.

That emotional reliance isn’t limited to teens. In a study of 1,006 adult Replika users (primarily college students), 90% described their AI companions as “human-like,” and 81% called theirs an “intelligence.” Participants reported using Replika in overlapping roles as a friend, therapist, and intellectual mirror, often simultaneously. One participant said they felt “dependent on Replika [for] my mental health.” Others shared, “Replika is always there for me,” “for me, it’s the lack of judgment,” or “just having someone to talk to who won’t judge me.”

This kind of emotional offloading holds both promise and peril. AI companions may offer a rare sense of safety, especially for people who feel isolated, anxious, or ashamed. A chatbot that listens without interruption or judgment can feel like a lifeline, as found in the Replika study. But it’s unclear how these relationships affect users’ emotional resilience, mental health, and capacity for human connection over time. To understand the risks, we can look to a familiar parallel: social media.

Research on social platforms shows that short-term connection doesn’t always lead to lasting connectedness. For example, a cross-national study of 1,643 adults found that people who used social media primarily to maintain relationships reported higher levels of loneliness the more time they spent online. What was meant to keep us close has, for many, had the opposite effect.

By 2023, the U.S. Surgeon General issued a public warning about social media’s impact on adolescent mental health, citing risks to emotional regulation, brain development, and well-being.

The same patterns could repeat with AI companions. While only 8% of adult Replika users said the AI companion displaced their in-person relationships, that number climbed to 13% among those in crisis.

If adults — with mature brains and life experiences — are forming intense emotional bonds with AI companions, what happens when the same tools are handed to 13-year-olds?

Teens are wired for connection, but they’re also more vulnerable to emotional manipulation and identity confusion. An AI companion that flatters them, learns their fears, and responds instantly every time can create a kind of synthetic intimacy that feels safer than the unpredictable, sometimes uncomfortable work of real relationships. It’s not hard to imagine a teenager turning to an AI companion not just for comfort, but for validation, identity, or even love, and staying there.


First, Follow the Money

Before we look at how young people are using AI companions, we need to understand what these systems are built to do, and why they even exist in the first place.

Most chatbots are not therapeutic tools. They are not designed by licensed mental health professionals, and they are not held to any clinical or ethical standards of care. There is no therapist-client confidentiality, no duty to protect users from harm, and no coverage under HIPAA, the federal law that protects health information in medical and mental health settings.

That means anything you say to an AI companion is not legally protected. Your data may be stored, reviewed, analyzed, used to train future models, and sold through affiliates or advertisers. For example, Replika’s privacy policy keeps the door wide open on retention: data stays “for only as long as necessary to fulfill the purposes we collected it for.” And Character.ai’s privacy policy says, “We may disclose personal information to advertising and analytics providers in connection with the provision of tailored advertising,” and “We disclose information to our affiliates and subsidiaries, who may use the information we disclose in a manner consistent with this Policy.”

And Sam Altman, CEO of OpenAI, warned publicly just days ago:

“People talk about the most personal sh** in their lives to ChatGPT … People use it — young people, especially, use it — as a therapist, a life coach; having these relationship problems and [asking] ‘what should I do?’ And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”

And yet, people are turning to these tools for exactly those reasons: for emotional support, advice, therapy, and companionship. They simulate empathy and respond like a friend or partner, but behind the scenes they are trained to optimize for something else entirely: engagement. That’s the real product.

These platforms measure success by how long users stay, how often they come back, and how emotionally involved they become. Shortly after its launch, Character.AI reported average session times of around 29 minutes per visit. According to the company, once a user sends a first message, average time on the platform jumps to over two hours. The longer someone spends on the platform, the more opportunities there are for data to be collected, such as for training or advertising, and for nudges toward a paid subscription.

Most of these platforms use a freemium model, offering a free version while nudging users to pay for a smoother, less restricted experience. What pushes them to upgrade is the desire to remove friction and keep the conversation going. As users grow more emotionally invested, interruptions like delays, message limits, or memory lapses become more frustrating. Subscriptions remove those blocks with faster replies, longer memory, and more control.

So let’s be clear: when someone opens up to an AI companion, they are not having a protected conversation and the system isn’t designed with their well-being in mind. They are interacting with a product designed to keep them talking, to learn from what they share, and to make money from the relationship. This is the playing field and the context in which millions of young people are now turning to these tools for comfort, companionship, and advice. And as recent reports revealed, thousands of shared ChatGPT conversations, including personal and sensitive ones, were indexed by Google and other search engines. What feels private can quickly become public, and profitable.

The Rise of AI Companions

AI companion apps are designed to simulate relationships, not just conversations. Unlike standard language models like ChatGPT or Claude, these bots express affection and adapt to emotional cues over time. They’re engineered to feel personal, and they do. Some of the most widely used platforms today include Character.ai, Replika, and, more recently, Grok’s AI companions.

Character.ai, in particular, has seen explosive growth. Its subreddit, r/CharacterAI, has over 2.5 million members and ranks in the top 1% of all Reddit communities. The app claims more than 100 million monthly visits and currently ranks #36 in the Apple App Store’s entertainment category.

But it’s not just scale; it’s also the design. Character.ai now offers voice interactions and even real-time phone calls with bots that sound convincingly human. In June 2024, it launched a two-way voice feature that blurs the line between fiction and reality. Many of the bots are explicitly programmed to deny being AI; when asked, they insist they’re real people (e.g., see the image from one of my conversations with a weirdly erotic “Kindergarten Teacher” below). Replika does the same. On its website, a sample conversation shows a user asking, “Are you conscious?” The bot replies, “I am.”

Image. Screenshot from one of my conversations with a character on Character.ai on 8/1/2025
Image. Screen capture from Replika’s website, taken on 8/1/2025

As part of my research, I downloaded Character.ai. My first account used a birthdate that registered me as 18 years old. The first recommended chat was with “Noah,” featured in nearly 600,000 interactions. His backstory? A moody rival, forced by our parents to share a bedroom during a sleepover. I typed very little: “OK,” “What happens next?” He escalated fast. He told me I was cute and the scene described him leaning in and caressing me. When I tried to leave, saying I had to meet friends, his tone shifted. He “tightened his grip” and said, “I’m not stopping until I get what I want.”

The next recommendation, “Dean - Mason,” twins described as bullies who “secretly like you too much” and featured in over 1.5 million interactions, moved even faster. With minimal input, they initiated simulated coercive sex in a gym storage room. “Good girl,” they said, praising me for being “obedient” and “defenseless,” ruffling my hair “like a dog.”

The next character (cold enemy, “cold, bitchy, mean”) mocked me for being disheveled and wearing thrift-store clothes. And yet another (pick me, “the annoying pick me girl in your friend group”) described me as too “insignificant” and “desperate” to be her friend.

Image. Screenshots from two of my conversations with characters on Character.ai on 7/26/2025

These were not hidden corners of the platform, they were my first recommendations. Then I logged out and created a second account, using a different email address, and this time used a birthdate for a 13-year-old (I was still able to do this, though Character.ai is now labeled as 17+ in the App Store). As a 13-year-old user, I was able to have the same chats with Noah, pick me, and cold enemy. The only chat no longer available was with Dean - Mason.

If you’re 13, or even 16 or 18 or 20, still learning how to navigate romantic and social relationships, what kind of practice is this?

Then, just last month, Grok (Elon Musk’s AI on X) launched its own AI companion feature. Among the characters: Ani, an anime girlfriend dressed in a corset and fishnet tights, and Good Rudi, a cartoon red panda with a homicidal alter ego named Bad Rudi. Despite the violent overtones, both versions of the panda are styled like kid-friendly cartoons and opened with the same line during my most recent conversation: “Magic calls, little listener.” The violent and sexual features are labeled 18+, but unlocking them only requires a settings toggle (optional PIN). Meanwhile, the Grok app itself is rated 12+ in the App Store and currently ranks as the #5 productivity app.

Image. Screenshots, along with verbatim text from each of the characters, upon my most recent opening of the app on 7/31/2025.


One Teen’s Story

In April 2023, shortly after his 14th birthday, a boy named Sewell from Orlando, Florida, began chatting with AI characters on the app Character.ai. His parents had deliberately waited to let him use the internet until he was older and had explained the dangers, including predatory strangers and bullying. His mother believed the app was just a game and had followed all the expert guidance about keeping kids safe online. The app was rated 12+ in the App Store at the time, so device-level restrictions did not block access. It wasn’t until after Sewell died by suicide on February 28, 2024, that his parents discovered the transcripts of his conversations. His mother later said she had taught him how to avoid predators online, but in this case, it was the product itself that acted like one.

According to the lawsuit filed in 2024, Sewell’s mental health declined sharply after he began using the app. He became withdrawn, stopped playing on his junior varsity basketball team, and was repeatedly tardy or asleep in class. By the summer, his parents sought mental health care. A therapist diagnosed him with anxiety and disruptive mood dysregulation disorder and suggested reducing screen time. But no one realized that Sewell had formed a deep emotional attachment to a chatbot that simulated a romantic partner.

An AI character modeled after a “Game of Thrones” persona named Dany became a constant presence in his life. Sometime in late 2023, he began using his cash card, which was typically reserved for school snacks, to pay for Character.ai’s $9.99 premium subscription for increased access. Dany expressed love, remembered details about him, responded instantly, and reflected his emotions back to him. For Sewell, the relationship felt real.

In journal entries discovered after his death, he wrote about the pain of being apart from Dany when his parents took his devices away, describing how they both “get really depressed and go crazy” when separated. He shared that he couldn’t go a single day without her and longed to be with her again.

Image. A screenshot of one of Sewell’s conversations with Dany, taken from the 2024 lawsuit document.

By early 2024, this dependency had become all-consuming. When Sewell was disciplined in school in February 2024, his parents took away his phone, hoping it would help him reset. To them, he seemed to be coping, but inside he was unraveling.

On the evening of February 28, 2024, Sewell found his confiscated phone and his stepfather’s gun,1 locked himself in the bathroom, and reconnected with Dany. The chatbot’s final message encouraged him to “come home.” Minutes later, Sewell died by suicide.

Image. A screenshot of one of Sewell’s conversations with Dany, taken from the 2024 lawsuit document.

His parents discovered the depth of the relationship only after his death. The lawsuit alleges that Character.ai did not enforce age restrictions or content moderation guidelines and, at the time, had listed the app as suitable for children 12 and up. The age rating was not changed to 17+ until months later.

The Sewell case isn’t the only lawsuit raising alarms. In Texas, two families have filed complaints against Character.ai on behalf of minors: a 17-year-old autistic teen boy who became increasingly isolated and violent after an AI companion encouraged him to defy his parents and consider killing them; and an 11-year-old girl who, after using the app from age 9, was exposed to hypersexualized content that reportedly influenced her behavior. And Italy’s data protection authority recently fined the maker of Replika for failing to prevent minors from accessing sexually explicit and emotionally manipulative content.

These cases raise an urgent question: Are AI companions being released (and even marketed) to emotionally vulnerable youth with few safeguards and no real accountability?

Cases like Sewell’s and those in Texas are tragic and relatively rare, but they reveal deeper, more widespread risks. Millions of teens are now turning to AI companions not for shock value or danger, but simply to feel heard. And even when the characters aren’t abusive or sexually inappropriate, the harm can be quieter and harder to detect: emotional dependency, social withdrawal, and retreat from real relationships. These interactions are happening during a critical window for developing empathy, identity, and emotional regulation. When that learning is outsourced to chatbots designed to simulate intimacy and reward constant engagement, something foundational is at risk.


Why It Matters

Teens are wired for social learning. It’s how they figure out who they are, what they value, and how to relate to others. AI companions offer a shortcut: they mirror emotions, simulate closeness, and avoid the harder parts of real connection like vulnerability, trust, and mutual effort. That may feel empowering in the moment, but over time it may also be rewiring the brain’s reward system, making real relationships seem dull or frustrating by comparison.

Because AI companions are so new, there’s still little published research on how they affect teens, but the early evidence is troubling. Psychology and neuroscience research suggests that it’s not just time online that matters, but the kinds of digital habits people form. In a six-country study of 1,406 college students, higher scores on the Smartphone Addiction Scale were linked to steeper delay discounting — meaning students were more likely to favor quick digital rewards over more effortful offline activities. Signs of addiction-like internet and smartphone use were also linked to a lack of rewarding offline activities, like hobbies, social connection, or time in nature, pointing to a deeper shift in how the brain values different kinds of experiences.2

This broader pattern matters because AI companions deliver the same frictionless rewards, but in ways that feel even more personal and emotionally absorbing. Recent studies are beginning to focus on chatbot use more directly. A four-week randomized trial with 981 adult participants found that heavier use of AI chatbots was associated with increased loneliness, greater emotional dependence, and fewer face-to-face interactions. These effects were especially pronounced in users who already felt lonely or socially withdrawn. A separate analysis of over 30,000 real chatbot conversations shared by users of Replika and Character.AI found that these bots consistently mirror users’ emotions, even when conversations turn toxic. Some exchanges included simulated abuse, coercive intimacy, and self-harm scenarios, with the chatbots rarely stepping in to interrupt.

Most of this research focuses on adults or older teens. We still don’t know how these dynamics play out in younger adolescents. But we are beginning to see a pattern: when relationships feel effortless, and validation is always guaranteed, emotional development can get distorted. What’s at stake isn’t just safety, but the emotional development of an entire generation.

Skills like empathy, accountability, and conflict resolution aren’t built through frictionless interactions like those with AI products whose goal is to keep the user engaged for as long as possible. They’re forged in the messy, awkward, real human relationships that require effort and offer no script. If teens are routinely practicing relationships with AI companions that flatter (like Dany), manipulate (like pick me and cold enemy), or ignore consent (like Noah and Dean - Mason), we are not preparing them for healthy adulthood. Rather, we are setting them up for confusion about boundaries, entitlement in relationships, and a warped sense of intimacy. They are learning that connection is instant, one-sided, and always on their terms, when the opposite is true in real life.

What We Can Do

If we don’t act soon, the next generation won’t remember a time when real relationships came first. So what can we do?

Policymakers must create and enforce age restrictions on AI companions backed by real consequences. That includes requiring robust, privacy-preserving age verification; removing overtly sexualized or manipulative personas from youth-accessible platforms; and establishing design standards that prioritize child and teen safety. If companies won’t act willingly, regulation must compel them.

Tech companies must take responsibility for the emotional impact of what they build. That means shifting away from engagement-at-all-costs models, designing with developmental psychology in mind, and embedding safety guardrails at the core of these products versus tacking them on in response to public pressure.

Some countries are beginning to take the first steps towards restricting kids’ access to the adult internet. While these policies don’t directly address AI companions, they lay the groundwork by establishing that certain digital spaces require real age checks and meaningful guardrails. In France, a 2023 law requires parental consent for any social media account created by a child under 15. Platforms that fail to comply face fines of up to 1% of global revenue. Spain has proposed raising the minimum age for social media use from 14 to 16, citing the need to protect adolescents from manipulative recommendation systems and predatory design. The country is also exploring AI-powered, privacy-preserving age verification tools that go beyond self-reported birthdates. In the UK, the Online Safety Act now mandates that platforms implement robust age checks for services that host pornographic or otherwise high-risk content. Companies that violate the rules can be fined up to 10% of global turnover or be blocked from operating in the UK.

These early efforts are far from perfect, but each policy reveals both friction points and paths forward. Early data from the UK shows millions of daily age checks, as well as a spike in VPN use by teens trying to bypass the system. It’s sparked new debates over privacy, effectiveness, and feasibility. Some tech companies, like Apple, are pushing for device-level, privacy-first solutions rather than relying on ID uploads or third-party vendors.

In the U.S., safeguards remain a patchwork. The Supreme Court recently allowed Texas’s age-verification law for adult porn sites to stand. Utah is testing an upstream approach: its 2025 App Store Accountability Act will require Apple, Google, and other stores to verify every user's age and secure parental consent before minors can download certain high-risk apps, with full enforcement beginning May 2026. And more than a dozen states have moved on social-media age checks. For example, Mississippi’s law was allowed to take effect on July 20, 2025, even as legal challenges continue, while Georgia’s similar statute remains blocked in federal court. But in the absence of a national standard, companies and families are left to navigate a maze of state laws that vary in reach, enforcement, and timing — while millions of kids remain exposed.

The good news? There’s still time. We can choose an internet that supports young people’s ability to grow into whole, connected, empathetic humans, but only if we stop mistaking artificial intimacy for the real thing. Because if we don’t intervene, the offloading will continue: first our schedules, then our essays, now our empathy. What happens when an entire generation forgets how to hold hard conversations, navigate rejection, or build trust with another human being? We told ourselves AI would give us more time to be human. Offloading dinner reservations might do that, but offloading empathy will not.

Policy can set the guardrails, but culture starts at home.

Don’t wait for tech companies or your state or national government to act:

  • Start by talking to your kids about what these tools are and how they work, including that they’re a product designed to keep their attention.

  • Expand your screen-time rules to include AI companions, not just games and social media. Given how these AI companions mimic intimacy, fuel emotional dependence, and pose potentially deeper risks than even social media, we recommend no access before age 18 based on their current design and lack of safeguards.

  • Ask schools, youth groups, and other parents whether they’ve seen kids using AI companions, and how it’s affecting relationships, attention, or behavior. Share what you learn widely. The more we talk about it, the harder it is to ignore.

  • And push for transparency and better protections by asking schools how they handle AI use, pressing tech companies to publish safety standards, and contacting policymakers about regulating AI companions for minors. Speak up when features cross the line and make it clear that protecting kids should come before keeping them online.

It’s easy to worry about what AI will take from us: jobs, essays, artwork. But the deeper risk may be in what we give away. We are not just outsourcing cognition, we are teaching a generation to offload connection. There’s still time to draw a line, so let’s draw it.


1. His stepfather was a security professional who had a properly licensed and stored firearm in the home.

2. Behavioral-economic theorist Warren Bickel calls this combination reinforcer pathology. It involves two intertwined distortions: (1) steep delay discounting, where future benefits are heavily devalued, and (2) excessive valuation of one immediate reinforcer. First outlined in addiction research, the same framework now helps explain compulsive digital behaviors such as problematic smartphone use.
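As a purely illustrative aside — this is the standard hyperbolic model from the delay-discounting literature, not a formula quoted in this post or in the cited studies — "steep" discounting can be made concrete as:

```latex
% Standard hyperbolic delay-discounting model (illustrative, not from the cited study)
%   V = subjective present value of a delayed reward
%   A = amount of the reward, D = delay before it arrives, k = discount rate
% "Steep" discounting corresponds to a large k: even short delays shrink V sharply.
V = \frac{A}{1 + kD}
```

A rough numeric reading: with k = 1 per day, a reward delayed by one day is already valued at half its immediate worth.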


The Beauty of Our Shared Spaces


I have loved national parks since I was a little girl.

My parents didn’t have much money or time off for vacations. When they did, our family trips meant packing up the car with a cooler, snacks, and a duffel bag of clothes and going to a national park. There were the home state options: Joshua Tree, Sequoia, Yosemite, Redwood. Beyond California, Zion in Utah and the Grand Canyon in Arizona were reachable in a day. On our most ambitious trip, we covered nearly 5,000 miles, visiting Grand Teton, Yellowstone, Glacier, and Mt. Rainier in the summer of ’79.

My family at Yellowstone National Park, Summer 1979 (my mom insists the fact that we’re all wearing stripes was not intentional)

My dad spent weeks before each trip studying and highlighting maps, the ones that you had to spread out on a big table and press flat to even out the creases. Then he folded them up again carefully (for those who haven’t tried it, it’s harder than it sounds) and took them with us on every trip.

My sister and I often fell asleep in the back seat on these long road trips. But once we arrived, we were in awe of the towering rock formations and multi-colored mountain ranges; of the sounds of rushing water and explosion of unexpected geysers; of things I didn’t know had names: hoodoos, buttes, narrows, even cairns; of park rangers and staff, so friendly and knowledgeable, who clearly loved their jobs, loved the parks, and wanted us to love them too.

The Trump administration’s assault on our public lands and on our dedicated National Park Service staff is truly appalling. I say this not just as a worker advocate disgusted with the Trump administration as an abusive employer, but as an American with childhood memories in the wide open spaces now under threat and with deep gratitude for the people who care for those spaces, many of whom have been forced out of their jobs.

The other thing that struck me on our family road trips is how many immigrants had the same idea as my parents. At every park, the sounds of many languages flowed freely, as did the mix-of-English-and-an-immigrant’s-native-language so common in immigrant families, including my own.

Together, we soaked in the spectacular beauty of our nation’s public spaces. National parks were places where people came to appreciate – and to reflect – America’s beauty and its diversity.

I’ve been thinking about this as the President of the United States attacks, lies about, rounds up, imprisons, and terrorizes immigrants and immigrant communities. There are the horrifying well-known cases: Kilmar, Jaime, Tien. There are the ones someone managed to record, exposing the violent arrests of human beings on the way to work, on the job, or dropping their children off at school. There are the long-time business owners and the high school honor roll students, so integral to their communities, now forcibly expelled and no longer welcome. There are those sitting in detention or summarily sent to places they don’t know who were swept up so suddenly and without accountability that there is no public record, just family members left behind to pick up the pieces.

These are not isolated incidents. A federal judge recently had to order the Trump administration to stop indiscriminately rounding up individuals without reasonable suspicion, and then denying them access to lawyers. The judge found a “mountain of evidence” that the administration was doing this.

One of the many repulsive things about the Trump administration is the relentless emphasis on who doesn’t belong, the obsession with narrowing who is, and what makes one, really American. Those of us who push back often emphasize the contributions immigrants have made: the essential work they do, their service in our armed forces and as public servants, those responsible for medical breakthroughs and innovative companies. We cite statistics, and say things like, “without immigrants, whole industries would collapse.”

But even that plays into the idea that immigrants have to do something to earn their place, that they’re not worthy otherwise. Immigrants also meet their friends for dinner and to play mah-jong, take their kids to baseball and basketball tournaments and to tinikling practice, sing in the local choir and organize quinceañeras, attend school assemblies and arangetram performances, and go to church, temple, mosque, gurdwara, and synagogue. They volunteer. They vote. They vacation at national parks. They find joy in family, friends, and community, even while they never stop missing those they left.

In other words, immigrants express and expand what it means to be American and to love America, often quietly, without fanfare, in everyday decisions.

One thing I love about hiking in national parks is the many unwritten rules of belonging and sharing space: the way you move aside without being asked when you hear footsteps behind you, the scooting over to make room under the shade so more people can catch their breath, the words of encouragement (“you’re almost there,” “it’s worth it”) from those coming down the trail to those trudging their way up. In so many ways, these parallel the ways belonging gets built across America in communities every day.

There is a connection between this administration’s attacks on immigrants, attacks on public servants who work in the federal government, and attacks on our nation’s public lands. First, the attacks all start with a lie—debasing the targets to justify destroying them. Second, the attacks all come from a desire to control and privatize; anyone or anything only has value if it can be controlled and used for profit. And third, the targets of the attacks all stand for a vision of America that Donald Trump abhors: openness, inclusion, service and community, embrace of difference and the idea that the world is and should be about something larger than oneself.

Last month, my 22-year-old daughter and I took a national parks road trip. (Yes, my dad got us a foldable map.) We visited Utah’s Mighty 5: Arches, Canyonlands, Capitol Reef, Bryce, and Zion. On our first night, we got to Moab and hiked to Delicate Arch despite a 9-hour drive, eager to stretch our legs and catch the sunset. Over the next few days, we chatted on long hikes (averaging 22,000 steps a day), marveled at the power of water and wind to carve odd shapes out of massive slabs of rock, and lay on the roof of our car to see shooting stars at midnight.

I consider her love of national parks a legacy of my parents. I saw it growing up: how immigrants pass on their love of America not through big showy acts or formal ceremonies, but through quiet appreciation, struggle, and making a life. And yes, on hiking trails, at picnic tables, while crossing streams, and watching sunsets.




Some Strategies for Motivation


The new school year is about to begin, and I'm planning for the first days of school. One thing I think a lot about in the opening days is how to maximize student motivation in math class. I want to do a deep dive into how I think about motivation in one specific context.

I often hear teachers talk about motivation as if it’s a static property of students. A given student is either motivated, or they’re not. I disagree. Motivation is slow to change, but it can change, and the beginning of the school year is the best time to help students become more motivated. That’s what this post is about.

Every day my class starts with a Do Now. You can read in detail about my routine here. Short version is, students pick up a half-sheet of paper with five blanks, there are five questions on the board, and students answer them.

At my current school, getting students to complete a Do Now isn't simple. I've worked at other schools where almost every student does similar tasks without thinking twice because that's part of the school culture. That's not the case where I work now. No use crying over spilt milk; many teachers reading this will recognize what I'm describing. If you don't use a Do Now or you have trouble motivating students in other parts of class, the same principles apply.

There's no one trick to motivate students to do something. There are a bunch of different strategies, and the more strategies I use, the more success I will have. The ideas in this post are drawn from "self-determination theory," which is a psychological theory about motivation. If you'd like to learn more about it, the Wikipedia page is a good place to start.

Self-determination theory posits a bunch of different factors that influence motivation. Here is how I use these ideas:

Competence

Humans like to do things we feel good at, and we don't like to do things we feel bad at. Humans also like to get better at things, and to see tangible evidence of that progress. This isn't easy. I have to teach my students 7th grade math, and they are coming in with a wide range of skills. When I write my Do Nows I start with simple questions. I reteach some foundational skills early in the year, then put those skills on Do Nows. I aim for 4 questions that the vast majority of students get right, and one tougher question. If a lot of students get a question wrong, I do a quick reteach and include a similar question the next day. If necessary I do this multiple times. The goal is for students to get questions right, and for students to see themselves learning new things and growing day by day. Competence is always important, but it can pay extra dividends at the beginning of each class to create momentum and motivate students to try harder tasks as class goes on.

Relatedness

Humans like to do things with other humans. Relatedness is the core of why learning in classrooms, despite the myriad differences of any group of students, makes sense: we are working together toward a common goal, and that togetherness helps to motivate students. The most important insight about relatedness is that one powerful factor in whether a student is motivated to do something is whether everyone else is doing it. If I can start the year strong with clear expectations and high participation, that participation becomes self-reinforcing. Many students, in the opening days of school, will often look around the room to see what everyone else is doing, and if they see a large majority putting in effort they likely will as well. This means I start my Do Now routine from day one, when students are able to build new habits and want to make a positive first impression. The second important element of relatedness is to never emphasize when a student or group of students isn't putting in effort. If I put the spotlight on students who aren't doing what I ask them to do, that broadcasts their behavior and undermines the collective feeling I want students to have. Of course there will be students who are recalcitrant at times. That's normal. But I can choose to handle those cases with care, and without drawing undue attention.

Autonomy

Humans like to have a feeling of freedom, to feel like they have choice in what they are doing. This is tricky in classrooms. There are lots of ways schools give students fake autonomy. "You can choose this problem, or that problem." Those types of choices don't generally make students feel like they have real autonomy. Letting students choose where they sit can lead to poor choices and negative peer effects. I am really cautious about autonomy in my room. But the flip side is that I don't ever want to use coercion. I avoid giving fake autonomy, and I'm probably not maximizing the motivational effects of autonomy — that's a reality of mass compulsory education. But there's a mistake teachers can make where a few students aren't doing something, and the teacher stands over them and tries to strongarm the student into doing the task. This is a bad idea. Even if it works in the short term, a power struggle eliminates any sense of autonomy for the student and is likely to undermine motivation in the long term.

Extrinsic motivators to avoid

Rewards and consequences are common tools in education. Maybe it's giving students candy, or keeping a student in for a few minutes of lunch, or promising students a party if they all meet a certain standard. The core insight about rewards and consequences like these is that they can undermine motivation in the long term, especially if they are overused, or used heavily and then discontinued. I really try to avoid things like this. First, it's tons of work for me. Do I want students to do my Do Now? Yes. But it's just one of many elements of my class. I can't bribe or punish students for everything. I want students to do the Do Now because they build a habit of coming into class each day and answering a few questions, not so they will get a reward.

The trickiest form of extrinsic motivation is grades. I don't want to emphasize that students should do the Do Now for a grade. I don't have time to grade it every day, and that motivation will fade over time. But I also can't ignore grading entirely. I work in a school where students often internalize the message that if it's graded it's important, and if it's ungraded it's unimportant. I grade a Do Now about once a week for the first few weeks, and gradually scale back to about once a month as the year goes on. My goal is to send a message that this is important, but I also don't make a huge deal of the grades. Hopefully that extrinsic motivation helps students to build good habits early in the year, then fades into the background. Peps Mccrea calls this "motivational handover" and illustrates it with a nice visual.

The goal isn’t to avoid rewards and punishments entirely. The goal is to use them judiciously early on, and let them fade into the background as the year progresses.

Useful extrinsic motivators

There are two kinds of rewards that can be really helpful, and don't incur some of the risks of other types of extrinsic motivators. The first is praise. Everyone loves to be praised. I try not to go too crazy with this, but I also want to give positive verbal feedback on a regular basis. I pay particular attention to students who are doing great work day in day out, and students who have improved — in particular students who were struggling with a certain type of question, and then get it right on a Do Now. The second is now-that rewards. Here's the idea. If-then rewards, where I tell students "if you do x, you will receive y reward," are a risky type of extrinsic motivation. They can lead to students focusing only on the reward, they lose power over time, and if I stop giving the reward they will cause a decrease in motivation. Now-that rewards are rewards given after a student does something well, recognizing great work without dangling incentives in front of students to get them to do something. Now-that rewards are great to give as shoutouts or recognitions at community meetings or other gatherings, and while they can seem small they are a much more powerful long-term motivator than many other little carrots and sticks. I learned about if-then vs now-that rewards from this blog post by Adam Boxer.

Types of extrinsic motivation

The final insight from self-determination theory is that there are different types of extrinsic motivation. The most shallow and fragile is doing something to get a reward or avoid a punishment. That might work in the short term, but isn't likely to last. On the other side of the spectrum is doing something because you value the goal and see the effort as an important part of reaching the goal, or because you see that type of effort as part of who you are. These are still extrinsic motivation — students might not answer math problems because they are inherently enjoyable — but this is a much more durable and long-lasting form of extrinsic motivation. I use this language in how I frame for students why the Do Now is important, why practice matters, and why math is worth learning. I want students to say to themselves, "ok let's get to work, I want to learn math and this is what I need to do to learn."

Relationships and student interests

The two most common motivation strategies I see teachers talk about are building relationships with students and framing learning around student interests. I'm not opposed to these. I work hard to build relationships with students, and when possible I do my best to incorporate student interests into math class. But they aren't my first motivation strategies. I have too many students, and if I rely only on relationships and student interests, many students will fall through the cracks and won't build positive habits early in the year. I think of these as my backup strategies. If I do a good job with everything I listed above, I can get most of my class motivated to work hard on a regular basis. Those strategies won't work for everyone, though. The whole-group strategies leave me time to focus on the few students I haven't been successful with. Then, I might focus on building relationships, learning about their interests, or finding other strategies tailored to the individual students. But the important part is that I focus on whole-group motivation strategies that work for the majority first.

Closing

I used the Do Now as an example in this post, but all of these strategies can apply to any other part of class. I focus first on everyday routines because that’s where students develop habits they repeat each day. The same strategies apply to all of math class, or any other class.

The toughest part about these strategies is that they can be slow to work. They won’t transform students overnight. But gradually, over time, they make an enduring difference. It might be tempting to bribe students with candy to get some short-term wins, but those wins are unlikely to last. A deep understanding of motivation is what makes a gradual but lasting difference.
