
Scapegoating the Algorithm

1 Share

America’s epistemic challenges run deeper than social media.

Read the whole story
mrmarchant
2 hours ago
reply
Share this story
Delete

What the Fuck Python


Buck the Billionaires. Create Your Own Internet Services.

We now have sufficient applications and hosting options that virtually anyone can run many of their own Internet Services.

AI Will Never Be Your Kid’s Friend (Russell Shaw)

Russell Shaw is the head of school at Georgetown Day School in Washington, D.C.

ChatGPT thinks I’m a genius: My questions are insightful; my writing is strong and persuasive; the data that I feed it are instructive, revealing, and wise. It turns out, however, that ChatGPT thinks this about pretty much everyone. Its flattery is intended to keep people engaged and coming back for more. As an adult, I recognize this with wry amusement—the chatbot’s boundless enthusiasm for even my most mediocre thoughts feels so artificial as to be obvious. But what happens when children, whose social instincts are still developing, interact with AI in the form of perfectly agreeable digital “companions”?

I recently found myself reflecting on that question when I noticed two third graders sitting in a hallway at the school I lead, working on a group project. They both wanted to write the project’s title on their poster board. “You got to last time!” one argued. “But your handwriting is messy!” the other replied. Voices were raised. A few tears appeared.

I kept my distance, and after a few minutes the students appeared to be working purposefully. The earlier flare-up had faded into the background.

That mundane scene captured something important about human development that digital “friends” threaten to eliminate: the productive friction of real relationships.

Virtual companions, such as the chatbots developed by Character.AI and PolyBuzz, are meant to seem like intimates, and they offer something seductive: relationships without the messiness, unpredictability, and occasional hurt feelings that characterize human interaction. PolyBuzz encourages its users to “chat with AI friends.” Character.AI has said that its chatbots can “hear you, understand you, and remember you.” Some chatbots have age restrictions, depending on the jurisdiction where their platforms are used—in the United States, people 14 and older can use PolyBuzz, and those 13 and up can use Character.AI. But parents can permit younger children to use the tools, and determined kids have been known to find ways to get around technical impediments.

The chatbots’ appeal to kids, especially teens, is obvious. Unlike human friends, these AI companions will think all your jokes are funny. They’re programmed to be endlessly patient and to validate most of what you say. For a generation already struggling with anxiety and social isolation, these digital “relationships” can feel like a refuge.

But learning to be part of a community means making mistakes and getting feedback on those mistakes. I still remember telling a friend in seventh grade that I thought Will, the “alpha” in our group, was full of himself. My friend, seeking to curry favor with Will, told him what I had said. I suddenly found myself outside the group. It was painful, and an important lesson in not gossiping or speaking ill of others. It was also a lesson I could not have learned from AI.

As summer begins, some parents are choosing to allow their kids to stay home and “do nothing,” also described as “kid rotting.” For overscheduled young people, this can be a gift. But if unstructured time means isolating from peers, living online, and turning to virtual companions over real ones, kids will be deprived of some of summer’s most essential learning. Whether at camp or in classrooms, the difficulties children encounter in human relationships—the negotiations, compromises, and occasional conflicts—are essential for developing social and emotional intelligence. When kids replace these challenging exchanges with AI “friendships” that lack any friction, they miss crucial opportunities for growth.

Much of the reporting on chatbots has focused on a range of alarming, sometimes catastrophic, cases. Character.AI is being sued by a mother who alleges that the company’s chatbots led to her teenage son’s suicide. (A spokesperson for Character.AI, which is fighting the lawsuit, told Reuters that the company’s platform has safety measures in place to protect children, and to restrict “conversations about self-harm.”) The Wall Street Journal reported in April that in response to certain prompts, Meta’s AI chatbots would engage in sexually explicit conversations with users identified as minors. Meta dismissed the Journal’s use of its platform as “manipulative and unrepresentative of how most users engage with AI companions” but did make “multiple alterations to its products,” the Journal noted, after the paper shared its findings with the company.

These stories are distressing. Yet they may distract from a more fundamental problem: Even relatively safe AI friendships are troubling, because they cannot replace authentic human companionship.

Consider what those two third graders learned in their brief hallway squabble. They practiced reading emotional cues, experienced the discomfort of interpersonal tension, and ultimately found a way to collaborate. This kind of social problem-solving requires skills that can be developed only through repeated practice with other humans: empathy, compromise, tolerance for frustration, and the ability to repair relationships after disagreement. An AI companion might simply have concurred with both children, offering hollow affirmations without the opportunity for growth. Your handwriting is beautiful! it might have said. I’m happy for you to go first.

But when children become accustomed to relationships requiring no emotional labor, they might turn away from real human connections, finding them difficult and unrewarding. Why deal with a friend who sometimes argues with you when you have a digital companion who thinks everything you say is brilliant?

The friction-free dynamic is particularly concerning given what we know about adolescent brain development. Many teenagers are already prone to seeking immediate gratification and avoiding social discomfort. AI companions that provide instant validation without requiring any social investment may reinforce these tendencies precisely when young people need to be learning to do hard things.

The proliferation of AI companions reflects a broader trend toward frictionless experiences. Instacart enables people to avoid the hassles of the grocery store. Social media allows people to filter news and opinions, and to read only those views that echo their own. Resy and Toast save people the indignity of waiting for a table or having to negotiate with a host. Some would say this represents progress. But human relationships aren’t products to be optimized—they’re complex interactions that require practice and patience. And ultimately, they’re what make life worth living.

In my school, and in schools across the country, educators have spent more time in recent years responding to disputes and supporting appropriate interactions between students. I suspect this turbulent social environment stems from isolation born of COVID and more time spent on screens. Young people lack experience with the awkward pauses of conversation, the ambiguity of social cues, and the grit required to make up with a hurt or angry friend. This was one of the factors that led us to ban phones in our high school last year—we wanted our students to experience in-person relationships and to practice finding their way into conversations even when doing so is uncomfortable.

This doesn’t mean we should eliminate AI tools entirely from children’s lives. Like any technology, AI has practical uses—helping students understand a complex math problem; providing targeted feedback when learning a new language. But we need to recognize that AI companions are fundamentally different from educational or creative AI applications. As AI becomes more sophisticated and ubiquitous, the temptation to retreat into frictionless digital relationships will only grow. But for children to develop into adults capable of love, friendship, and cooperation, they need to practice these skills with other humans—mess, complications, and all. Our present and future may be digital. But our humanity, and the task of teaching children to navigate an ever more complex world, depends on keeping our friendships analog.




Why Most Feedback Shouldn’t Exist


I used to give feedback for everything. That engineer who preferred written communication? Feedback. The developer who worked strange hours but always delivered? Feedback.

Until one of them asked me: “Sorry, but how is this affecting my work or the team?”

I had nothing. I was just personally uncomfortable with their style because it was different from mine.

That’s when I realized something that changed how I manage: no impact, no feedback. If someone’s behavior isn’t actually affecting outcomes, I need to keep my mouth shut.

Here’s what happens in most organizations: we hire talented people with different backgrounds and working styles. Then we spend enormous energy trying to sand down their edges until everyone acts the same way. We call it “culture fit” or “professional development,” but often it’s just personality policing.

The solution is simpler than you think: stop giving feedback unless you can prove it matters. Before giving feedback, ask yourself:

  1. Is there a measurable impact on work outcomes? Are deadlines being missed? Is quality suffering?
  2. Is it affecting team collaboration? Are others unable to work effectively with this person? Are team goals being compromised?
  3. Is it creating a hostile environment? Is someone being disrespectful? Is psychological safety being damaged?

If you can’t point to a specific, concrete impact, then what you have isn’t feedback — it’s a preference. And preferences aren’t performance issues.
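The checklist above amounts to a simple boolean filter: feedback is warranted only when at least one concrete impact exists. A toy sketch in Python, with hypothetical names that are illustrative only and not from the original post:

```python
from dataclasses import dataclass

# Hypothetical model of the three-question checklist; field names are illustrative.
@dataclass
class Observation:
    affects_outcomes: bool       # 1. deadlines missed, quality suffering?
    affects_collaboration: bool  # 2. others unable to work with this person?
    creates_hostility: bool      # 3. disrespect, psychological safety damaged?

def should_give_feedback(obs: Observation) -> bool:
    """No impact, no feedback: speak up only when a concrete impact exists."""
    return (obs.affects_outcomes
            or obs.affects_collaboration
            or obs.creates_hostility)

# A style difference with no measurable impact is a preference, not feedback.
print(should_give_feedback(Observation(False, False, False)))  # False
```

The point of the sketch is the default: unless one of the three questions comes back true, the function (and the manager) stays quiet.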

Also… when we give feedback about every little thing that bothers us, we create feedback fatigue. It’s like the boy who cried wolf — when everything is “an opportunity for growth,” nothing is. People stop listening. Worse, they start second-guessing every natural instinct, every authentic behavior. We reduce psychological safety as team members start hiding their authentic selves. We lose the variety of perspectives by accidentally optimizing for homogeneity. And we damage trust — people feel micromanaged and criticized for just being themselves.

This doesn’t mean becoming a manager who never gives feedback. When behavior genuinely impacts outcomes (e.g., missed deadlines, poor quality, team dysfunction), that feedback is crucial. The difference is that you can point to specific consequences, not just discomfort.

Sometimes a behavior genuinely bothers you but has no measurable impact. Their communication style feels abrupt. Their work schedule makes you anxious. Their approach to problem-solving is completely different from yours.

This is where you need to manage yourself, not them. Our discomfort with difference often masquerades as concern for the team. But teams don’t need everyone to act the same way — they need everyone to contribute effectively toward shared goals.

The point of feedback isn’t to create an army of clones who all work, think, and communicate identically. It’s to help people be more effective in their roles and help teams achieve their objectives.

Next time you’re tempted to give feedback, pause. Look for the impact. If you can’t find it, save your breath (and their time).

Because sometimes the behavior that needs changing isn’t theirs. It’s yours.






The grammar of a god-ocean


[Photo: an ocean sunset seen through a round window, with a warm sepia tone across the sky and water.]

To truly explore alien languages, linguists must open themselves to the maximum conceivable degree of cosmic otherness

- by Eli K P William

Read at Aeon
