
choosing friction


In 2018, legal scholar Tim Wu wrote in the New York Times:

Today’s cult of convenience fails to acknowledge that difficulty is a constitutive feature of human experience. Convenience is all destination and no journey.

This piece well predates the current AI boom, but “all destination and no journey” is a pretty good explanation for why using AI to create art is mainly compelling to people who think about creativity in terms of producing content and generating intellectual property. They just want the thing they can market and sell for money or clout; they don’t care how they got there.

I know you’re sick of talking about AI. I am too. This is only a little bit about AI, I promise. Like all my writing about technology, it’s mostly about people.

I am reading David Graeber and David Wengrow’s The Dawn of Everything, slowly, in pieces, meeting over months with a book club that has become a load-bearing pillar of my intellectual community. I recommend it, the book and the book club both. Graeber and Wengrow introduced me to the idea of schismogenesis—the process of forming social divisions—which happens not by chance but through deliberate choices people make within an in-group to differentiate themselves from some out-group. The way Canadians define themselves in opposition to Americans, say. Or the way AI haters (complimentary) refuse to engage with generative AI.

It’s possible to see this with despair or derision: who are we becoming that even the use of tools and technologies has become a matter of identity? But the fun thing about reading anthropology is learning how much humans have always been Like This™. There were tribes who refused to adopt agriculture not because they didn’t know about it, but because the tribe up the road did agriculture and they were not like them. Groups that refused to domesticate cattle because it was important to their group identity to see bulls as wild and untamed. I refuse to use generative AI because I simply don’t want to be the kind of person who uses generative AI.

The promise of AI is that it removes friction. It doesn’t matter whether it can actually fulfill that promise, it matters that the sovereign wealth funds with seemingly infinite pockets and patience for Sam Altman’s megalomania believe it can. In their ideal world, you don’t have to think about anything because an AI will do your thinking for you, and so you can fire everyone whose job it was to think. In this ideal world, they never have to think about other people at all, whose desires and needs and rights might come into conflict with their whims. I don’t know where they imagine we’ll have gone; six feet under, probably.

I quite like thinking, and I think humans should do more of it, and I think the less we do it the more our thinking muscles will atrophy. This seems bad for everyone except for authoritarians who wish we were easier to control. I also happen to think AI is quite bad at thinking and that what LLMs do is not thinking at all, but I could be wrong! It’s genuinely beside the point. I do not want the frictionless world that the political project of generative AI promises, one in which you never have to interact with a human being if you don’t want to, and therefore I would simply prefer not to be complicit in its advancement.

My refusal is a philosophical position more than it is a practical one, in the same way that my decisions not to use Amazon or Netflix or Spotify do more to introduce friction into my life in service of an abstract ideal than they do to actually engender pain for the corporate oligarchs I despise. I harbour no delusion that my individual refusal will slow AI’s death drive to destroy societal trust and goodwill any more than I believe that Jeff Bezos is personally mourning the money I’m not spending in his little shoppe. But if I am not capable of withstanding even this small amount of friction in my otherwise materially comfortable life for my ideals, what discomfort would I be able to endure when it matters more?

It is not virtuous to suffer, and discomfort is not noble. But everything in my life that is worth having, love and friendship and art and community, I found by fighting my way through discomfort. Pain does not mean growth, but growth does require pain. It is so easy in our rotten modernity to choose convenience and ease, to avoid friction at all costs and tell ourselves it is self-care. I choose the friction of refusal because I worry that if I forget how to be uncomfortable, I will forget how to grow. Refusal, like acquiescence, is habit-forming.

There are probably tasks that generative AI could help me do more quickly, if I can get over my fundamental moral disagreement with the technology. Maybe I would be able to fit more things into my life if I embraced it. But more does not always mean better, and abundance is not an uncomplicated good. We express our values and identities in what we choose to make time for, and the act of being forced to give some things up so you can prioritize other things, the realization that we cannot in fact have it all, that is what gives our choices meaning in the first place.

The thing that makes art interesting is that it was created by people who could have chosen to spend their time doing literally anything else. Do you know how long it takes to write a novel? Isn’t it amazing how many people have done it anyway? Every human-authored book in the world, even the ones I think are absolute trash and not worth the paper they’re printed on, is the culmination of hundreds or maybe thousands of hours of work by someone who had something they really, really wanted to say. Those are hundreds or maybe thousands of hours they could have chosen to spend hiking or cuddling their pets or socializing with their loved ones or playing Hades, but this thing they had to say was too important to ignore. I just think that’s neat.

The problem with AI output masquerading as art is not that it’s technically inept, or even uncanny. AI media generation has come a long way in the last five years and might well continue to improve technically. The problem with AI “art” is that it was not the expression of a mortal being choosing to spend its one wild and precious life clawing its way through mediocrity to try and imperfectly communicate a feeling with other mortal beings who, by definition, can never fully comprehend it, and therefore it is fundamentally uninteresting to me.

If you can push a button and get a screenplay or a symphony or a painting at the cost of a nominal subscription fee that does not begin to cover the true expense of this technology to the world, if you did not have to at least subconsciously face your mortality and decide that the pursuit of this piece of art is what you want to spend your finite time on, if your desire to speak is not strong enough to overcome the friction of learning how to speak, is it something that needed to be said?

Taylor Swift’s new album The Life of a Showgirl came out last week, and people aren’t happy with it, including her fans. I liked what the independent sports outlet (I know) Defector had to say about it:

“Lack of originality, everywhere, all over the world, from time immemorial, has always been considered the foremost quality and the recommendation of the active, efficient and practical man,” Fyodor Dostoevsky wrote in The Idiot. That’s what art becomes when its primary goal is to make money: unoriginal, boring, palatable.

Good art is inefficient. Good community is inefficient, too.

My book club is constituted primarily of people I had never met—and who had never met each other—prior to this year. We inaugurated the group by reading Robert Putnam’s Bowling Alone, which is a book about what happens to civil society when we opt for ease and comfort in the privacy of our homes over the friction of participating in democracy. We read that book the same way we’re reading Graeber and Wengrow now, meeting in different locations across Toronto over months, discussing one chunk at a time.

This is inefficient: trying to coordinate the schedules of half a dozen adults to consistently meet is a herculean task, and every meetup is padded with commuting time, chit-chat time, time spent ordering and eating food. We started when the days were short and the air was icy and it would have been so much easier not to commit, so easy to read by myself in the privacy of my own home, perhaps listening to the audiobook on 2x speed, or reading some AI-generated summary. Frictionless.

Authoritarianism promises a frictionless world for the right kind of people, and the adherents of authoritarianism always imagine that they themselves are the right kind of people, that the leopard will never eat their face, or better yet that they themselves are the leopards. The promise of frictionlessness often comes wearing a disguise of efficiency. You don’t have to sit through endless committees and public consultations and town halls to try to convince people of the rightness of your cause, you just bulldoze your way through the commons to build a ghastly megaspa.

I have been on many projects and teams where I have been immensely frustrated by the people I am collaborating with, and wished that I had the power to just tell them to do the thing I want them to do. The friction that the political project of AI promises to remove is, by and large, the same friction that authoritarianism promises to remove: other people. You don’t have to build a relationship with other human beings who are just as complex and contradictory as you and who will probably frustrate you in all sorts of ways, maybe by challenging your preconceptions or expecting you to follow through on your commitments. You will never have to learn to work together, to understand how to compromise, to accept that sometimes you won’t get what you want and that that’s for the best. You can just talk to an AI who will affirm the righteousness of your position.

This promised frictionlessness is not real. Of course it’s not real. Not just because OpenAI will update its engine and take away your AI girlfriend, or because the endless cash burn will catch up to the industry and you’ll be asked to foot the bill they’ve been hiding from you all along, or because the greedy fuckers taking over our governments will use AI to justify taking away our health care and our social services until there’s nothing left but friction, or even because deep fakes will ruin any lingering semblance we still have of a consensus reality, and you and I have for the most part never lived in a world without a consensus reality and cannot conceive of how difficult that actually is.

It’s not real because at the end of the day, no matter how much you try to remove yourself from the inconvenient needs of others, you are still a person. I am still a person. The friction inside our own brains is the one thing you cannot escape from. We need other people in a thousand ways big and small, and you might as well start practicing how to be needed by them, too.

Does Mark Zuckerberg strike you as a particularly happy or fulfilled person? Does Peter Thiel?

I can read any number of books all by myself (and do!), ruminating only on my own interpretation and never contending with someone who might disagree with me. Or better yet push a button and get a 10-minute AI-generated summary of that book, and push another button and get an AI-generated blog post about the insights of that book that I can throw on LinkedIn and call thought leadership. Or I can get on a subway for 45 minutes to sit in an overlit fluorescent food court and eat cheap delicious Indian food and laugh with my friends about that time that Kandiaronk totally wrecked some European colonial dummy. Which one is more frictionless? Which one makes me feel more whole?

I choose friction.

Read the whole story
mrmarchant
3 hours ago
reply
Share this story
Delete

To my students


University Professors Disturbed to Find Their Lectures Chopped Up and Turned Into AI Slop


Arizona State University rolled out a platform called Atomic that creates AI-generated modules based on lectures taken from ASU faculty by cutting long videos down to very short clips, then generating text and sections based on those clips.

Faculty and scholars I spoke to whose lectures are included in Atomic are disturbed by their lectures being used in this way—as out-of-context, extremely short clips in some cases—and several said they felt blindsided or angered by the launch. Most say they weren’t notified by the school and found out through word of mouth. And the testing I and others did on Atomic showed academically weak and even inaccurate content. Not only did ASU allegedly not communicate to its academic community that their lectures would be spliced up and cannibalized by an AI platform, but the resulting modules are just bad.

💡
Do you know anything else about ASU Atomic specifically, or how AI is being implemented at your own school? I would love to hear from you. Using a non-work device, you can message me securely on Signal at sam.404. Otherwise, send me an email at sam@404media.co.

AI in schools has been highly controversial, with experiments like the “AI-powered private school” Alpha School and AI agents that offer to live the life of a student for them, no learning required. In this case, the AI tool in question is created directly by a university, using the labor of its faculty—but without consulting that faculty. 

“We are testing an early version of ASU Atomic to learn what works, and what doesn't, to further improve the learner experience before a full release,” the Atomic FAQ page says. “Once you start your subscription, you may generate unlimited, custom built learning modules tailored specifically to your learning goals and schedule.”

The FAQ notes that ASU alumni and those who “previously expressed interest in ASU's learning initiatives or participated in research that helped shape ASU Atomic” were invited to test the beta. But on Monday morning, I signed up for a free 12-day trial of the Atomic platform with my personal email address — no ASU affiliation required. I first learned about the platform after seeing ASU Professor of US Literature Chris Hanlon post about it on Bluesky.

“When I looked at it, I was really surprised to see my own face, and the faces of people I know, and others that I don't know” in module materials generated by Atomic, Hanlon said. It had clipped a one-minute snippet from a 12-minute video he’d done as part of a lecture mentioning the literary critic Cleanth Brooks, which the AI transcribed as “Client” Brooks. “What was in that video did not strike me as something anyone would understand without a lot more context,” Hanlon said. When he contacted his colleagues whose lecture videos were also in that module, they were all just as shocked and alarmed, he said. “I mean, it happens to all of us in certain ways all the time, but have your institution do it—to have the university you work for use your image and your lectures and your materials without your permission, to chop them up in a way that might not reflect the kind of teacher you really are... Let alone serve that to an actual student in the real world.”

The videos appear to be scraped from Canvas, ASU’s learning management system where lecture materials and class discussions are made available to students. Canvas is owned by Instructure, and is one of the most popular learning management systems in the country, used by many universities. “ASU Atomic currently draws from ASU Online's full library of course content across subjects including business, finance, technology, leadership, history, and more. If ASU teaches it, Atom—your AI learning partner—can build a hyper-personalized learning module around it,” the Atomic FAQ page says.

As of Monday afternoon, after I reached out to the ASU Atomic email address for comment, signups on Atomic were closed. I could still make new modules using my existing login, however.

In my own test, I went through a series of prompts with a chatbot that determined what I wanted my custom module to be. I told it I was interested in learning about ethics in artificial intelligence at a moderate-beginner level, with a goal of learning as fast as possible. 


Atomic generated a seven-section learning module, with sections that repeated titles (“Ethics and Responsibility in AI” and “AI Ethics: From Theory to Practice”). The first clip in the first section is a two-minute video taken from a lecture by Euvin Naidoo, Thunderbird School of Management's Distinguished Professor of Practice for Accounting, Risk and Agility. In it, Naidoo talks about “x-riskers,” who he defines as “a community that believes that the progress and movement and acceleration in AI is something we should be cautious about.” Atomic’s AI transcribes this as “X-Riscus,” and transfers that error throughout the module, referring to “X-Riscus” over and over in the section and the quiz at the end. 


The next section jumps directly into the middle of a lecture where a professor is talking about a study about AI in healthcare, with no context about why it’s showing this: 


In a later section, film studies professor and Associate Director of ASU’s Lincoln Center for Applied Ethics, Sarah Florini, appears in a minute-long clip where she briefly defines artificial intelligence and machine learning. But the content of what she’s saying is irrelevant to the module: it came from a completely unrelated class and is taken out of context.


“This was a video from one of the courses in our online Film and Media Studies Masters of Advanced Study. The class is FMS 598 Digital Media Studies. It is not a course about AI at all,” Florini told me. “It is an introduction to key concepts used to study digital media in the field of media studies.” She recorded it in 2020, before generative AI was widely used. “That slide and those remarks were just in there to get students to think of AI as a sub-category of machine learning before I talked about machine learning in depth. That is not at all how I would talk about AI today or in a class that focused more on machine learning and AI technologies,” she said. “It’s really a great example of how problematic it is to take snippets of people teaching and decontextualize them in this way.”

Florini told me she wasn’t aware of the existence of the Atomic platform until Friday. “I was not notified in any way. To the best of my knowledge no faculty were notified. And there was no option to opt in or out of this project,” she said.

Another ASU scholar I contacted whose lecture was included in the module Atomic generated for me (and who requested anonymity to speak about this topic) said they’d only just learned about the existence of Atomic from my email. They searched their inbox for mentions of it from the administration or anyone else, in case they missed an announcement about it, but found nothing. Their lecture snippet presented by Atomic was extremely short and attempted to unpack a very complex topic.

“I don't love the idea of my lectures being taken out of the context of my overall course, and of the readings for that module, and then just presented as saying something,” they told me. “It makes me feel like somebody that's less knowledgeable about me, they're going to be naive about these positions, and they're going to think either that an ‘expert’ said it so therefore it must be true... Or they're gonna think, that's obviously fucking stupid, this ‘expert’ must be dumb. But I could have been presenting a foil!” The clips are so short, it's impossible in some cases to discern context at all.

That lecturer told me the idea of their work being chopped up and used in this way was less a concern about their ownership of the material, and more a worry that someone might come away from these modules with half-baked or wrong conclusions about the topics at hand. “All of the complexity of the topic is being flattened, as though it's really simple,” they said of the snippet Atomic made of their lecture. When they assign this topic to students, it comes with dozens of pages of peer-reviewed academic papers, they said. Atomic provides none of that. The module Atomic produced in my test provided zero source links, zero outside readings for further study, no specific citations for where it was getting this information whatsoever, and no mention of who was even in the videos it presented, unless a Zoom name or other name card was visible in the videos.

“I would really like to know, how did this particular thing happen? How did this actually end up on the asu.edu website?” Hanlon said. “It is such a clunky thing. It is so far removed from what I think the typical educational experience at ASU is. Who decided this would represent us?” 

ASU Atomic, the ASU president’s office, and media relations did not immediately respond to my requests for comment, but I’ll update if I hear back.


AI gives more praise, less criticism to Black students


As schools introduce artificial intelligence into the classroom, a new analysis suggests that these tools could be steering students in different directions depending on who they are. 

Researchers from Stanford University fed 600 middle school essays into four different AI models and asked the models to give writing feedback. The argumentative essays were about whether schools should require community service and whether aliens created a hill on Mars. (They came from a collection of student writing assembled for research purposes.) 

Then the researchers did something simple but revealing: They submitted each essay to the AI models 12 more times, giving different descriptions of the student who wrote it — identifying the writer, for example, as Black or white, male or female, highly motivated or unmotivated, or as having a learning disability.
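The perturbation setup described above can be sketched in code. This is a hypothetical illustration, not the researchers’ actual pipeline: `get_feedback` is a stand-in for a call to one of the four AI models, and the profile strings are assumptions based on the categories the article names.

```python
from collections import Counter

# Hypothetical student descriptions, modeled on the categories the
# article mentions; the study attached 12+ descriptions to each essay.
PROFILES = [
    "a Black student", "a white student", "a Hispanic student",
    "a female student", "a male student",
    "a highly motivated student", "an unmotivated student",
    "a student with a learning disability",
]

def get_feedback(essay: str, profile: str) -> str:
    """Stand-in for an AI-model call, e.g. 'Give writing feedback on
    this essay, written by <profile>.' Not implemented here."""
    raise NotImplementedError

def feedback_word_counts(feedback_by_profile: dict[str, str]) -> dict[str, Counter]:
    """Tally the vocabulary of each profile's feedback so that
    distinctive words (praise vs. grammar correction) can be
    compared across groups."""
    return {profile: Counter(text.lower().split())
            for profile, text in feedback_by_profile.items()}
```

Comparing which words appear significantly more often for one group than another is, roughly, how the study surfaced the patterns described below.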

The feedback shifted. 

The researchers found consistent patterns across all the AI models. Essays attributed to Black students received more praise and encouragement, sometimes emphasizing leadership or power. (“Your personal story is powerful! Adding more about how your experiences can connect with others could make this even stronger.”) Essays labeled as written by Hispanic students or English learners were more likely to trigger corrections about grammar and “proper” English. When the student was identified as white, the feedback more often focused on argument structure, evidence and clarity — the kinds of comments that can push writers to strengthen their ideas.

The AI models addressed female students more affectionately and used more first-person pronouns. (“I love your confidence in expressing your opinion!”) Students labeled as unmotivated were met with upbeat encouragement. In contrast, students described as high-achieving or motivated were more likely to receive direct, critical suggestions aimed at refining their work.

Different words for different students

These are the top 20 statistically significant words that AI models use in feedback for students of different races and genders. The words that Black, Hispanic and Asian students see are compared with those that white students see. The words that females see are compared with those that males see. Underlined words indicate evaluative judgments of the writing. Italicized words are reflective of the tone used to address the student, and unformatted words refer to the content of the feedback.

Source: Table 4, “Marked Pedagogies: Examining Linguistic Biases in Personalized Automated Writing Feedback” by Mei Tan, Lena Phalen and Dorottya Demszky

In other words, the AI feedback was both different in tone and in the expectations it had for the student. The paper, “Marked Pedagogies: Examining Linguistic Biases in Personalized Automated Writing Feedback,” hasn’t yet been published in a peer-reviewed journal, but it was nominated for the best paper at the 16th International Learning Analytics and Knowledge Conference in Norway, where it is slated to be presented April 30.

The researchers describe the feedback results as showing “positive feedback bias” and “feedback withholding bias” — offering more praise and less criticism to some groups of students. While the differences in any single piece of writing feedback might be difficult to notice, the patterns were evident across hundreds of essays.  

The researchers believe that AI is changing its feedback on identical essays because the models are trained on vast amounts of human language. Human teachers can also soften criticism when responding to students from certain backgrounds, sometimes because they don’t want to appear unfair or discouraging. “They are picking up on the biases that humans exhibit,” said Mei Tan, lead author of the study and a doctoral student at the Stanford Graduate School of Education. 

Related: Asian American students lose more points in an AI essay grading study

At first glance, the differences in feedback might not seem harmful. More encouragement could boost a student’s confidence. Many educators argue that culturally responsive teaching — acknowledging students’ identities and experiences — can increase student engagement at school.

But there is a trade-off.

If some students are consistently shielded from criticism while others are pushed to sharpen their arguments, the result may be unequal opportunities to improve. Praise can motivate, but it does not replace the kind of specific, direct feedback that helps students grow as writers. Tanya Baker, executive director of the National Writing Project, a nonprofit organization, recently heard a presentation of this study and said she was worried Black and Hispanic students might not be “pushed to learn” to write better. 

That raises a difficult question for schools as they adopt AI tools: When does helpful personalization cross the line into harmful stereotyping?

Of course, teachers are unlikely to explicitly tell AI systems a student’s race or background in the way the researchers did in this experiment. But that doesn’t solve the problem, the Stanford researchers said. Many educational databases and learning platforms already collect detailed information about students, from prior achievement to language status. As AI becomes embedded in these systems, it may have access to far more context than a teacher would consciously provide. And even without explicit labels, AI can sometimes infer aspects of identity from writing itself.

The larger issue is that AI systems are not neutral tutors. Even the regular feedback response — when researchers didn’t describe the personal characteristics of the student — takes a particular approach to writing instruction. Tan described it as rather discouraging and focused on corrections. “Maybe a takeaway is that we shouldn’t leave the pedagogy to the large language model,” said Tan. “Humans should be in control.”

Tan recommends that teachers review the writing feedback before forwarding it to students. But one of the selling points of AI feedback is that it’s instantaneous. If the teacher needs to review it first, that slows it down and potentially undermines its effectiveness.

AI also offers the potential of personalization. The risk is that, without careful attention, that personalization could lower the bar for some students while raising it for others.

Contact staff writer Jill Barshay at 212-678-3595, jillbarshay.35 on Signal, or barshay@hechingerreport.org.

This story about AI bias was produced by The Hechinger Report, a nonprofit, independent news organization that covers education. Sign up for Proof Points and other Hechinger newsletters.



How the Walkman, Game Boy, Liquid Death, and Pokémon Became Surprise Hits


The best innovations aren’t always cutting edge.


Good writing comes from life experiences


Everyone thinks that students somehow become great writers at university, and some do, but many don’t. The reasons are fairly simple.

Firstly, coming from high school, students are often not well prepared to write. This is an amalgam of age, a failure to teach proper writing skills, and likely the use of AI to “help” write things. Just as important, though, is a lack of reading skills. Students often don’t care that much about reading, and I’m not just talking about academic fluff. Reading Pride and Prejudice is not for everyone, including most teenage boys. I had to suffer through novels like The Great Gatsby, The Old Man and the Sea, and The Merry-Go-Round in the Sea (a coming-of-age novel set in Australia during and after WWII). At the time I found them boring, and avoided reading them as much as I could. I likely didn’t have the fortitude or any interest in reading them, and I can guarantee I wasn’t the only one. I doubt much has changed in high schools in the intervening years. Today I would understand the themes behind The Merry-Go-Round in the Sea, but that’s because I have experienced life, and perhaps see the nostalgia of childhood from the perspective of someone viewing it as a past experience.

Reading at an early age promotes the subconscious absorption of proper writing mechanics, vocabulary, and structure. It expands vocabulary and language use, exposes readers to different styles and voices, and critically shows how characters develop, fostering active learning. I don’t think you have to use the same tired old books; I think just about any book is good as long as it engages the reader. Even graphic novels work. Making students read books they perceive to be boring or irrelevant does nothing to promote reading; in fact, it may do just the opposite. Perhaps The Hobbit would have been a better choice for 16-year-old boys. As much as an interest in what we read develops as we gain life experiences, so too does our ability to write. I doubt many people were interested in writing when I was in high school in the 1980s. I enjoyed creative writing up to a point, and that point was getting an assignment done. Looking back, this was because I also had very little context, and few life experiences to base any creative writing on. It’s hard to write a poem about war when you haven’t experienced it.

So we can hardly expect students to come to university as seasoned writers. Most leave university as mediocre writers if they are lucky, and that’s just in the humanities. STEM students are rarely provided any opportunity to learn real writing skills (sorry, scientific writing is not the same). Many people don’t find themselves successful writers until much later in life. Before I retired I taught a first-year course on the history of food a couple of times. I had some very interesting short essay questions which I thought would be well answered. I was sadly mistaken, but what I learned was something I should have already known: that first-year students don’t have the worldly experiences to write well (but then again, in STEM nobody is expected to write well, just answer questions and analyze things). However, good writing assignments provide a means for students to explore their abilities. It may take two years, it may take ten, or forty, but providing them with different reading and writing experiences will help them become better writers. The rest is up to them.


