
On the Consumption of AI-Generated Content at Scale

1 Share
Read the whole story
mrmarchant
3 hours ago
reply
Share this story
Delete

All AI Videos Are Harmful


When OpenAI released the first version of Sora, I was excited. For years, I'd had this short story sitting on my hard drive, something I'd written long ago and always dreamed of bringing to life as a short film. The only problem was I didn't have the expertise to shoot a movie, and my Blender 3D skills are rusty for lack of use. But Sora promised something different. I could upload my sketches, input my script, and generate the film in my mind. The creative barrier had finally been lifted.

But reality was a bit different from the demos. No matter what I tried, my outputs never looked anything like what OpenAI showcased. The scenes were adjacent to what I wanted, not what I actually needed.

It wasn't just Sora. I tested Runway ML, experimented with Veo, and tried to keep my spending reasonable. Every model generated the same kind of thing: something that "looked good" in a superficial way. They excelled at creating cliché scenes, the kind of generic image that checks all the technical boxes. But creating something that could fit into a coherent narrative, something with intention and specificity? That was nearly impossible.

When Sora 2 launched, I started right where I left off. Maybe this time would be different. The videos are more realistic than ever, but the main problem remains unchanged. These tools aren't struggling because they can't generate scenes or dialogue; they certainly can. The issue is that they generate what I've come to call "AI Videos," a distinct category with its own aesthetic fingerprint.

The New Uncanny Valley

Think about how you instantly recognize certain types of content. Suppose I described a video to you right now: fast-paced, cut with multiple jump cuts, someone talking directly into the camera, a ring light's circular reflection visible in their eyes, their bedroom visible in the background. You would instantly say "TikTok video." The format is hard to miss these days.

AI-generated videos have developed their own unique look. There's a visual quality that marks them, a subtle wrongness that your brain picks up on even when you can't articulate exactly what's off. It's the new uncanny valley, and I feel an intense revulsion whenever I encounter it. I'm not alone in this reaction either. In my small circle of friends and colleagues, we've all developed the same instinctive aversion.

I'm starting to feel the same revulsion toward YouTube Shorts, even when they're created by real people. The reason is that YouTube has been secretly using AI to alter real videos, so authentic content is starting to look artificially generated. People's faces appear smoothed or sharpened, without the creator's knowledge or consent. The line between real and AI-generated content is blurring from both directions.

So if these videos trigger such a negative response in many viewers, where can AI-generated content actually thrive? The answer: with spammers, scammers, rage-baiters, and manipulators.

These bad actors are having a field day with AI video tools. A couple of months ago, I wrote about AI Video Overviews and speculated that Google might eventually start using AI-generated videos as enhanced search results, synthesizing information from multiple sources into custom video summaries. That remains speculative. But harmful content? That's not speculation; it's happening right now, at scale.

The Primary Victims

The main targets are older adults. My parents and their peer group are constantly sharing AI-generated videos in their group chats and on social media. Even as I write this, my mother just sent me a video showing Denzel Washington giving life advice, entirely fabricated, of course.

There is a wide variety of content. Health misinformation, sensational fake news, videos claiming Obama has embraced Islam, an elderly Tai Chi master giving dubious health tips, Trump supposedly reversing or doubling down on positions he never actually took. The specific claims change daily, but the pattern remains constant.

These videos spread like wildfire despite our collective, repeated efforts to educate people. I've spent countless hours explaining why these videos are fake. In my community group chat, I've pointed out the telltale signs multiple times, like the little cloud icon with eyes that appears on AI-generated content (the Sora watermark). I've shared practical tips: if a video seems too good or too shocking to be true, search Google to verify it. Nothing seems to stick.

Some days, according to these videos, we're at war. Other days, entire cities have burned down, or a tsunami has devastated Los Angeles. You can't debunk this content faster than it spreads.

When you find these videos on YouTube, scroll down to the comments. You'll see real people engaging seriously with fabricated content, offering heartfelt responses to synthetic personas, debating points that were never actually made by the people shown. I get phone calls from family overseas; people reach out to share what they think is happening right at my doorstep. When I ask where they heard it, they forward a WhatsApp video.

There's no easy solution to this crisis.

AI video technology has found its audience, just not the audience the marketing materials promised. These tools weren't really designed to help people like me overcome technical limitations and bring our stories to life. They were made to enable those who want to manipulate, deceive, and exploit people for engagement, profit, or ideology.

Spam from my community group chat

I've tried to find legitimate, beneficial use cases for AI video generation. I've thought about educational applications, accessibility features, and experimental art projects. Maybe they exist in theory, but in practice, I keep coming back to the same conclusion.

Right now, every AI video I encounter is harmful. Every single one, without exception.

Either it's directly harmful, spreading misinformation, impersonating real people, and manipulating vulnerable viewers, or it's indirectly harmful, training us to accept a synthetic reality where nothing can be trusted and everything must be questioned. Even the "harmless" AI videos contribute to a broader erosion of trust in visual media.

This technology is devastatingly effective for the purposes bad actors have found for it. The creative barrier I hoped to overcome remains in place. But now there's a new barrier too. The barrier of trust. And that one might be much harder to rebuild.


Meet the artist behind Firefox’s new community-created app icon


Last year, the Firefox team set out to test something fans requested: choosing a custom app icon. The experiment was simple. Offer a small set of options and see how people use them.

The team created early concepts, but experiment lead Gabrielle Lussier noticed something was missing. The designs were clean and functional, but none captured the playful, emotional spark people associate with Firefox. That led the team to revisit a collection of fan art shared during Firefox’s 20th anniversary, and one illustration stood out immediately: a warm, whimsical drawing of Firefox hugging the Earth by Dutch illustrator Ruud Hendriks (@heyheymomodraws). 

“I love that it is reminiscent of our original logo from 2004, but modernized and simplified. It’s also adorable! How could you not love it!” said Gabrielle.

To select the icon, open Firefox and head to Settings → General (iOS) / Customize (Android) → App Icon. 

First community-created app icon now available in Firefox

Ruud is known for the charming, joyful characters in his comic series heyheymomo, and he brings that same energy to this design. He originally created the artwork as a quick doodle for fun. Today, it is the first community-created app icon in Firefox.

In the Q&A below, Ruud shares how the sketch came to life, what inspired it, and what it means to see his work appear inside a browser he has used for years.

Can you tell us a bit about yourself and what inspired you to participate in last year’s Firefox 20th anniversary fan art challenge?

The funny thing is, I participated before the challenge was even a thing! One day, I didn’t know what to draw and somehow felt inspired by the cute little fox icon in my dock. I drew my own version as a super loose doodle, completely on a whim, in just a few minutes. I thought it came out pretty cute, so I posted it on my social media just for fun. People vibed with it, and the Mozilla social team picked it up. A few weeks later, I got a message asking if I wanted to submit it for the challenge since they really liked it. Of course I said yes!

What does Firefox mean to you personally, as a brand, a browser, or a community?

I’ve been on the internet for a long time. Firefox has been my favourite browser since forever, and I’m a bit of a creature of habit, so it’s always stuck with me. I like how lightweight and simple it is. Plus, as a visually minded person, I totally judge books by their covers — and I’ve always loved the Firefox icon. It’s so appealing that it made me want to draw it in the first place.

Momo is just one of the many icons you can select

Where did the idea for your “Firefox hugging the Earth” artwork come from?

It’s my little homage to the older Firefox logo, the one that made me a Firefox fan. The new one is very stylish, but the older one has always had a special place in my heart. My own work is usually very cutesy, with smiley faces and friendly characters, so I just drew my own version of it in that style.

This looks hand-drawn. What tools or techniques did you use to create it?

The initial five-minute doodle was just a quick sketch on my iPad using the app Procreate. Since Mozilla was interested in making it an actual icon, I later created a high-resolution, smoother version using vector art.

How did you feel when you learned your artwork would become one of the official Firefox app icons?

As a longtime Firefox fan, I was over the moon and couldn’t believe all of this came from just a silly doodle I did on a whim. I think that’s the beauty of the internet — how something small and spontaneous can take off like that. I’m really honoured, and I hope you all like my silly, little icon.

What a fan-made icon says about how we build

Ruud’s icon shows how product features can come from small, genuine ideas. His artwork delivered exactly what the team set out to explore: a bit of delight, a touch of nostalgia, and a visual style that feels true to Firefox. This project reflects how Mozilla builds. We listen, we iterate, and we look for ways to bring community creativity into the product. Ruud’s contribution shows how users and artists can shape Firefox in ways that feel both personal and unexpected.


Ruud Hendriks is an illustrator from the Netherlands, specializing in cute and whimsical characters. He has extensive experience working on children’s toys, apps, and games, and now focuses primarily on his own comic series, heyheymomo, which follows the adventures of a dog and frog who are best friends.

His work is lighthearted and designed to brighten your day, even if just for a moment. You can explore his comics on Instagram @heyheymomodraws and find prints at heyheymomo.com.

The post Meet the artist behind Firefox’s new community-created app icon appeared first on The Mozilla Blog.


Butter: The Softest Flex


How we’ve become obsessed with top-shelf butter, one flaky croissant at a time

The post Butter: The Softest Flex appeared first on TASTE.


Common Threads

Visualizing how musicals use motifs to tell stories.


What Robots Can Learn from Classical Indian Dance


A swan’s beak, a blossoming lotus flower, a delicate bracelet: These are some of the highly stylized forms the human hand can make in a classical Indian dance form known as Bharatanatyam. The dancers rely on a visual language of gestures, known as mudras—which involve precise finger, wrist, and palm movements—together with dramatic facial expressions and angular body movements to knit together powerful, ancient stories and express heightened emotion.


Now, a team of scientists is using this language of dancing hands to train robots, which could ultimately lead to better prosthetics design, robot-assisted therapy for stroke survivors, and robots that can perform complex manual tasks such as folding laundry or perhaps even playing a musical instrument.

The mudras, the scientists found, can be broken down into an alphabet of six basic building blocks that together make a good proxy for most hand movements. Instead of teaching robots to mimic natural human gestures, such as pinching or cupping, the scientists are developing methods to teach the machines these artistic forms first. The researchers published their results in Scientific Reports.


Read more: “Robots Can’t Dance”

“We noticed dancers tend to age super-gracefully,” says Ramana Vinjamuri, an electrical engineer at the University of Maryland, Baltimore County, and a co-author of the new research, in a statement. “That was a huge inspiration for us when we started looking for richer alphabets of movement. With dance, we are looking not just at healthy movement, but super healthy. And so the question became, could we find a ‘superhuman’ alphabet from the dance gestures?”

Vinjamuri began hunting for this so-called alphabet of hand movement more than a decade ago. He was inspired to take cues from Bharatanatyam dancers after a 2023 conference on the brain at the Indian Institute of Technology, where he attended a session that explored how ancient Indian traditions could be recruited to help solve modern problems. While brainstorming, he hit upon the idea of looking to mudras to improve robot hand mobility.


Vinjamuri and his colleagues started out by analyzing 30 natural hand grasps used to pick up objects, ranging from a tiny bead to a water bottle. They found six basic building blocks that when combined could be used to describe 99 percent of the gestures. Then they ran the same test with the mudras, and found a different set of six building blocks that could describe 94 percent of the mudras. But when they used these different sets of alphabets to analyze an unrelated set of gestures—15 letters from the American Sign Language alphabet—the mudras-derived alphabet won the contest, according to the scientists.
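This building-block analysis is, in spirit, a dimensionality-reduction problem: find a small set of basis postures whose weighted combinations reconstruct most recorded grasps, then count how many components it takes to explain 99 percent (or 94 percent) of the variance. Here is a minimal sketch of that idea using PCA on synthetic joint-angle data; the joint count, noise level, and planted six-block structure are illustrative assumptions, not the authors' actual pipeline or data:

```python
# Sketch: recovering a small "alphabet" of hand postures via PCA.
# All sizes and the planted structure are hypothetical, for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each grasp is a vector of 20 joint angles, and that all grasps
# are noisy mixtures of 6 underlying "building block" postures.
n_joints, n_blocks, n_grasps = 20, 6, 30
blocks = rng.normal(size=(n_blocks, n_joints))   # hypothetical basis postures
weights = rng.normal(size=(n_grasps, n_blocks))  # how much of each block per grasp
grasps = weights @ blocks + 0.01 * rng.normal(size=(n_grasps, n_joints))

# PCA via SVD on the mean-centered data matrix
centered = grasps - grasps.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)  # cumulative variance explained

# How many components are needed to explain 99% of the variance?
k = int(np.searchsorted(explained, 0.99)) + 1
print(k)  # with this synthetic data, the planted block count is recovered
```

With real motion-capture recordings, the same cumulative-variance curve is what would tell you whether six components suffice to hit the 99 percent threshold for natural grasps or the 94 percent threshold for mudras.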

“The mudras-derived alphabet is definitely better than the natural grasp alphabet because there is more dexterity and more flexibility,” said Vinjamuri.

The researchers are now training a standalone robotic hand and a humanoid robot to use this mudras alphabet. Ultimately, the goal is to make the robot hand more human-like. What looks like art, the robots are learning as the grammar of biology and movement.



Lead image: Monstar Studio / Shutterstock
