Photographer Awarded Compensation After News Website Uses His Images, But Owners are Untraceable

Close-up of hands typing on a laptop keyboard with a professional camera and a memory card on the wooden table in the foreground.

A photographer won a legal ruling for compensation after a news website used his images, but tracking down the site’s owners and editors has proved impossible.


How to be less awkward

photo cred: my dad

Here’s the most replicated finding to come out of my area of psychology in the past decade: most people believe they suffer from a chronic case of awkwardness.

Study after study finds that people expect their conversations to go poorly, when in fact those conversations usually go pretty well. People assign themselves the majority of the blame for any awkward silences that arise, and they believe that they like other people more than other people like them in return. I’ve replicated this effect myself: I once ran a study where participants talked in groups of three, and then they reported/guessed how much each person liked each other person in the conversation. Those participants believed, on average, that they were the least liked person in the trio.

In another study, participants were asked to rate their skills on 20 everyday activities, and they scored themselves as better than average on 19 of them. When it came to cooking, cleaning, shopping, eating, sleeping, reading, etc., participants were like, “Yeah, that’s kinda my thing.” The one exception? “Initiating and sustaining rewarding conversation at a cocktail party, dinner party, or similar social event”.

I find all this heartbreaking, because studies consistently show that the thing that makes humans the happiest is positive relationships with other humans. Awkwardness puts a persistent bit of distance between us and the good life, like being celiac in a world where every dish has a dash of gluten in it.

Even worse, nobody seems to have any solutions, nor any plans for inventing them. If you want to lose weight, buy a house, or take a trip to Tahiti, entire industries are waiting to serve you. If you have diagnosable social anxiety, your insurance might pay for you to take an antidepressant and talk to a therapist. But if you simply want to gain a bit of social grace, you’re pretty much outta luck. It’s as if we all think awkwardness is a kind of moral failing, a choice, or a congenital affliction that suggests you were naughty in a past life—at any rate, unworthy of treatment and undeserving of assistance.

We can do better. And we can start by realizing that, even though we use one word to describe it, awkwardness is not one thing. It’s got layers, like a big, ungainly onion. Three layers, to be exact. So to shrink the onion, you have to peel it from the skin to the pith, adapting as you go, because removing each layer requires its own technique.

Before we make our initial incision, I should mention that I’m not the kind of psychologist who treats people. I’m the kind of psychologist who asks people stupid questions and then makes sweeping generalizations about them. You should take everything I say with a heaping teaspoon of salt, which will also come in handy after we’ve sliced the onion and it’s time to sauté it. That disclaimer disclaimed, let’s begin on the outside and work our way in, starting with—

1. THE OUTSIDE LAYER: SOCIAL CLUMSINESS

The outermost layer of the awkward onion is the most noticeable one: awkward people do the wrong thing at the wrong time. You try to make people laugh; you make them cringe instead. You try to compliment them; you creep them out. You open up; you scare them off. Let’s call this social clumsiness.

Being socially clumsy is like being in a role-playing game where your charisma stat is chronically too low and you can’t access the correct dialogue options. And if you understand that reference, I understand why you’re reading this post.

Here’s the bad news: I don’t think there’s a cure for clumsiness. Every human trait is normally distributed, so it’s inevitable that some chunk of humanity is going to have a hard time reading emotional cues and predicting the social outcomes of their actions. I’ve seen high-functioning, socially ham-handed people try to memorize interpersonal rules the same way chess grandmasters memorize openings, but it always comes off stilted and strange. You’ll be like, “Hey, how you doing” and they’re like “ROOK TO E4, KNIGHT TO C11, QUEEN TO G6” and you’re like “uhhh cool man me too”.

Here’s the good news, though: even if you can’t cure social clumsiness, there is a way to manage its symptoms. To show you how, let me tell you a story of a stupid thing I did, and what I should have done instead.

Once, in high school, I was in my bedroom when I saw a girl in my class drive up to the intersection outside my house. It was dark outside and I had the light on, and so when she looked up, she caught me in the mortifying act of, I guess, existing inside my home? This felt excruciatingly embarrassing, for some reason, and so I immediately dropped to the floor, as if I was in a platoon of GIs and someone had just shouted “SNIPER!” But breaking line of sight doesn’t cause someone to unsee you, and so from this girl’s point of view, she had just locked eyes with some dude from school through a window and his response had been to duck and cover. She told her friends about this, and they all made fun of me ruthlessly.

I learned an important lesson that day: when it comes to being awkward, the coverup is always worse than the crime. If you just did something embarrassing mere moments ago, it’s unlikely that you have suddenly become socially omnipotent and that all of your subsequent moves are guaranteed to be prudent and effective. It’s more likely that you’re panicking, and so your next action is going to be even stupider than your last.

And that, I think, is the key to mitigating your social clumsiness: give up on the coverups. When you miss a cue or make a faux pas, you just have to own it. Apologize if necessary, make amends, explain yourself, but do not attempt to undo your blunder with another round of blundering. If you knock over a stack of porcelain plates, don’t try to quickly sweep up the shards before anyone notices; you will merely knock over a shelf of water pitchers.

This turns out to be a surprisingly high-status move, because when you readily admit your mistakes, you imply that you don’t expect to be seriously harmed by them, and this makes you seem intimidating and cool. You know how when a toddler topples over, they’ll immediately look at you to gauge how upset they should be? Adults do that too. Whenever someone does something unexpected, we check their reaction—if they look embarrassed, then whatever they did must be embarrassing. When that person panics, they look like a putz. When they shrug and go, “Classic me!”, they come off as a lovable doof, or even, somehow, a chill, confident person.

In fact, the most successful socially clumsy people I know can manage their mistakes before they even happen. They simply own up to their difficulties and ask people to meet them halfway, saying things like:

Thanks for inviting me over to your house. It’s hard for me to tell when people want to stop hanging out with me, so please just tell me when you’d like me to leave. I won’t be mad. If it’s weird to you, I’m sorry about that. I promise it’s not weird to me.

It takes me a while to trust people who attempt this kind of social maneuver—they can’t be serious, can they? But once I’m convinced they’re earnest, knowing someone’s social deficits feels no different than knowing their dietary restrictions (“Arthur can’t eat artichokes; Maya doesn’t understand sarcasm”), and we get along swimmingly. Such a person is always going to seem a bit like a Martian, but that’s fine, because they are a bit of a Martian, and there’s nothing wrong with being from outer space as long as you’re upfront about it.

2. THE MIDDLE LAYER: EXCESSIVE SELF-AWARENESS

When we describe someone else as awkward, we’re referring to the things they do. But when we describe ourselves as awkward, we’re also referring to this whole awkward world inside our heads, this constant sensation that you’re not slotted in, that you’re being weird, somehow. It’s that nagging thought of “does my sweater look bad” that blossoms into “oh god, everyone is staring at my horrible sweater” and finally arrives at “I need to throw this sweater into a dumpster immediately, preferably with me wearing it”.

This is the second layer of the awkward onion, one that we can call excessive self-awareness. Whether you’re socially clumsy or not, you can certainly worry that you are, and you can try to prevent any gaffes from happening by paying extremely close attention to yourself at all times. This strategy always backfires because it causes a syndrome that athletes call “choking” or “the yips”—that stilted, clunky movement you get when you pay too much attention to something that’s supposed to be done without thinking. As the old poem goes:

A centipede was happy – quite!

Until a toad in fun

Said, “Pray, which leg moves after which?”

This raised her doubts to such a pitch,

She fell exhausted in the ditch

Not knowing how to run.

The solution to excessive self-awareness is to turn your attention outward instead of inward. You cannot out-shout your inner critic; you have to drown it out with another voice entirely. Luckily, there are other voices around you all the time, emanating from other humans. The more you pay attention to what they’re doing and saying, the less attention you have left to lavish on yourself.

You can call this mindfulness if that makes it more useful to you, but I don’t mean it as a sort of droopy-eyed, slack-jawed, I-am-one-with-the-universe state of enlightenment. What I mean is: look around you! Human beings are the most entertaining organisms on the planet. See their strange activities and their odd proclivities, their opinions and their words and their what-have-you. This one is riding a unicycle! That one is picking their nose and hoping no one notices! You’re telling me that you’d rather think about yourself all the time?

Getting out of your own head and into someone else’s can be surprisingly rewarding for all involved. It’s hard to maintain both an internal and an external dialogue simultaneously, and so when your self-focus is going full-blast, your conversations degenerate into a series of false starts (“So...how many cousins do you have?” “Seven.” “Ah, a prime number.”) Meanwhile, the other person stays buttoned up because, well, why would you disrobe for someone who isn’t even looking? Paying attention to a human, on the other hand, is like watering a plant: it makes them bloom. People love it when you listen and respond to them, just like babies love it when they turn a crank and Elmo pops out of a box—oh! The joy of having an effect on the world!

Me when someone asks me an open-ended question about myself (source)

Of course, you might not like everyone that you attend to. When people start blooming in your presence, you’ll discover that some of them make you sneeze, and some of them smell like that kind of plant that gives off the stench of rotten eggs. But this is still progress, because in the Great Hierarchy of Subjective Experiences, annoyance is better than awkwardness—you can walk away from an annoyance, but awkwardness comes with you wherever you go.

It can be helpful to develop a distaste for your own excessive self-focus, and one way to do that is to relabel it as “narcissism”. We usually picture narcissists as people with an inflated sense of self worth, and of course many narcissists are like that. But I contend that there is a negative form of narcissism, one where you pay yourself an extravagant amount of attention that just happens to come in the form of scorn. Ultimately, self-love and self-hate are both forms of self-obsession.

So if you find yourself fixated on your own flaws, perhaps it’s worth asking: what makes you so worthy of your own attention, even if it’s mainly disapproving? Why should you be the protagonist of every social encounter? If you’re really as bad as you say, why not stop thinking about yourself so much and give someone else a turn?

3. THE INNER LAYER: PEOPLE-PHOBIA

Social clumsiness is the thing that we fear doing, and excessive self-focus is the strategy we use to prevent that fear from becoming real, but neither of them is the fear itself, the fear of being left out, called out, ridiculed, or rejected. “Social anxiety” is already taken, so let’s refer to this center of the awkward onion as people-phobia.

People-phobia is both different from and worse than all other phobias, because the thing that scares the bajeezus out of you is also the thing you love the most. Arachnophobes don’t have to work for, ride buses full of, or go on first dates with spiders. But people-phobes must find a way to survive in a world that’s chockablock with homo sapiens, and so they yo-yo between the torment of trying to approach other people and the agony of trying to avoid them.

At the heart of people-phobia are two big truths and one big lie. The two big truths: our social connections do matter a lot, and social ruptures do cause a lot of pain. Individual humans cannot survive long on their own, and so evolution endowed us with a healthy fear of getting voted off the island. That’s why it hurts so bad to get bullied, dumped, pantsed, and demoted, even though none of those things cause actual tissue damage.1

But here’s the big lie: people-phobes implicitly believe that hurt can never be healed, so it must be avoided at all costs. This fear is misguided because the mind can, in fact, mend itself. Just like we have a physical immune system that repairs injuries to the body, we also have a psychological immune system that repairs injuries to the ego. Black eyes, stubbed toes, and twisted ankles tend to heal themselves on their own, and so do slip-ups, mishaps, and faux pas.

That means you can cure people-phobia the same way you cure any fear—by facing it, feeling it, and forgetting it. That’s the logic behind exposure and response prevention: you sit in the presence of the scary thing without deploying your usual coping mechanisms (scrolling on your phone, fleeing, etc.) and you do this until you get tired of being scared. If you’re an arachnophobe, for instance, you peer at a spider from a safe distance, you wait until your heart rate returns to normal, you take one step closer, and you repeat until you’re so close to the spider that it agrees to officiate your wedding.2

Unfortunately, people-phobia is harder to treat than arachnophobia because people, unlike spiders, cannot be placed in a terrarium and kept safely on the other side of the room. There is no zero-risk social interaction—anyone, at any time, can decide that they don’t like you. That’s why your people-phobia does not go into spontaneous remission from continued contact with humanity: if you don’t confront your fear in a way that ultimately renders it dull, you’re simply stoking the phobia rather than extinguishing it.3

Exposure only works for people-phobia, then, if you’re able to do two things: notch some pleasant interactions and reflect on them afterward. The notching might sound harder than the reflecting, but the evidence suggests it’s actually the other way around. Most people have mostly good interactions most of the time. They just don’t notice.

In any study I’ve ever read and in every study I’ve ever conducted myself, when you ask people to report on their conversation right after the fact, they go, “Oh, it was pretty good!”. In one study, I put strangers in an empty room and told them to talk about whatever they want for as long as they want, which sounds like the social equivalent of being told to go walk on hot coals or stick needles in your eyes. And yet, surprisingly, most of those participants reported having a perfectly enjoyable, not-very-awkward time. When I asked another group of participants to think back to their most recent conversations (which were overwhelmingly with friends and family, rather than strangers), I found the same pattern of results4:

Error bars = 95% confidence intervals. Big dots = means. Small, transparent dots = individual data points.

But when you ask people to predict their next conversation, they suddenly change their tune. I had another group of participants guess how this whole “meet a stranger in the lab, have an open-ended conversation” thing would go, and they were not optimistic. Participants estimated that only 50% of conversations would make it past five minutes (actually, 87% did), and that only 15% of conversations would go all the way to the time limit of 45 minutes (actually, 31% did). So when people meet someone new, they go, “that was pretty good!”, but when they imagine meeting someone new, they go, “that will be pretty bad!”

A first-line remedy for people-phobia, then, is to rub your nose in the pleasantness of your everyday interactions. If you’re afraid that your goof-ups will doom you to a lifetime of solitude and then that just...doesn’t happen, perhaps it’s worth reflecting on that fact until your expectations update to match your experiences. Do that enough, and maybe your worries will start to appear not only false, but also tedious. However, if reflecting on the contents of your conversations makes you feel like that guy in Indiana Jones who gets his face melted off when he looks directly at the Ark of the Covenant, then I’m afraid you’re going to need bigger guns than can fit into a blog post.

A sign you should switch strategies (source)

GOODNIGHT AND GOOD DUCK

Obviously, I don’t think you can instantly de-awkward yourself by reading the right words in the right order. We’re trying to override automatic responses and perform laser removal on burned-in fears—this stuff takes time.

In the meantime, though, there’s something all of us can do right away: we can disarm. The greatest delusion of the awkward person is that they can never harm someone else; they can only be harmed. But every social hangup we have was hung there by someone else, probably by someone who didn’t realize they were hanging it, maybe by someone who didn’t even realize they were capable of such a thing. When Todd Posner told me in college that I have a big nose, did he realize he was giving me a lifelong complex? No, he probably went right back to thinking about his own embarrassingly girthy neck, which, combined with his penchant for wearing suits, caused people to refer to him behind his back as “Business Frog” (a fact I kept nobly to myself).

So even if you can’t rid yourself of your own awkward onion, you can at least refrain from fertilizing anyone else’s. This requires some virtuous sacrifice, because the most tempting way to cope with awkwardness is to pass it on—if you’re pointing and laughing at someone else, it’s hard for anyone to point and laugh at you. But every time you accept the opportunity to be cruel, you increase the ambient level of cruelty in the world, which makes all of us more likely to end up on the wrong end of a pointed finger.

All of that is to say: if you happen to stop at an intersection and you look up and see someone you know just standing there inside his house and he immediately ducks out of sight, you can think to yourself, “There are many reasonable explanations for such behavior—perhaps he just saw a dime on the floor and bent down to get it!” and you can forget about the whole ordeal and, most importantly, keep your damn eyes on the road.

Experimental History reminds you to don appropriate eyewear before looking directly at the Ark of the Covenant

PS: This post pairs well with Good Conversations Have Lots of Doorknobs.

1

Psychologists who study social exclusion love to use this horrible experimental procedure called “Cyberball”, where you play a game of virtual catch with two other participants. Everything goes normally at first, but then the other participants inexplicably start throwing the ball only to each other, excluding you entirely. (In reality, there are no other participants; this is all pre-programmed.) When you do this to someone who’s in an fMRI scanner, you can see that getting ignored in Cyberball lights up the same part of the brain that processes physical pain. But you don’t need a big magnet to find this effect: just watching the little avatars ignore you while tossing the ball back and forth between them will immediately make you feel awful.

2

My PhD cohort included some clinical psychologists who interned at an OCD treatment center as part of their training. Some patients there had extreme fears about wanting to harm other people—they didn’t actually want to hurt anybody, but they were afraid that they did. So part of their treatment was being given the opportunity to cause harm, and realizing that they weren’t really going to do it. At the final stage of this treatment, a patient would be given a knife and told to hold it to their therapist’s throat, and the therapist would say, “See? Nothing bad is happening.” Apparently this procedure is super effective and no one at the clinic has ever been harmed doing it, but please do not try this at home.

3

As this Reddit thread so poetically puts it, “you have to do exposure therapy right otherwise you’re not doing exposure therapy, you’re doing trauma.”

4

You might notice that while awkwardness ratings are higher when people talk to strangers vs. loved ones, enjoyment ratings are higher too. What gives? One possibility is that people are “on” when they meet someone new, and that’s a surprisingly enjoyable state to be in. That’s consistent with this study from 2010, which found that:

  1. Participants actually had a better time talking to a stranger than they did talking to their romantic partner.

  2. When they were told to “try to make a good impression” while talking to their romantic partner (“Don’t role-play, or pretend you are somewhere where you are not, but simply try to put your best face forward”), they had a better time than when they were given no such instructions.

  3. Participants failed to predict both of these effects.

Like most psychology studies published around this time, the sample sizes and effects are not huge, so I wouldn’t be surprised if you re-ran this study and found no effect. But even if people enjoyed talking to strangers as much as they enjoy talking to their boyfriends and girlfriends, that would still be pretty surprising.


A Mostly Helpful Guide to Running Your First Ultra Trail Race

Some tips, advice, and encouragement for those thinking about doing a long and rewarding trail race.


You should start a blog:


Writing something down forces you to fully understand it. When the idea is on paper, you can see all the missing assumptions and leaps in logic. It’s common to start writing, do some research and find out that your original point was wrong.

This is a good thing.

You are now less wrong than you were before, and have something you can share so that we can all be less wrong.

Even if you don’t learn anything new, we don’t find our interests and hobbies by magic: We read about them from someone else. Simply writing about what you did yesterday — even if you are not an expert on the topic — can be very valuable to the right person.

Why a blog:

Blogs have a long shelf-life: Social media posts vanish in hours, but a good blog post can stay readable and relevant for decades. Your work can have a lasting impact on lots of people rather than being briefly noticed by a few.

You can go back and edit old posts to improve the writing or add fresh information. If you were wrong about something, you can correct it: You aren’t locked into your first impressions of the topic.

Posts can build on each other, cite sources or provide hundreds of pages’ worth of detail. None of these are requirements, but having the option allows actual learning and nuanced discussions.

You can write a detailed response to someone else’s detailed opinion instead of just throwing insults: meaningful human interaction that just isn’t possible on “social” media.

You can own your identity: If you register yourname.com (it costs around $10/year) you aren’t tied to any particular service. Switching hosting providers is completely seamless: None of your readers will even notice anything’s different.

Compare that to being “@yourname on Twitter”: If the platform tries to extort you, gets sold to a horrible person, or vanishes without a trace… You are out of luck. Sure, you could move to “@yourname on Mastodon”, but at the price of losing all your readers and breaking every link to your work.

On your own website, you can customize everything, and there’s no risk of a company forcing a terrible redesign on you. Instead of being yet-another-post-feed, you can have your own unique place on the internet.

Posts can be indexed by Google and friends: A niche post can become the top search result and keep that spot for years. Social media does have some indexing, but it is very hit-and-miss.

You can use RSS or mailing lists to keep readers updated without having to worry if an algorithmic feed is showing your work… although you can still link to your site on social media.
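(If you’re wondering what an RSS feed actually is under the hood: it’s just an XML file that lists your posts, and a generator like Hugo will normally write it for you. Below is a minimal hand-rolled sketch in Python, with a made-up site URL and post, just to show how little magic is involved.)

```python
# Minimal RSS 2.0 feed, written by hand to show the shape of the format.
# The site URL and the single post below are made-up placeholders.
from xml.sax.saxutils import escape

posts = [
    {"title": "My first post",
     "url": "https://example.com/first-post/",
     "date": "Mon, 05 Jan 2026 10:00:00 GMT"},  # RSS expects RFC 822 dates
]

items = "\n".join(
    "  <item>\n"
    f"    <title>{escape(p['title'])}</title>\n"
    f"    <link>{escape(p['url'])}</link>\n"
    f"    <pubDate>{p['date']}</pubDate>\n"
    "  </item>"
    for p in posts
)

feed = f"""<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
<channel>
  <title>My blog</title>
  <link>https://example.com/</link>
  <description>Things I wrote down so we can all be less wrong.</description>
{items}
</channel>
</rss>"""

with open("rss.xml", "w", encoding="utf-8") as f:
    f.write(feed)
```

Readers drop the resulting rss.xml URL into their feed reader and get every new post, with no algorithm in between.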

The alternative…

On social media, if a post doesn’t immediately go viral, it’s effectively dead and will be buried by the algorithmic feed.

The only things people ever see are short, anger-inducing posts: They have to be short or readers will get distracted by all the other content next to them. Posts also need “engagement”, and anger is a great motivator.

Social media rewards posting short (and often misleading) content with a broad appeal. Meaningful intellectual discussions are punished: some places have character limits that prevent you from even trying.

Niche topics don’t fare any better: Unless a post finds a large audience in the first few hours, it won’t be shown to anyone. When someone who is interested does come along, they often won’t be able to find it because the search feature doesn’t work.

It’s not an accident that these places are full of bad political takes and endless arguments. They make money from advertising: They want to keep the greatest number of people on the site for as long as possible… even if they become horrible places to be.

Hosting:

This site is hosted on a VPS and I use a static site generator (Hugo) to generate the index pages. I’d only recommend this if you are planning to do other things on the server: Otherwise it is overkill and more expensive ($3.50/month) than other options.

If you are willing to write some HTML or configure a site generator, Github Pages or Cloudflare Pages will host a site for free. Because it’s just some HTML, switching providers in the future will be painless.

There are also plenty of blog specific hosting services that will let you get by without typing a single angle bracket. However, I haven’t used any of them so I can’t give specific recommendations.

If you bring your own domain, don’t stress too much about the choice: You can always switch without affecting your readers.

Closing notes:

Your first blog post will probably be bad. That’s ok: Your second one will be better, and your third will be even better. All that matters is that you wrote something and put it out into the world. Writing is a skill: you will get better with time.

While social media can be super toxic, most people on the internet are nice: The vast majority of what I receive is praise, “Thank You”-s and polite corrections. Out of hundreds of emails, I’ve only gotten a single piece of hate-mail.

… but please: Don’t use an LLM to generate posts for you. I want to interact with a human and not a robot. Even if it’s got a few spelling mistakes it’s still yours and still has value. An LLM-generated post is worthless: If I wanted to read AI slop, I would have generated it myself.

If you feel inclined to ChatGPT something, don’t. Post what you would have used as the prompt: That’s the important part.


It’s hard to justify Tahoe icons


I was reading Macintosh Human Interface Guidelines from 1992 and found this nice illustration:

accompanied by explanation:

Fast forward to 2025. Apple releases macOS Tahoe. Main attraction? Adding unpleasant, distracting, illegible, messy, cluttered, confusing, frustrating icons (their words, not mine!) to every menu item:

Sequoia → Tahoe

It’s bad. But why exactly is it bad? Let’s delve into it!

Disclaimer: screenshots are a mix from macOS 26.1 and 26.2, taken from stock Apple apps only that come pre-installed with the system. No system settings were modified.

Icons should differentiate

The main function of an icon is to help you find what you are looking for faster.

Perhaps counter-intuitively, adding an icon to everything is exactly the wrong thing to do. To stand out, things need to be different. But if everything has an icon, nothing stands out.

The same applies to color: black-and-white icons look clean, but they don’t help you find things faster!

Microsoft used to know this:

Look how much faster you can find Save or Share in the right variant:

It also looks cleaner. Less cluttered.

A colored version would be even better (clearer separation of text from icon, faster to find):

I know you won’t like how it looks. I don’t like it either. These icons are hard to work with. You’ll have to actually design for color to look nice. But the principle stands: it is way easier to use.

Consistency between apps

If you want icons to work, they need to be consistent. I need to be able to learn what to look for.

For example, I see a “Cut” command and next to it. Okay, I think. Next time I’m looking for “Cut,” I might save some time and start looking for instead.

How is Tahoe doing on that front? I present to you: Fifty Shades of “New”:

I even collected them all together, so the absurdity of the situation is more obvious.

Granted, some of them are different operations, so they have different icons. I guess creating a smart folder is different from creating a journal entry. But this?

Or this:

Or this:

There is no excuse.

Same deal with open:

Save:

Yes. One of them is a checkmark. And they can’t even agree on the direction of an arrow!

Close:

Find (which is sometimes called Search, and sometimes Filter):

Delete (from Cut-Copy-Paste-Delete fame):

Minimize window.

These are not some obscure, unique operations. These are OS basics, these are foundational. Every app has them, and they are always in the same place. They shouldn’t look different!

Consistency inside the same app

Icons are also used in toolbars. Conceptually, operations in a toolbar are identical to operations called through the menu, and thus should use the same icons. That’s the simplest case to implement: inside the same app, often on the same screen. How hard can it be to stay consistent?

Preview:

Photos: same and mismatch, but reversed ¯\_(ツ)_/¯

Maps and others often use different symbols for zoom:

Icon reuse

Another cardinal sin is to use the same icon for different actions. Imagine: I have learned that means “New”:

Then I open an app and see. “Cool”, I think, “I already know what it means”:

Gotcha!

You’d think: okay, means quick look:

Sometimes, sure. Some other times, means “Show completed”:

Sometimes is “Import”:

Sometimes is “Updates”:

Same as with consistency, icon reuse doesn’t only happen between apps. Sometimes you see in a toolbar:

Then go to the menu in the same app and see means something else:

Sometimes identical icons meet in the same menu.

Sometimes next to each other.

Sometimes they put an entire barrage of identical icons in a row:

This doesn’t help anyone. No user will find a menu item faster or will understand the function better if all icons are the same.

The worst case of icon reuse so far has been the Photos app:

It feels like the person tasked with choosing a unique icon for every menu item just ran out of ideas.

Understandable.

Too much nuance

When looking at icons, we usually allow for slight differences in execution. That lets us, for example, understand that these technically different road signs mean the same thing:

The same applies to icons: if you draw an arrow coming out of a box in one place, and elsewhere the same arrow and box at a slightly different angle, with a different stroke width, or with one of them filled, we will understand them as meaning the same thing.

Like, is supposed to mean something else from ? Come on!

Or two letter “A”s that differ only slightly in font size:

A pencil is “Rename” but a slightly thicker pencil is “Highlight”?

Arrows that use different diagonals?

Three dots occupying ⅔ of space vs three dots occupying everything. Seriously?

Slightly darker dots?

The sheet of paper that changes meaning depending on if its corner is folded or if there are lines inside?

But the final boss is the arrows. They are all different:

Supposedly, a user must become an expert at noticing how squished the circle is, whether it runs from the top to the right or from the bottom to the right, and how far the arrow’s end goes.

Do I care? Honestly, no. I could’ve given it a shot, maybe, if Apple applied these consistently. But Apple considers and to mean the same thing in one place, and expects me to notice minute details like this in another?

Sorry, I can’t trust you. Not after everything I’ve seen.

Level of detail

Icons are supposed to be easily recognizable from a distance. Every icon designer knows: small details are a no-go. You can have them sometimes, maybe, for aesthetic purposes, but you can’t rely on them.

And icons in Tahoe menus are tiny. Most of them fit in a 12×12 pixel square (actual resolution is 24×24 because of Retina), and because many of them are not square, one dimension is usually even less than 12.

It’s not a lot of space to work with! Even Windows 95 had 16×16 icons. If we take the typical DPI of that era at 72 dots per inch, we get a physical icon size of 0.22 inches (5.6 mm). On a modern MacBook Pro with 254 DPI, Tahoe’s 24×24 icons are 0.09 inches (2.4 mm). Sure, 24 is bigger than 16, but in reality, these icons’ area is more than five times smaller!

Simulated physical size comparison between 16×16 at 72 DPI (left) and 24×24 at 254 DPI (right)
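The numbers above are easy to check: physical size is just pixels divided by DPI, with 25.4 mm to the inch. A quick back-of-the-envelope sketch in Python reproduces the figures and the area ratio:

```python
# Physical icon size = pixels / DPI, converted to millimetres (25.4 mm per inch).
# The numbers are the ones quoted in the text (Windows 95 vs. a 254 DPI MacBook Pro).

def size_mm(pixels: int, dpi: float) -> float:
    """Edge length of a square icon in millimetres."""
    return pixels / dpi * 25.4

win95 = size_mm(16, 72)    # ≈ 5.6 mm per side
tahoe = size_mm(24, 254)   # ≈ 2.4 mm per side

print(f"Windows 95: {win95:.1f} mm, Tahoe menu icon: {tahoe:.1f} mm")
print(f"Area ratio: {(win95 / tahoe) ** 2:.1f}× smaller")  # ≈ 5.5×
```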

So when I see this:

I struggle. I can tell they are different. But I definitely struggle to tell what’s being drawn.

Even zoomed in 20×, it’s still a mess:

Or here. These are three different icons:

Am I supposed to tell a plus sign from a sparkle here?

Some of these lines are half a pixel thicker than the others, and that’s supposed to be the main point:

Is this supposed to be an arrow?

A paintbrush?

Look, a tiny camera.

It even has a still tinier viewfinder, which you can almost see if you zoom in 20×:

Or here. There is a box, inside that box is a circle, and inside it is a tiny letter “i” with a total height of 2 pixels:

Don’t see it?

I don’t. But it’s there...

And this is a window! It even has traffic lights! How adorable:

Remember: these are Retina pixels, each a quarter the area of a pre-Retina pixel. Steve Jobs himself claimed they were invisible.

It turns out there’s a magic number right around 300 pixels per inch, that when you hold something around to 10 to 12 inches away from your eyes, is the limit of the human retina to differentiate the pixels.

And yet, Tahoe icons rely on you being able to see them.

Pixel grid

When you have so little space to work with, every pixel matters. You can make a good icon, but you have to choose your pixels very carefully.

For Tahoe icons, Apple decided to use vector fonts instead of good old-fashioned bitmaps. It saves Apple resources—draw once, use everywhere. Any size, any display resolution, any font weight.

But there are downsides: fonts are hard to position vertically, their size doesn’t map directly to pixels, stroke width doesn’t map 1-to-1 to the pixel grid, etc. So, they work everywhere, but they also look blurry and mediocre everywhere:

Tahoe icon (left) and its pixel-aligned version (right).

They certainly start to work better once you give them more pixels.

iPad OS 26 vs macOS 26

or make graphics simpler. But the combination of small details and tiny icon size is deadly. So, until Apple releases MacBooks with 380+ DPI, unfortunately, we still have to care about the pixel grid.
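For the curious, here is a toy sketch in Python of what “caring about the pixel grid” means in practice. It is purely illustrative, not Apple’s rendering code: a hypothetical 1.25 pt stroke at 2× Retina scale covers 2.5 device pixels, so either the renderer anti-aliases the leftover half pixel into the blur shown above, or someone snaps the width to whole pixels by hand.

```python
# Illustration only: snapping a stroke width (in points) to whole device pixels.
# A 1.25 pt stroke at 2x Retina scale covers 2.5 device pixels; the extra half
# pixel gets anti-aliased, which is where the blur comes from.

def snap_to_pixel_grid(stroke_pt: float, scale: float) -> float:
    """Round a stroke width to a whole number of device pixels,
    returned back in points."""
    device_px = max(1, round(stroke_pt * scale))  # never thinner than one pixel
    return device_px / scale

print(snap_to_pixel_grid(1.25, 2.0))  # 2.5 px -> snapped to 2 px -> 1.0 pt
print(snap_to_pixel_grid(1.25, 3.0))  # 3.75 px -> snapped to 4 px -> ~1.33 pt
```

Bitmap icons got this alignment for free, because a designer placed every pixel; glyphs scaled from a vector font do not.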

Confusing metaphors

Icons might serve another function: to help users understand the meaning of the command.

For example, once you know the context (move window), these icons explain what’s going on faster than words:

But for this to work, the user must understand what’s drawn on the icon. It must be a familiar object with a clear translation to computer action (like Trash can → Delete), a widely used symbol, or an easy-to-understand diagram. HIG:

A rookie mistake would be to misrepresent the object. For example, this is what selection looks like:

But its icon looks like this:

Honestly, I’ve been writing this essay for a week, and I still have zero ideas why it looks like that. There’s an object that looks like this, but it’s a text block in Freeform/Preview:

It’s called character.textbox in SF Symbols:

Why did it become a metaphor for “Select all”? My best guess is it’s a mistake.

Another place uses text selection from iOS as a metaphor. On a Mac!

Some concepts have obvious or well-established metaphors. In that case, it’s a mistake not to use them. For example, bookmarks: . Apple, for some reason, went with a book:

Sometimes you already have an interface element and can use it for an icon. However, try not to confuse your users. Dots in a rectangle look like password input, not permissions:

The icon here says “Check,” but the action is “Uncheck.”

A terrible mistake: the icon doesn’t help; it actively confuses the user.

It’s also tempting to construct a two-level icon: an object and some sort of indicator. Like, a checkbox and a cross, meaning “Delete checkbox”:

Or a user and a checkmark, like “Check the user”:

Unfortunately, constructs like this rarely work. Users don’t build sentences from building blocks you provide; they have no desire to solve these puzzles.

Finding metaphors is hard. Nouns are easier than verbs, and menu items are mostly verbs. How does open look? Like an arrow pointing to the top right? Why?

I’m not saying there’s an obvious metaphor for “Open” Apple missed. There isn’t. But that’s the point: if you can’t find a good metaphor, using no icon is better than using a bad, confusing, or nonsensical icon.

There’s a game I like to play to test the quality of the metaphor. Remove the labels and try to guess the meaning. Give it a try:

It’s delusional to think that there’s a good icon for every action if you think hard enough. There isn’t. It’s a lost battle from the start. No amount of money or “management decisions” is going to change that. The problems are 100% self-inflicted.

All this being said, I gotta give Apple credit where credit is due. When they are good at choosing metaphors, they are good:

Symmetrical actions

A special case of a confusing metaphor is using different metaphors for actions that are direct opposites of one another. Like Undo/Redo, Open/Close, Left/Right.

It’s good when their icons use the same metaphor:

Because it saves you time and cognitive resources. Learn one, get another one for free.

Because of that, it’s a mistake not to use common metaphors for related actions:

Or here:

Another mistake is to create symmetry where there is none. “Back” and “See all”?

Some menus in Tahoe make both mistakes. E.g. lack of symmetry between Show/Hide and false symmetry between completed/subtasks:

Import not mirrored by Export but by Share:

Text in icons

HIG again:

The authors of the HIG argue against including text as part of an icon. So something like this:

or this:

would not fly in 1992.

I agree, but Tahoe has more serious problems: icons consisting only of text. Like this:

It’s unclear where “metaphorical, abstract icon text that is not supposed to be read literally” ends and actual text starts. They use the same font and the same color, so how am I supposed to differentiate? Icons just get in the way: A...Complete? AaFont? What does it mean?

I can maybe understand and . Dots are supposed to represent something. I can imagine thinking that led to . But ? No decorations. No effects. Just plain Abc. Really?

Text transformations

One might think that using icons to illustrate text transformations is a better idea.

Like, you look at this:

or this:

or this:

and just from the icon alone understand what will happen to the text. The icon illustrates the action.

Also, BIU are well-established in word processing, so all upside?

Not exactly. The problem is the same—a text icon looks like text, not an icon. Plus, these icons are redundant. What’s the point of taking the first letter and repeating it? The word “Bold” already starts with the letter “B” and reads just as easily, so why double it? Look at it again:

It’s also repeated once more as a shortcut...

There is a better way to design this menu:

And it has been known to Apple for at least 33 years.

System elements in icons

The operating system, of course, uses some visual elements for its own purposes. Like window controls, resize handles, cursors, shortcuts, etc. It would be a mistake to use those in icons.

Unfortunately, Apple fell into this trap, too. They reused arrows.

Key shortcuts:

The HIG has an entire section specifically on the ellipsis and how dangerous it is to use it anywhere else in a menu.

And this exact problem is in Tahoe, too.

Icons break scanning

Without icons, you can just scan the menu from top to bottom, reading only the first letters. Because they all align:

macOS Sequoia

In Tahoe, though, some menu items have icons, some don’t, and they are aligned differently:

Some items can have both checkmarks and icons, or have only one of them, or have neither, so we get situations like this:

Ugh.

Special mention

This menu deserves its own category:

Same icon for different actions. Missing the obvious metaphor. Somehow making the first one slightly smaller than the second and third. Congratulations! It got it all.

Is HIG still relevant?

I’ve been mentioning HIG a lot, and you might be wondering: is an interface manual from 1992 still relevant today? Haven’t computers changed so much that entirely new principles, designs, and idioms apply?

Yes and no. Of course, advice on how to adapt your icons to black-and-white displays is obsolete. But the principles—as long as they are good principles—still apply, because they are based on how humans work, not how computers work.

Humans don’t get a new release every year. Our memory doesn’t double. Our eyesight doesn’t become sharper. Attention works the same way it always has. Visual recognition, motor skills—all of this is exactly as it was in 1992.

So yeah, until we get a direct chip-to-brain interface, HIG will stay relevant.

Conclusion

In my opinion, Apple took on an impossible task: to add an icon to every menu item. There are just not enough good metaphors to do something like that.

But even if there were, the premise itself is questionable: giving everything an icon doesn’t mean users will find what they’re looking for faster.

And even if the premise were solid, I still wish I could say they did the best they could, given the goal. But that’s not true either: they did a poor job of consistently applying the metaphors and of designing the icons themselves.

I hope this article will be helpful in avoiding the common mistakes of icon design, all of which Apple managed to collect in one OS release. I love computers, I love interfaces, I love visual communication. It makes me sad to see perfectly good knowledge that was already accessible 30 years ago being completely ignored or thrown away today.

On the upside: it’s not that hard anymore to design better than Apple! Let’s drink to that. Happy New Year!

From SF Symbols: a smiley face calling somebody on the phone

Notes

During the review of this post, I was made aware of Jim Nielsen’s article, which hits a lot of the same points I do. I take that as a sign there’s some common truth behind our reasoning.

Also note: the Safari → File menu got worse since 26.0. It used to have only 4 icons; now it’s 18!

Thanks Kevin, Ryan, and Nicki for reading drafts of this post.


The role of AI in the death of my father


Joseph Neal Riley, 1949-2025

First, thank you to everyone who reached out to me over the holidays to offer their condolences on my father’s passing. One touching consequence of writing a newsletter is receiving words of support from people I’ve never met in person, a connection with those who might otherwise be strangers but for having read what I’ve shared here. I am deeply appreciative of your kind words; they’ve meant a lot during my grieving.

Along those same lines, I hope you’ll forgive me if I spend one more week telling you about my dad, because as you’ll soon learn, AI played a bizarre role both in fostering our relationship and, perhaps, in hastening his death. To understand what happened will require that I share with you some intimate details about who he was and who I am, things he struggled with and the struggles between us, and how the intersection of his curiosity with this new technology proved to be one of my most vexing challenges over the last year.

So, to start: My father was trained as a neuroscientist. He received his PhD in 1977 from the University of Florida, which explains the odd fact that I was born in Gainesville, a city I’ve yet to ever visit (my parents moved away when I was just a few weeks old). After he completed his post-doc in southern California, my family moved to Long Island so he could join the newly formed department of neurology at SUNY Stony Brook as an assistant professor. Over the next several years, he published around 50 research articles, predominantly on the effects of sustained drug use on the brain.

Then, circa 1983, something happened that would upend the course of my family’s future. For reasons that remain shrouded in some mystery, my father stopped working, and he would spend the rest of his life on disability. I was only seven at the time, but I remember my dad was “sick,” and was making frequent visits to see specialists in New York City. But sick with what exactly, you might wonder, and me too, for all my life. Because the doctors could never quite pinpoint anything specifically wrong with him, at least physically. According to medical reports that I only found a few weeks ago, buried deep in my dad’s filing cabinet, neurological examinations suggested he possessed an extraordinarily high degree of verbal capability, but struggled with relatively simple logical tasks and problem solving, a “highly unusual combination,” as one examiner put it. Some of his doctors speculated he might have some form of encephalitis, an inflammation of the brain, but others found no evidence of that.

Whatever was going on, the upshot is that my father never worked again in his life. This, as you might imagine, was the source of considerable stress in my family, especially since my mother worked as a school librarian, which of course is not a lucrative career choice. I remember being shocked, and frankly deeply resentful, when I figured out in high school that our family of five (I have one sister and one brother) was technically living below the poverty line. I loathed this state of affairs and blamed my parents, my dad in particular, for the limitations that resulted. Later in my life, I would come to recognize that growing up in these circumstances helped shape me to become self-sufficient and independent in ways I’m now fiercely proud of. Indeed, once I started moving into the world of the rich and privileged and realized how fucked up so many people seem to be when they live a life without constraints, I even became grateful. But in my 20s and into my 30s, well, my relationship with my dad was strained, to say the least.

Although my father was unemployed for nearly all of his adult life, he never lost his intellectual curiosity, and thus he developed an unusual medley of interests to keep his mind active. One of these was the JFK assassination; he became part of the “conspiracy buff” community—we’ll save that story for another time. But another enduring area of interest was technology, where he was remarkably prescient in seeing where things were headed.

To wit: My family was the very first on the block to have a “microcomputer” in our home in the form of a Commodore 64 (and later the Amiga, which I think my dad felt genuine affection for). He also got active on what were called electronic “bulletin board systems” or BBSs, the precursor to the Internet really, where people connected their computers to their phone lines to send files back and forth at rates so painfully slow it defies modern comprehension. What’s more, back in the day the monopolistic phone companies charged exorbitant rates for making “long distance” phone calls, which meant that some BBS users, including my dad, looked for workarounds by hacking (or “phreaking”) phone codes to make free dial-up connections. This was illegal, of course, and when I was in eighth grade my dad was arrested and charged with multiple felony counts of theft by computer intrusion—we’ll save that story for another time, too.

My point is that my dad was always interested in both the brain and technology, so when AI in the form of large language models was dropped into our world, he was absolutely fascinated—and so was I. As such, when I began my own efforts to understand how these models were doing what they were doing, he was right there alongside me for the intellectual journey. And for all my critiques of this new technology, I will forever be grateful that AI created a path for my dad and me to have so many rich conversations over the past two years about its functioning. It’s fair to say it helped restore our relationship, and created a new bond between us, as we together tried to figure out just how similar (or not) its processes are to human cognition. Cognitive Resonance exists in part because of these conversations, as I realized the exploration my father and I were mutually sharing might be of broader interest to the world. (A hypothesis I’m still testing.)

Which is why it’s such a strange, tragic irony that AI played a non-trivial role in the health crisis that led to my dad’s death.

As I’ve shared with you previously, about 18 months ago my father was diagnosed with lung cancer, kidney disease, and Chronic Lymphocytic Leukemia (CLL). Upon receiving the news, he quickly addressed the lung cancer via radiation treatment, and—after some false starts—was eventually able to successfully treat his kidneys as well. But the CLL, well, that’s a more complicated story, and it’s where his use of AI likely hastened his declining health, and magnified the pain he endured.

Here’s what happened: Not long after his CLL diagnosis, my dad’s oncologist recommended he start “Venetoclax-Obinutuzumab” treatment (Ven-Obi), a relatively new approach to addressing CLL that’s proven remarkably effective both at extending patients’ life expectancy and at reducing physical suffering. I did not know his doctor was urging this, however, because my dad did not tell me or my siblings. Instead, my father became convinced that he was undergoing something called Richter’s Transformation, a rare complication of CLL that is particularly painful. There was no evidence of this, medically, but my dad nonetheless believed it was happening to him, and that as a result he should refrain from treating his CLL with Ven-Obi because it would only make things worse.

And my father believed that because that’s what Perplexity AI told him.

It was a shock, as you might imagine. I only discovered what was going on when my father gave me access to his online medical record, allowing me to peer into his long-running correspondence with his oncologist. From that I learned that my dad had used Perplexity to self-diagnose his condition and had sent the Perplexity report, if it can be called that, along to his very perplexed and frustrated doctor. Given that I’d spent the better part of a year talking with my father about the unreliability of factual statements made by AI, you can only imagine my extreme frustration upon discovering that my efforts had utterly failed within my own family.

AI enthusiasts, whether in education or more broadly, will often try to cover their asses from responsibility for non-factual statements by AI models by saying, “well, you always need to check their output.” As a general matter, that’s a ludicrous claim, since the whole value proposition of these tools is to spare us cognitive effort—but in this instance, it’s exactly what I did. I contacted the doctors who led the study that Perplexity cited in support of its statement that refraining from Ven-Obi was the proper course of action for someone with Richter’s. Much to my surprise, both doctors replied straightaway, and confirmed what I already knew to be true, that Perplexity had misstated the conclusion of their research, and that my father should follow the course of treatment his oncologist was recommending.

Of course I immediately passed this information along to my dad, desperately hoping to appeal to his scientific and empirically oriented belief system. But he didn’t respond at all. I was yelling into the void. It was only after several more months had passed, and his physical condition had continued to worsen dramatically, that he finally agreed to start the Ven-Obi treatment his oncologist had recommended a year prior. It didn’t seem to matter at that point, sadly. Although the treatment immediately reduced his white blood cell count, his pain endured, and culminated in his death just a few weeks ago.

I am obviously still grappling with all this, and I don’t want to overstate my case. I don’t think AI killed my father. I think it’s possible, perhaps even likely, that in a world without AI, he would still have latched on to some other piece of research to support his disposition against medical treatment, as he had deep misgivings—fear, really—about spending time in hospitals. Nonetheless, the fact remains that AI does exist in our world, and just as it can serve as fuel to those suffering manic psychosis, so too may it affirm or amplify our mistaken understanding of what’s happening to us physically and medically. (OpenAI claims to be limiting the use of ChatGPT to provide “tailored” medical advice that requires a license, but the head of its medical research team maintains “it will continue to be a great resource to help people understand legal and health information” —sure, if you say so.)

In the course of discussing AI with my dad, I am fairly certain he moved toward becoming more generally skeptical of it. Toward the end of his life, he started sending me articles and YouTube videos about the limitations of these tools. Still, I will forever wonder whether my efforts came too late, and whether he might still be with us if I’d been more effective in undermining the authoritative tone AI strikes when generating its tokens. There’s nothing I can do to change the past, of course. But I can for damn sure keep working to raise the consciousness of others.

A fire has been lit, even while my heart still hurts.

From Smith’s poetry book Good Bones (discovered via Litbowl here)

My father created a playlist he labelled “Wake,” which doubles both as testament to his great taste in music and elegy for what’s happening in America. You’ll be doing me and his memory a great honor if you give it a listen. He’d like knowing it found an audience.

