
Working on products people hate


I’ve worked on a lot of unpopular products.

At Zendesk I built large parts of an app marketplace that was too useful to get rid of but never polished enough to be loved. Now I work on GitHub Copilot, which many people think is crap[1]. In between, I had some brief periods where I worked on products that were well-loved. For instance, I fixed a bug where popular Gists would time out once they got more than thirty comments, and I had a hand in making it possible to write LaTeX mathematics directly into GitHub markdown[2]. But I’ve spent years working on products people hate[3].

If I were a better developer, would I have worked on more products people love? No. Even granting that good software always makes a well-loved product, big-company software is made by teams, and teams are shaped by incentives. A very strong engineer can slightly improve the quality of software in their local area. But they must still write code that interacts with the rest of the company’s systems, and their code will be edited and extended by other engineers, and so on until that single engineer’s heroics are lost in the general mass of code commits. I wrote about this at length in How good engineers write bad code at big companies.

Looking back, I’m glad that people have strongly disliked some of the software I’ve built, for the same reason that I’m glad I wasn’t born into oil money. If I’d happened to work on popular applications for my whole career, I’d probably believe that that was because of my sheer talent. But in fact, you would not be able to predict the beloved and disliked products I worked on from the quality of their engineering. Some beloved features have very shaky engineering indeed, and many features that failed miserably were built like cathedrals on the inside[4]. Working on products people hate forces you to accept how little control individual engineers have over whether people like what they build.

In fact, a reliable engineer ought to be comfortable working on products people hate, because engineers work for the company, not for users. Of course, companies want to delight their users, since delighted users will pay them lots of money, and at least some of the time we’re lucky enough to get to do that. But sometimes they can’t: for instance, they might have to tighten previously-generous usage limits, or shut down a beloved product that can’t be funded anymore. Sometimes a product is funded just well enough to exist, but not well enough to be loved (like many enterprise-grade box-ticking features) and there’s nothing the engineers involved can do about it.

It can be emotionally difficult working on products that people hate. Reading negative feedback about things you built feels like a personal attack, even if the decisions they’re complaining about weren’t your decisions. To avoid this emotional pain, it’s tempting to make the mistake of ignoring feedback entirely, or of convincing yourself that you’re much smarter than the stupid users anyway. Another tempting mistake is to go too far in the other direction: to put yourself entirely “on the user’s side” and start pushing your boss to do the things they want, even if it’s technically (or politically) impossible. Both of these are mistakes because they abdicate your key responsibility as an engineer, which is to try and find some kind of balance between what’s sustainable for the company and what users want. That can be really hard!

There’s also a silver lining to working on disliked products, which is that people only care because they’re using them. The worst products are not hated, they are simply ignored (and if you think working on a hated product is bad, working on an ignored product is much worse). A product people hate is usually providing a fair amount of value to its users (or at least to its purchasers, in the case of enterprise software). If you’re thick-skinned enough to take the heat, you can do a lot of good in this position. Making a widely-used but annoying product slightly better is pretty high-impact, even if you’re not in a position to fix the major structural problems.

Almost every engineer will work on a product people hate. That’s just the law of averages: user sentiment waxes and wanes over time, and if your product doesn’t die a hero it will live long enough to become the villain. Given that, it’s sensible to avoid blaming the engineers who work on unpopular products. Otherwise you’ll end up blaming yourself, when it’s your turn, and miss the best chances in your career to have a real positive impact on users.


  1. We used to be broadly liked, then disliked when Cursor and Claude Code came out, and now I’m fairly sure the Copilot CLI tool is changing people’s minds again. So it goes.

  2. Although even that got some heated criticism at the time.

  3. Of course, I don’t mean “every single person hates the software”, or even “more than half of its users hate it”. I just mean that there are enough haters out there that most of what you read on the internet is complaints rather than praise.

  4. This is reason number five thousand why you can’t judge the quality of tech companies from the outside, no matter how much you might want to (see my post on “insider amnesia”).

Read the whole story
mrmarchant
4 hours ago

Endgame for the Open Web


You must imagine Sam Altman holding a knife to Tim Berners-Lee's throat.

It's not a pleasant image. Sir Tim is, rightly, revered as the genial father of the World Wide Web. But, all the signs are pointing to the fact that we might be in endgame for "open" as we've known it on the Internet over the last few decades.

The open web is something extraordinary: anybody can use whatever tools they have, to create content following publicly documented specifications, published using completely free and open platforms, and then share that work with anyone, anywhere in the world, without asking for permission from anyone. Think about how radical that is.

Now, from content to code, communities to culture, we can see example after example of that open web under attack. Every single aspect of the radical architecture I just described is threatened, by those who have profited most from that exact system.

Today, the good people who act as thoughtful stewards of the web infrastructure are still showing the same generosity of spirit that has created opportunity for billions of people and connected society in ways too vast to count while, not incidentally, also creating trillions of dollars of value and countless jobs around the world. But the increasingly-extremist tycoons of Big Tech have decided that that’s not good enough.

Now, the centibillionaires have begun their final assault on the last, best parts of what's still open, and likely won't rest until they've either brought all of the independent and noncommercial parts of the Internet under their control, or destroyed them. Whether or not they succeed is going to be decided by decisions that we all make as a community in the coming months. Even though there have always been threats to openness on the web, the stakes have never been higher than they are this time.

Right now, too many of the players in the open ecosystem are still carrying on with business as usual, even though those tactics have been failing to stop big tech for years. I don't say this lightly: it looks to me like 2026 is the year that decides whether the open web as we know it will survive at all, and we have to fight like the threat is existential. Because it is.

What does the attack look like?

Calling this threat "existential" is a strong statement, so we should back that up with evidence. The point I want to make here is that this is a lot broader than just one or two isolated examples of trying to win in one market. What we are seeing is the application of the same market-crushing techniques that were used to displace entire industries with the rise of social media and the gig economy, now being deployed across the very open internet infrastructure that made the modern internet possible.

The big tech financiers and venture capitalists who are enabling these attacks are intimately familiar with these platforms, so they know the power and influence that they have — and are deeply experienced at dismantling any systems that have cultural or political power that they can't control. And since they have virtually infinite resources, they're able to carry out these campaigns simultaneously on as many fronts as they need to. The result is an overwhelming wave of threats. It's not a coordinated conspiracy, because it doesn't need to be; they just all have the same end goals in mind.

Some examples:

  • Publishers who still share their content openly, either completely free for their audience, as advertising-supported content, or with a limited amount of content available until they ask for some form of payment, are being absolutely hammered by ill-behaved AI bots. These bots are scouring their sites for every available bit of content, scraping all of it up to feed their LLMs, and then making summaries of that content available to users — typically without consent or compensation. The deal was always simple: search engines had permission to crawl sites because they were going to be sending users to those sites. If they're hitting your site half a million times for every one user they send to your site, all they're giving you is higher costs.
  • LLM-based AI platforms that have trained their AI models on this content gathered without consent typically have almost no links back to the original source content, and either bury or omit credits to the original site; as a result, publishers in categories like tech media have seen their traffic crater by over 50%, with some publishers seeing drops of over 90%.
  • As publishers see the danger from AI bots expand, they retreat to putting more and more content behind either password protection or payment walls or both, leaving the only publicly-accessible content to be AI-generated slop; open resources like research work, scientific analysis, and fair use of content all suffer as a result of people responding to the bad actors, since legitimate uses of open content are no longer possible. We're seeing this already as publishers block archival sites like the Internet Archive, even though we've already seen examples where the Internet Archive was the only accurate record of content that was disappeared by authoritarians in the current administration.
  • Open APIs, a building block of how developers build new experiences for users, and for how researchers understand people's behavior online, are rapidly being locked down due to abuse from LLMs, as well as the extremist CEOs not wanting anyone to understand what's happening on their platforms. The clamping down doesn't just affect coders — the people who were best poised to help monitor and translate what's been happening on platforms like Twitter have seen their work under siege, with over 60% of research projects on the platform stalled or abandoned just since Musk shut down their open API access.
  • Independent media based on open formats, like podcasts, are also under siege as platforms like Apple's podcasts move to closed infrastructure which means that content creators are now required to work with Apple's approved partners. Meanwhile, others like Spotify and Netflix leverage their dominant positions in the market to coerce creators to abandon open podcasts entirely, in favor of proprietary formats that require listeners to be on those platforms — locking in both creators and their audiences so they are stuck as they begin the enshittification process. The net result will be podcasts moving from being an open format that isn't controlled by either any one company or any manipulative algorithms, to just another closed social platform monetized by surveillance-based advertising.
  • Open source software projects, which power the vast majority of the internet's infrastructure, are now beleaguered by constant slop code submissions being made by automated AI code agents. These submissions attempt to look like legitimate open source code contributions, and end up overwhelming the largely-underpaid, mostly-volunteer maintainers of open source projects. Dozens of the most popular open source projects have either greatly limited, or even entirely closed their projects to community-based submissions from new contributors as a result. In addition to slowing down and disrupting the open source ecosystem's collaboration model, there's also collateral damage with the destruction of one of the best paths for new coders to establish their credentials, build relationships, and learn to be part of the coding community.
  • The most vital open content platforms, like Wikipedia, are under direct attack from bad-faith campaigns. Elon Musk has created Grokipedia to directly undermine Wikipedia with extremist hate content and conspiracist nonsense, by siphoning off traffic, revenues, and contributors from the site. All of this happens while launching spurious attacks on the credibility of the content on Wikipedia, which have led to such radical rhetoric around the site that gatherings of Wikipedia editors now face interruptions from armed attackers. Meanwhile, Wikipedia's human traffic has dropped significantly as AI platforms trained on its content answer users' questions without ever sending them to the site — a pattern that threatens the volunteer contributions and donations that keep it alive.
  • The open standards and specifications that underpin the Internet as we know it have always succeeded solely on the basis of there being a shared set of norms and values that make them work. In this way, they're like laws — only as strong as the society that agrees they ought to be enforced. A simple text file called robots.txt functioned for decades to describe the way that tools like search engines ought to behave when accessing content on websites, but now it is effectively dead as Big AI companies unilaterally decided to ignore more than a generation of precedent, and do whatever they want with the entirety of the web, completely without consent. Similarly, long-running efforts like Creative Commons and other community-driven attempts at creating shared declarations or definitions for content use are increasingly just ignored.
  • Open source software licenses, which used to be a bedrock of the software community because they provide a consistent way of encoding a set of principles in the form of a legal contract, are now treated as a minor obstacle which can be trivially overcome using LLMs. This means that it's possible to clone code and turn community-driven projects into commercial products without even having to credit the people who invented the original work, let alone compensating them or asking for consent. Many of these efforts are especially egregious because the reason the tools are able to perform this task is because they were trained on this open source code in the first place.
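To make the robots.txt point concrete: the protocol is nothing more than a text file plus a norm, with no enforcement mechanism at all. Python’s standard library even ships a parser for the format; here is a minimal sketch (the bot name and paths are invented for illustration):

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: generic crawlers may index public pages,
# while a (made-up) AI training bot is asked to stay out entirely.
robots_txt = """\
User-agent: ExampleAIBot
Disallow: /

User-agent: *
Disallow: /private/
""".splitlines()

rp = RobotFileParser()
rp.parse(robots_txt)

# A well-behaved crawler checks before fetching:
assert rp.can_fetch("GenericCrawler", "https://example.com/articles/1")
assert not rp.can_fetch("GenericCrawler", "https://example.com/private/x")

# The AI bot is disallowed everywhere -- but nothing *enforces* this;
# honoring the answer is entirely up to the crawler.
assert not rp.can_fetch("ExampleAIBot", "https://example.com/articles/1")
```

The whole system rests on crawlers voluntarily calling the equivalent of `can_fetch` and respecting the answer, which is exactly why unilateral defection kills it.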

The human cost

The threat to the open web is far more profound than just some platforms that are under siege. The most egregious harm is the way that the generosity and grace of the people who keep the web open is being abused and exploited. Those people who maintain open source software? They're hardly getting rich — that's thankless, costly work, which they often choose instead of cashing in at some startup. Similarly, volunteering for Wikipedia is hardly profitable. Defining super-technical open standards takes time and patience, sometimes over a period of years, and there's no fortune or fame in it.

Creators who fight hard to stay independent are often choosing to make less money, to go without winning awards or the other trappings of big media, just in order to maintain control and authority over their content, and because they think it's the right way to connect with an audience. Publishers who've survived through year after year of attacks from tech platforms get rewarded by… getting to do it again the next year. Tim Berners-Lee is no billionaire, but none of those guys with the hundreds of billions of dollars would have all of their riches without him. And the thanks he gets from them is that they're trying to kill the beautiful gift that he gave to the world, and replace it with a tedious, extortive slop mall.

So, we're in endgame now. They see their chance to run the playbook again, and do to Wikipedians what Uber did to cab drivers, to get users addicted to closed apps like they are to social media, to force podcasters to chase an algorithm like kids on TikTok. If everyone across the open internet can gather together, and see that we're all in one fight together, and push back with the same ferocity with which we're being attacked, then we do have a shot at stopping them.

At one time, it was considered impossibly unlikely that anybody would create open technologies that would succeed in being useful for people, let alone that they would become a daily part of enabling billions of people to connect and communicate and make their lives better. So I don’t think it’s any more unlikely that the same communities can summon that kind of spirit again, and beat back the wealthiest people in the world, to ensure that the next generation gets to have these same amazing resources to rely on for decades to come.

Taking action

Alright, if it’s not hopeless, what are the concrete things we can do? The first thing is to directly support organizations in the fight: either those that are at risk, or those that are protecting those at risk. You can give directly to support the Internet Archive, or volunteer to help them out. Wikipedia welcomes your donation or your community participation. The Electronic Frontier Foundation is fighting for better policy and to defend your rights on virtually all of these issues; it could use your support, and it provides a list of ways to volunteer or take action. The Mozilla Foundation can also use your donations and is driving change. (And full disclosure — I’m involved in pretty much all of these organizations in some capacity, ranging from volunteer to advisor to board member. That’s because I’m trying to make sure my deeds match my words!) These are the people whom I've seen, with my own eyes, stay the hand of those who would hold the knife to the necks of the open web's defenders.

Beyond just what these organizations do, though, we can remember how much the open web matters. I know from my time on the board of Stack Overflow that we got to see the rise of an incredibly generous community built around sharing information openly, under open licenses. Few platforms in history have helped more people achieve economic mobility: an enormous number of people got good-paying jobs as coders as a result of the information on that site. And then we got to see the toll extractive LLMs took when they trained models on the generosity of that site's members, without any consideration for the impact on the community and without reciprocating in kind.

The good of the web only exists because of the openness of the web. They can't just keep on taking and taking without expecting people to finally draw a line and say "enough". And interestingly, opportunities might exist where the tycoons least expect it. I saw Mike Masnick's recent piece where he argued that one of the things that might enable a resurgence of the open web might be... AI. It would seem counterintuitive to anyone who's read everything I've shared here to imagine that anything good could come of these same technologies that have caused so much harm.

But ultimately what matters is power. It is precisely because technologies like LLMs have power that the authoritarians have rushed to take them over and wield them as effectively as they can. I don't think that platforms owned and operated by those bad actors can be the tools that disrupt their agenda. I do think it might be possible that the creative communities that built the web in the first place could use their same innovative spirit to build what could be, for lack of a better term, called "good AI". It’s going to take better policy, which may be impossible in the short term at the federal level in the U.S., but can certainly happen at more local levels and in the rest of the world. Though I’m skeptical about putting too much of the burden on individual users, we can certainly change culture and educate people so that more people feel empowered and motivated to choose alternatives to the big tech and big AI platforms that got us into this situation. And we can encourage harm reduction approaches for the people and institutions that are already locked into using these tools, because as we’ve seen, even small individual actions can get institutions to change course.

Ultimately I think, if given the choice, people will pick home-cooked, locally-grown, heart-felt digital meals over factory-farmed fast food technology every time.


An appreciation for (technical) architecture


Once upon a time I kept on meeting architects who had ended up working with the web.

I asked why. Some good answers:

  • Architects think about how people move between spaces (pages) and what that means for user experience - this was at a time when web designers often came from graphic design and drew more on single page layout
  • Architects think about negative space, and how what you put in a space shapes social behaviour – this was in the era before the social web
  • Architects have to work with a lot of different disciplines to make something, and all of those people believe they’re the most important person in the room, and that’s what product teams are like too - lol

I’m not an architect but some of my favourite books are about architecture.

Here are three:

Two things that architecture does have been on my mind recently: how it shapes understanding and how it shapes its own evolution.


Information architecture

It’s a rare designer who operates at both the macro of strategy and culture and organisations, and the micro of craft and taste and interactions.

Jeff Veen is one. I remember him saying to me once: "Design is about creating the right mental model for the user."

(Now clearly design is not only about that, but for the particular problem I took to Veen, he said precisely what I needed to hear to get un-stuck.)

So I love thinking about the primitives of functionality and content for the user and how they relate, such that the user can reason intuitively about what they can do with the system, and how.

And this is an interactive process: for a first time user, how do they first encounter a system and how do they way-find and learn over time?

And this is a cognitive process: mental models are abstract; what we perceive is real. So how does understanding happen?

(AI agents are using my software. Prioritise clarity over feels.)


Don Norman wrote The Design of Everyday Things (1988), much loved by web designers, and popularised “user-centred design.”

Norman also brought into design the term affordance from cognitive psychology. As coined by J J Gibson: "to perceive something is also to perceive how to approach it and what to do about it" (as previously discussed).

The best way to notice affordances is to notice where they go wrong! Norman doors:

Some doors require printed instructions to operate, while others are so poorly designed that they lead people to do the exact opposite of what they need to in order to open them. Their shapes or details may suggest that pushing should work, when in fact pulling is required (or the other way around).

Whenever you see a PUSH label stuck on as an extra, it’s papering over a Norman door.

I was delighted to encounter a Norman door irl this week.

So I’m stretching the definition of architecture here, to include this, but roll with it pls. Architecture is how things are understood.


Architecture is how things evolve – how they’re allowed to evolve.

There’s a beautiful housing estate on the top of a hill in south London.

Dawson’s Heights (1964) is shaped like an offset double wave, and looks different on the horizon from every angle and with every change of the light. Yet up-close it’s human-scale too, despite its 10 storeys.

Lead architect Kate Macintosh wanted residents to have balconies, but this was regarded as "wasting public money on unnecessary luxuries".

Knowing that they would be removed from her designs for cost-saving, she made them essential:

all the balconies on Dawson’s Heights are fire escape balconies, but they are also private balconies because the escape door is a “break glass to enter” type lock so you can securely use your balcony for whatever you like.


Technical architecture

So software architecture is also team structure - who needs to talk to whom - but also how to make sure that doing something the quick and dirty way is also doing it the right way.

Half of software architecture is making sure that somebody can fix a bug in a hurry, add features without breaking it, and be lazy without doing the wrong thing.

I said in 2004.

I think this goes for internal software architecture and for libraries that you import.


The thing about agentic coding is that agents grind problems into dust. Give an agent a problem and a while loop and - long term - it’ll solve that problem even if it means burning a trillion tokens and re-writing down to the silicon.

Like, where’s the bottom? Why not take a plain English spec and grind it out in pure assembly every time? It would run quicker.

But we want AI agents to solve coding problems quickly and in a way that is maintainable and adaptive and composable (benefiting from improvements elsewhere), and where every addition makes the whole stack better.

So at the bottom is really great libraries that encapsulate hard problems, with great interfaces that make the “right” way the easy way for developers building apps with them. Architecture!
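A toy sketch of what such an interface looks like (my example, not from the post): in Python, supporting the context-manager protocol makes the laziest way to use a resource also the safe way, because cleanup happens even when the caller writes none.

```python
class Resource:
    """Stand-in for anything that must be released after use (file, lock, connection)."""

    def __init__(self):
        self.open = True

    def close(self):
        self.open = False

    # Supporting `with` means the obvious usage is also the leak-proof one.
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()
        return False  # never swallow exceptions

leaked = Resource()  # the manual path: nothing ever closes this one
assert leaked.open

with Resource() as r:  # the easy path: released automatically
    held = r
    assert held.open
assert not held.open  # closed without any explicit cleanup code
```

That is architecture doing its job: the caller who is "lazy without doing the wrong thing" gets correctness for free, which matters even more when the caller is an agent.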

While I’m vibing (I call it vibing now, not coding and not vibe coding), I am looking at lines of code less than ever before, and thinking about architecture more than ever before.

I am sweating developer experience even though human developers are unlikely to ever be my audience.

How do we make libraries that agents love?




Come at the king, you best not miss


Column view cut its teeth on NeXT computers…

…and blossomed on early versions of Mac OS X…

…but where I thought it really shone was the first iPods:

This was perhaps the most fun you could ever have navigating a hierarchy of things; it made sense what left/​right/up/down meant in this universe, to a point you could easily build a mental model of what goes where, even if your viewport was smaller than ever.

It was also a close-to-ideal union of software and hardware, admirable in its simplicity and attention to detail. This is where Apple practiced momentum curves, haptics (via a tiny speaker, doing haptic-like clicks), and handling touch programmatically (only the first iPod had a physically rotating wheel, later replaced by stationary touch-sensitive surfaces) – all necessary to make iPhone’s eventual multi-touch so successful. And iPhone embraced column views wholesale, for everything from the Music app (obvi), through Notes, to Settings.

Well, sometimes you don’t appreciate something until it’s taken away. Here are settings in the iOS version of Google Maps:

I am not sure why the designers chose to deviate from the standard, replacing a clear Y/X relationship with a more confusing Y/Z-that-looks-very-much-like-Y. They kept the chevrons hinting at the original orientation – and they probably had to, as vertical chevrons have a different connotation, but perhaps this was the warning sign right here not to change things.

I think the principle is, in general: if you’re reinventing something well-established, both your reasoning and your execution have to be really, really solid. I don’t think that has happened here. (Other Google apps seem to use the standard column view model.)


Mayonnaise is an instrument according to new study from Hellman’s

A viral SpongeBob quote has officially crossed into academic territory, with a new study confirming that mayonnaise can function as a musical instrument.


rssprint [re: i print my websites ...]


yes move slow, fix things, i like to print things to read off the internet as well. i have been working on a zine maker that would print directly from my rss feed for some time now. i seem to have a functioning version: rssprint.

it is a (python, though started as a zsh script) cli utility that provides a tui showing rss feeds that you can add. you can read a long preview of the article, and select to print it out. after selecting your articles it will build an html version of your booklet/zine with a toc, ready to print as a booklet. it even includes dithered b/w images! you can also select between three styles. fun!

here is the git to the rssprint project.

here are some images of my first mini-zine:

[images: three frames from the rssprint mini-zine]

maybe you could try this out move slow?
