
The Brennan Self-Balancing Monorail


This is so cool: in the early 1900s, a mechanical engineer named Louis Brennan invented a self-balancing train that ran on a single track. This video demonstrates how the train worked using a clever system of gyroscopes.

This is the Brennan Monorail, a train from the early 1900s that seemed to defy the laws of physics. Not only did it keep itself perfectly balanced on a single rail, but it mysteriously leaned into corners without any driver input.

It’s kind of incredible how well Brennan’s system worked. It’s ingenious. (via messy nessy)

Tags: engineering · inventions · Louis Brennan · physics · science · trains · video


So What if They Have My Data?

If you’d rather listen than read, I recorded an audio version of this essay for paid subscribers at the end of the post. Thank you for being here!

Sometime in the mid-2000s, most of us started handing over pieces of ourselves to the internet without giving the exchange a second thought. We created email accounts, signed up for social media, bought things online, downloaded apps, swiped loyalty cards, connected fitness trackers, stored photos in the cloud, and agreed to terms of service that almost none of us have ever read in full. We did this thousands of times over two decades and counting, and each interaction felt small enough to be inconsequential.

But the accumulation is enormous. More than 6 billion people now use the internet, and each one makes an estimated 5,000 digital interactions per day. Most of those interactions happen without our conscious awareness: a GPS ping, a page load, an app opening, a browser cookie refreshing, a device checking in with a cell tower. The average person in 2010 made an estimated 298 digital interactions per day. In fifteen years, that number multiplied more than sixteenfold. Those digital interactions produce records that can persist indefinitely: stored, copied, indexed, bought, sold, and combined with other records to build profiles of extraordinary detail.

If we’ve been online since the late 1990s or early 2000s, our data footprint can include social media accounts we’ve created, online purchases we’ve made, forums we’ve posted in, loyalty cards we’ve used, and apps we’ve installed going back decades. Some of that information lives on platforms we’ve long forgotten. Some of it was collected by companies that have since been acquired or dissolved, with our data potentially passing to successor entities we’ve never heard of. The digital life most of us have been living for 15 to 25 years has produced a layered, evolving archive that only grows more valuable to the people who buy and sell it as time goes on.

Most of us sense that something is off about all of this. In a 2023 survey, Pew Research found that roughly eight in ten Americans feel they have little to no control over the data companies collect about them, 71% are concerned about government data use, and 67% say they understand little to nothing about what companies are doing with their personal information. The concern is real and widespread. And so is the feeling of helplessness: 60% of Americans believe it’s impossible to go through daily life without having their data tracked. The unease is there. What’s missing is a clear picture of what’s happening on the other side of the transaction.


Librarians don’t just help you find information. We help you know what to do with it once you have it. Card Catalog applies that same expertise to the age of AI and information overload. Join 20K+ readers here ↓


What “My Data” Looks Like as a File

When we say “my data,” we tend to picture the things we’ve actively typed into a form: our name, email address, maybe a credit card number. But the scope of what companies collect and brokers sell extends far beyond what we’ve consciously shared. The majority of our data profile is generated not from what we’ve entered but from what we’ve done: where we’ve gone, what we’ve browsed, how long we’ve lingered, what we’ve bought, who we’ve contacted, and what patterns emerge when all of those behaviors are tracked over time.

Beyond the identifying details we'd expect (name, date of birth, government IDs), the categories of personal information being collected, sold, and traded include, among others:

  • Financial records: credit history, transaction logs, bank account activity, loan applications

  • Location data: GPS coordinates from our phones, Wi-Fi connection logs, cell tower pings, the places we’ve checked in and the routes we’ve traveled

  • Behavioral data: the sites we’ve browsed, the searches we’ve run, the products we’ve lingered on, the apps we’ve opened and how long we’ve spent inside them

  • Biometric data: fingerprints, facial recognition templates, voiceprints

  • Communications metadata: who we’ve contacted, when, how often, and from where, even when the content of the message itself isn’t captured

  • Health-related data: pharmacy purchases, fitness tracker output, symptom searches, insurance claims

  • Social data: our contacts, our connections, our group memberships, who we interact with and how frequently

These categories don’t exist in isolation. For example, a pharmacy purchase is one data point on its own. Combined with a location trail, a search history, and a social media profile, it becomes part of a behavioral mosaic that can be used to infer things we never disclosed: our health conditions, our financial stability, our family status, even our political orientation. Acxiom, one of the largest data brokers, advertises more than 10,000 unique data attributes in its consumer profiles. The profile that results from all of this collection isn’t a list of facts we shared. It’s a composite portrait assembled from fragments that we never intended to be read together.

Who Has It

“They have my data” implies a single owner, but the data ecosystem is a layered network, and each layer operates differently. The platform we signed up for, the trackers running behind the webpage we visited, the broker who bought our behavioral profile, and the insurance company that used it to adjust our premium are all separate organizations with separate business models. Understanding who “they” are requires seeing how these layers connect.

The most visible layer is the platforms and services we interact with directly: Google, Apple, Meta, Amazon, Microsoft, and hundreds of smaller apps and websites. These companies collect data as a condition of use. Google processes billions of searches per day (estimates range from 8.5 to 14 billion depending on the source and methodology). Facebook’s engineering team disclosed in 2014 that its data warehouse alone was ingesting about 600 terabytes of new data per day, and the volume has grown substantially since. Every interaction on these platforms produces records that are stored, indexed, and fed into behavioral models.

Behind those platforms sits an advertising and tracking infrastructure that most of us never see. When we load a webpage, dozens of third-party trackers can fire simultaneously, each one logging our browser type, our device, our location, our referring page, and our behavior on the site. A single visit to a news article or product page can involve data transmissions to 50 or more companies we’ve never interacted with directly. These ad networks and analytics firms build cross-platform behavioral profiles that follow us from device to device, assembling a picture of our habits that no single app or website could construct alone.
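As a toy illustration of that first-party/third-party split, here is a minimal sketch that classifies the hosts contacted during a single page load. The URLs below are invented for illustration, and real tracker detection is more involved (it compares registrable domains against curated blocklists rather than exact hostnames), but the basic bookkeeping looks like this:

```python
from urllib.parse import urlparse

# Hypothetical request log captured during one page load; all URLs invented.
PAGE = "https://news.example.com/article"
REQUESTS = [
    "https://news.example.com/style.css",
    "https://cdn.adnetwork-one.net/pixel.gif",
    "https://metrics.tracker-two.io/collect?uid=abc",
    "https://news.example.com/logo.png",
    "https://sync.broker-three.com/match",
]

def third_party_hosts(page_url, request_urls):
    """Return the set of request hosts that differ from the page's own host."""
    page_host = urlparse(page_url).hostname
    return {urlparse(u).hostname for u in request_urls
            if urlparse(u).hostname != page_host}

print(sorted(third_party_hosts(PAGE, REQUESTS)))
# → ['cdn.adnetwork-one.net', 'metrics.tracker-two.io', 'sync.broker-three.com']
```

On a real news page, the same loop run over the browser's network log routinely turns up dozens of hosts the reader never chose to contact.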

And finally, there are the end buyers: insurance companies, financial institutions, employers, landlords, retailers, political consultants, government agencies, and AI companies. These are the organizations that purchase brokered data and use it to make decisions that affect our lives directly, from the interest rate we’re offered on a loan to the price we’re shown for a pair of shoes to whether our rental application gets approved. The distance between the data we generated and the decision it informs can be vast, and the connection between the two is almost never disclosed.

Where It Lives and How It Moves

Our data doesn’t sit in one place waiting to be accessed. It’s distributed across more than 4,000 data centers in the United States alone, operated by technology companies, cloud providers, brokers, and government agencies. Cloud storage worldwide is projected to exceed 200 zettabytes in the coming years, a figure that translates to more than 200 trillion gigabytes.

But the numbers matter less than the mechanics. What makes the data economy so difficult to see is the way information flows between layers. A common path looks something like this: a fitness app collects our running routes and resting heart rate. The app’s developer shares that data with an analytics partner. The analytics partner sells aggregated behavioral data to a data broker. The broker combines it with our purchase history, our location patterns, and our credit profile, then sells the resulting bundle to an insurance company, a financial institution, or a marketing firm. At no point in this chain did we interact with anyone beyond the fitness app. We agreed to the app’s terms of service, which included a clause about sharing data with “third-party partners,” and the rest of the chain followed from there.

At each stage, the data is processed: cleaned, categorized, cross-referenced with other datasets, scored, and segmented. Algorithms organize us into consumer profiles based on inferred income, predicted purchasing behavior, estimated health risk, political leanings, and thousands of other variables. The processing is what transforms raw data into something commercially valuable, and it happens largely outside our awareness.

What They’re Using It For

Some of the ways our data gets used are familiar, and some of them are useful in ways we can feel. When a streaming platform recommends a film based on what we’ve watched before, or a search engine surfaces local results because it knows our location, or a retailer suggests a product similar to one we recently bought, what we’re seeing is data-driven personalization working as intended. Most of us have experienced moments where targeted content saved us time or introduced us to something we wouldn’t have found otherwise. The advertising model also funds a tremendous amount of the free content and services we use every day, from search engines to email to social media to news.

Insurance companies purchase lifestyle and behavioral data to assess risk and adjust premiums without ever asking us directly. If a data profile suggests certain health patterns, certain driving routes, or a certain kind of neighborhood, those details can shape what we’re offered and what we’re charged. The data acts as a proxy questionnaire we never filled out.

Financial institutions use brokered data for credit decisions, loan eligibility, and fraud detection. Some of that serves a protective function. But the algorithms doing this work are proprietary and opaque, and studies have documented that algorithmic credit scoring models produce systematically lower scores for Black and Latino communities compared to white and Asian populations. Consumers do have legal rights under the Fair Credit Reporting Act (FCRA) to dispute inaccuracies, and lenders are required to provide adverse action notices explaining the principal reasons for a denial. But consumers often don’t know which brokered data fed into the score in the first place, and companies aren’t required to reveal the proprietary formulas their models use. The Consumer Financial Protection Bureau (CFPB) has noted that data brokers operating in a gray area between regulated credit reporting and unregulated data sales make the process of identifying and challenging errors especially difficult in practice.

Employers and landlords use data from people-search sites to screen applicants. These sites source their information from data brokers and public records, and they frequently contain errors, because brokers don’t verify what they aggregate. The Federal Trade Commission and organizations like the Electronic Privacy Information Center have documented cases where inaccurate broker data cost people jobs and housing.

Retailers use personal data to set individualized prices. In January 2025, the Federal Trade Commission released findings from a study of what it calls “surveillance pricing,” in which intermediary firms hired by retailers track browser history, mouse movements, purchase patterns, and location to adjust the price of the same product for different buyers. The FTC described a scenario in which a consumer profiled as a new parent would be shown higher-priced baby products at the top of their search results. New York became the first state to require retailers to disclose when a price was set by an algorithm using the consumer’s personal data; the law was signed in May 2025 and took effect in November of that year after surviving a legal challenge. Also in 2025, dozens of state legislatures introduced bills to regulate various forms of algorithmic pricing, including surveillance pricing and algorithmic rent-setting.

AI companies are training their models on personal data scraped from the web. Web crawlers pull content from blogs, social media profiles, online marketplaces, photo-sharing platforms, and anywhere else that isn’t behind a login wall. A 2025 MIT Technology Review investigation of a major AI training dataset found thousands of identifiable faces, identity documents, and job applications in a sample representing just 0.1% of the data, and estimated the full set contained hundreds of millions of images with personal information. Meta has said its AI models are partially trained on public Facebook and Instagram posts. LinkedIn began using member data to train its AI tools before updating its terms of service to reflect that change. Because AI companies have scraped content posted long before generative AI existed, the people whose data is in those training sets never had the opportunity to consent to that use. And there’s a feedback loop: we generate data by using AI systems, that data refines those systems, and those systems shape the information environment we navigate. Our data becomes part of the infrastructure that determines what information reaches us next.

Government agencies buy personal data from the same commercial brokers that serve advertisers and insurance companies. The Supreme Court ruled in 2018 that law enforcement needs a warrant to access a person’s historical cell phone location data. But federal agencies, including the FBI, ICE, and the Department of Defense, have argued that purchasing location data from a commercial broker is a market transaction, not a compelled disclosure, and therefore doesn’t require one. The Brennan Center for Justice has called this a loophole that allows agencies to bypass constitutional protections, and has documented cases including the Department of Defense purchasing location data collected from prayer apps to monitor Muslim communities. The same data pipeline built for advertising can be repurposed for surveillance, and the legal framework hasn’t caught up to that reality.

And the brokers themselves get breached. In July 2025, hackers accessed names, Social Security numbers, and dates of birth for more than 4.4 million people through a third-party application used by TransUnion, one of the three major U.S. credit bureaus. In a separate incident disclosed that same year, LexisNexis Risk Solutions confirmed that hackers accessed names, Social Security numbers, driver’s license numbers, and dates of birth for more than 364,000 people through a third-party software development platform. The companies that centralize our data become single points of failure, and when they’re compromised, the exposure isn’t one transaction or one relationship. It’s a cross-section of an entire life in one place.




Data We Never Shared

Not all personal data comes from something we’ve handed over. Companies also generate what’s known as inferred or derived data: new information produced by running existing records through predictive algorithms. The inference is drawn from the pattern, not from anything we volunteered. In 2012, a New York Times feature revealed that Target had built a pregnancy prediction algorithm around roughly 25 products whose purchase patterns correlated with pregnancy stages: unscented lotion, certain vitamin supplements, extra-large bags of cotton balls. The algorithm could estimate a due date within a narrow window based entirely on shopping behavior.
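Mechanically, this kind of inference can be as simple as a weighted sum over signal purchases. Here is a minimal sketch in that spirit; the product names and weights are invented for illustration (Target never published its actual model or coefficients), and a real system would learn its weights from historical purchase data:

```python
# Hypothetical signal products and weights, invented for illustration.
SIGNAL_WEIGHTS = {
    "unscented lotion": 0.30,
    "prenatal vitamins": 0.45,
    "cotton balls (XL bag)": 0.15,
    "washcloths": 0.10,
}

def pregnancy_score(basket):
    """Sum the weights of signal products present in a shopping basket."""
    return sum(w for item, w in SIGNAL_WEIGHTS.items() if item in basket)

basket = {"unscented lotion", "prenatal vitamins", "bread"}
print(round(pregnancy_score(basket), 2))  # → 0.75
```

The point of the sketch is how little it takes: no declared information appears anywhere, yet a few co-occurring purchases produce a score a marketer can act on.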

Insurance companies can infer health risks from purchase histories and location patterns. Financial institutions can infer economic instability from app usage and transaction frequency. Data brokers categorize consumers into segments like “single parents,” “fitness enthusiasts,” and “budget-conscious households” based on behavioral inferences, not declared preferences. These inferred profiles are then sold to the same range of buyers as declared data, with the same range of consequences, but we never agreed to share the information those profiles contain: the information didn’t exist until a model generated it from our behavior.

This means that even careful, privacy-conscious choices about what to share can be partially circumvented by inference. We might choose not to disclose a health condition, a pregnancy, a financial difficulty, or a political affiliation, and a predictive model can generate a probability estimate of that very thing based on the patterns in the data we did share. The profile that follows us through the data economy isn’t limited to what we put into the system. It includes what automated models have inferred from patterns in our behavior, patterns that become labeled, scored, and treated as facts by the companies that buy them.

Why the Context Is the Problem

The philosopher Helen Nissenbaum has a framework for what’s happening here: contextual integrity. The idea is that privacy isn’t about secrecy. We share information willingly all the time, when the context fits. We tell our doctor about a health condition because we expect that information to stay within the medical relationship. We search for symptoms on a health website because we assume that search won’t follow us into an insurance application. In the current data economy, that’s exactly the kind of boundary that dissolves, because the company collecting the data and the company buying it are operating in completely different contexts.

This is an information literacy problem as much as a privacy problem. Information literacy is usually framed around consumption: evaluating sources, questioning claims, recognizing bias in what we read and watch. But every time we interact with a digital service, we’re also producing information: generating a record that will be read, interpreted, scored, and acted on by organizations we may never interact with directly. Many of us have gotten better at questioning the information that comes at us: checking sources, noticing bias, and recognizing when something is trying to sell us a conclusion. But we haven’t developed equivalent habits around the information that flows from us: where it goes after we hand it over, who reads the record, what incentives they have, and what conclusions they draw. The gap between what we think we’re consenting to and what we’ve agreed to in practice is where the real exposure lives, and the system is designed to keep that gap invisible.

Photo by Neon Wang on Unsplash

The Illusion of Choice

One of the reasons the “so what” question is hard to answer with action is that opting out of data collection often means opting out of participation. Declining a social media platform’s terms of service means not using the platform. Refusing location permissions can mean losing access to navigation, ride-sharing, weather, and delivery apps. Choosing not to create an account can mean paying more, seeing less, or being locked out of services that have become essential infrastructure for work, communication, healthcare, banking, and education.

The architecture of digital consent treats data sharing as a binary: agree to the terms or don’t use the product. There’s rarely a middle option that allows us to use a service while limiting what data gets collected and where it goes. The result is that the “choice” to share data often functions as a condition of entry into daily life rather than an informed negotiation. We’re not handing over data because we’ve weighed the tradeoff and decided it’s fair. We’re handing it over because the alternative is exclusion from services we rely on.

This is the structural context behind the Pew Research Center finding that more than half of Americans believe it’s impossible to go through daily life without being tracked. For many of us, it isn’t possible, at least not without significant inconvenience or sacrifice. The question isn’t whether we can avoid data collection entirely, because for the vast majority of people who participate in modern life, the answer is no. The question is whether we can make more informed decisions within the constraints we’re operating in, and whether the system can be pushed - through regulation, through market pressure, through better tools - toward something more transparent.

Can We Get It Back?

California’s Delete Act, which took effect in January 2026, is the strongest example of what’s emerging. It created a platform called DROP (Delete Request and Opt-Out Platform) that lets California residents submit a single deletion request to every registered data broker in the state. Brokers are required to process those requests, maintain suppression lists to prevent re-collection, and check the platform regularly for new requests. The European Union’s GDPR provides similar individual rights, and a handful of other U.S. states have enacted their own privacy laws with varying levels of protection. But the coverage is uneven: what’s available to a California or EU resident may not extend to someone in a state without comparable legislation.

Some services now automate parts of the opt-out process, submitting removal requests to dozens of brokers on our behalf. These can’t erase the data trail entirely, but they can narrow what’s actively available for sale.

Beyond deletion, there are smaller choices that reduce how much new data we generate. We can audit which apps have permission to track our location or access our contacts, since a surprising amount of behavioral data comes from apps that don’t need those permissions to function. We can treat “sign in with Google” and “sign in with Facebook” buttons as what they are: data-sharing agreements that can link a new service to an existing profile. And we can glance at the first few lines of a privacy policy before agreeing, looking for some version of “we may share your information with our partners,” where “partners” just means anyone willing to pay.

So can we get it back? Not entirely. Data that’s already been collected, copied, sold, and processed across multiple systems can’t be fully recalled. What we can do is reduce what’s actively available for sale, slow the flow of new data going forward, and take advantage of legal tools that didn’t exist a few years ago. The archive of our past digital lives is too distributed to undo, but the file is still being written, and we have more say over the next page than we did over the last twenty years of them.

The Other Side of the Transaction

Most of us don’t read privacy policies, and the policies aren’t built to be read. They average thousands of words of dense legal language filled with terms like “legitimate interest,” “data processor,” and “de-identified data.” Studies consistently put them at a late high school to early college reading level (grade 12 to 14), but the difficulty goes beyond reading level: the concepts are abstract, the volume of agreements we encounter is enormous, and the design of the consent process itself pushes us through as fast as possible. Pre-checked boxes, auto-scrolling agreement windows, “accept all” buttons positioned prominently while “customize settings” options sit behind additional clicks. These are dark patterns, design choices that make the path of least resistance the path of maximum data sharing.
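That “grade 12 to 14” figure comes from standard readability formulas. As a sketch, here is the Flesch-Kincaid grade level computed over an invented policy-style sentence, using a crude vowel-group heuristic for syllables (real readability tools use pronunciation dictionaries, so treat the output as approximate):

```python
import re

def estimate_syllables(word):
    """Crude heuristic: count runs of vowels. Real tools use dictionaries."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text):
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(estimate_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

# Invented sentence in the register of a typical privacy policy.
clause = ("We may share de-identified data with our affiliates and service "
          "providers where we have a legitimate interest in doing so.")
print(round(fk_grade(clause), 1))  # lands in the mid-teens: college-level reading
```

One dense sentence of boilerplate already scores past high school; multiply that by several thousand words per policy and dozens of policies per year, and the "nobody reads them" outcome stops looking like laziness.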

The result is a gap between the moment we share a piece of information and the moment that information shapes a decision about our lives. We don’t connect the app to the insurance premium or the loyalty card to the rental application because the chain of custody between them is long, complex, and designed to stay out of view.

The same critical thinking we’ve learned to apply to the information flowing toward us (checking sources, questioning claims, looking for bias) applies to the information flowing from us: who’s collecting this, what will they do with it, who else will see it, and what did we agree to? The difference is that in the data economy, we’re the product being evaluated, and the questions are being asked about us rather than by us.

So what if they have our data? The tradeoff extends well beyond better ads. It reaches into the prices we’re charged, the credit we’re offered, the jobs we’re considered for, the insurance premiums we pay, the AI systems trained on our behavior, the accuracy of the profiles used to make decisions about our lives, and the degree to which government agencies can monitor our movements without a warrant. Every new service we sign up for, every permission we grant, and every terms-of-service agreement we accept adds another layer to that file. We can’t close the file entirely, but we can make more informed decisions about what goes into it next.


The free essays are the foundation. The paid tier is the applied toolkit: biweekly AI briefings, monthly subscriber-driven research, and quarterly guides that give you real skills you can use immediately, plus a growing framework library (and classes coming soon). Upgrade to paid if you want the full Card Catalog.


Have you read the Founding Member Report: The State of AI yet?
A comprehensive guide for information navigators who want to understand where AI is actually heading and what it means for how we find, evaluate, and use information in 2026.
Find out more here.



Prefer to listen? My audio narration of this essay is available to paid subscribers below.

I truly hate mostpeopleslop


In 2006, Joe Sugarman published a book called The Adweek Copywriting Handbook - and an axiom stuck...

"The sole purpose of the first sentence in an advertisement is to get you to read the second sentence."

That line, more or less, explains how social media turned into a pile of shit.

Sugarman's advice became the core system prompt for 300,000 tech assholes on Twitter. They've run it through algorithm after algorithm and produced the most soul destroying rhetorical tic of the 2020s. I'm talking about "Mostpeopleslop." "Most founders don't know this yet." "Most people aren't paying attention to this." "Most founders skip [thing my startup sells] because [bad reason]." "Most founders treat [normal activity] like [wrong version of activity]." "Most founders say they want [thing]. Few actually [thing] well." "Most founders confuse [vague concept A] with [vague concept B]." You've seen it, you've scrolled past it, and you've maybe even liked one or two of these excretions before your brain caught up to your thumb, because it's bloody everywhere. It breeds in the dark spaces between LinkedIn notifications, it has colonized every timeline on every platform where a man with a podcast and a Calendly link can post for free, and I hate it. May God forgive me, I hate it.
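The formula is mechanical enough that you can sketch the whole genre as a fill-in-the-blanks generator. The templates and fillers below paraphrase the patterns above and are made up for illustration; no actual guru's prompt was harmed:

```python
import random

# Templates paraphrase the "mostpeopleslop" patterns described above.
TEMPLATES = [
    "Most founders skip {thing} because {excuse}.",
    "Most people confuse {a} with {b}.",
    "Most founders say they want {thing}. Few actually do it well.",
]
FILLERS = {
    "thing": ["distribution", "analytics", "building in public"],
    "excuse": ["setup is painful", "it feels like bragging"],
    "a": ["motivation", "speed"],
    "b": ["desperation", "progress"],
}

def mostpeopleslop(rng=random):
    template = rng.choice(TEMPLATES)
    # Fill only the slots this particular template actually uses.
    return template.format(**{k: rng.choice(v) for k, v in FILLERS.items()
                              if "{" + k + "}" in template})

print(mostpeopleslop())
```

Fifteen lines, no opinions required, infinite "content." That is roughly the entire trick.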

Why it works (and why that's the problem)

I'll give the format its due: it works // performs. And the reason why is simple. "Most people" is a tribal signal - when you read "most people don't know about this," your brain does a quick calculation: Am I most people? Do I want to be most people? No? Then I better keep reading, so I can be the Holy Exception. But you're not actually learning fucking anything. You're being told you're special for having stopped to read, and the poster is offering you membership in an in-group, and the price of admission is a like, a retweet, any scrap of engagement. It's a scarcity play - people pay more attention to shit that feels exclusive.

"Most people don't know this" is exactly that.

It comes in a few different flavours...

The Reframe Artist goes "Most people are treating [recent tech acquisition] as a media story. It's a distribution story." This guy read one Ben Thompson article in 2019 and has been repackaging the word "distribution" as a personality trait ever since. The point underneath might even be fine! But he can't say it straight.

The Trojan Horse is "Most founders skip analytics because setup is painful. [My startup] is native. Zero setup." These are just ads. They are indistinguishable from late-night infomercials. "Are YOU tired of [thing]? Most founders are! But wait, there's more and if you follow and reply CRAP now, you get a set of steak knives..."

The Self-Eating Snake: "Most founders treat building in public like a highlight reel. They're doing it wrong. 7 ways to build in public without being cringe." Followed by a numbered list that packages a real idea in the same exact format it claims to be critiquing.

The Fortune Cookie: "Most founders confuse motivation with desperation." "Most founders mistake speed for progress." These sound wise if you scroll past them fast enough. They're fortune cookies, and they get engagement because they're perfect for screenshotting into your Instagram story, but there's nothing actually there...

And the Parasite: some guy quote-tweets "What keeps you moving? Progress or Pressure?" and adds "Most founders confuse which one they're running on." You take someone else's thought, bolt on the "most founders" frame, and now you've "created content." The confidence-to-effort ratio should embarrass anyone. It's intellectual house-flipping, with all the integrity attached.

The content industrial complex

Mostpeopleslop has metastasized because Twitter started rewarding engagement bait at the same time the creator economy started demanding you post all day // every day. If you're a tech influencer in 2026, you probably post 10 to 20 times a day, maybe more - this is what the gurus tell you to do. You need formats you can crank out fast that reliably get impressions, and "most people" threads do exactly that. There's no research required, and no original data - you barely need an opinion. You could generate these in your sleep, and thanks to OpenClaw some of these guys clearly do...

The easiest content to produce is the content that mimics existing successful content. The "most people" format is the shallow work of tech Twitter. It looks like thought leadership. It reads like wisdom. It's still slop.

The result is a timeline full of people telling you what "most people" get wrong, while they all say roughly the same things, in roughly the same format, to the same audience with a near-uniform contrarianism. Everyone is standing on a soapbox yelling "wake up, sheeple" at a crowd of other people on soapboxes.

The aesthetic crime of reading the same tweet structure 40 times a day isn't even the worst part - it's that mostpeopleslop degrades the information environment. When every piece of advice is framed as something "most people" don't know, you lose the ability to distinguish between underappreciated ideas and stuff someone repackaged from a blog post they read that morning...

And it trains audiences to value framing over substance - if you read enough "most people" posts, you start evaluating ideas based on how they're packaged rather than whether they're true. A well-formatted "most people" thread with a mediocre idea will outperform a useful post that doesn't use the formula, and so yes the medium becomes the message, but the message is: style points matter more than being right or even being valuable in the first place.

Everyone is an insider and an outsider at the same time; you're an insider because you're reading this post, you're an outsider because "most people" haven't figured this out yet, but since everyone is reading these posts, everyone is an insider, which means the distinction is fictional and we seem to have a collective hallucination of exclusivity.

The incentive structure on Twitter (and LinkedIn, where this format is somehow even more prevalent) rewards this kind of posting. If you're building an audience to sell a course, a SaaS product, a consulting practice, or a $249/month community where you teach other people to build audiences to sell courses, you need impressions, and you need followers, and mostpeopleslop delivers both. The people posting this stuff aren't stupid; some of them (a select // rare few, I'll grant) are sharp, have real experience, and could write things worth reading, but the format is a trap. Once you see that it performs, you keep using it, and every time you use it, you get a little further from saying something real and a little closer to being a content-generation machine optimized for engagement metrics. You have become the slop.

I want people to say the thing. If you have an observation about distribution, share the observation. If you built a product that solves a problem, describe the problem and describe the solution and have done with it. You don't need to frame every single post as a correction of what "most people" believe, and you don't need to position yourself as the lone voice of reason in a sea of ignorance. You can just ~say the thing.

The best writers and thinkers in tech have never needed the "most people" crutch. You can be interesting without being condescending. You can build an audience by being useful rather than by manufacturing a false sense of exclusivity 280 characters at a time.

But most people don't know that yet. (Sorry. Had to.)


Your Backpack Got Worse On Purpose

Your Backpack Got Worse On Purpose. “From a shareholder’s perspective, the bag that falls apart is the better product. That’s the business model. Repeat failure, repeat purchase, repeat revenue. The quality decline isn’t a side effect. It’s the strategy.”


How to walk through walls

In March 1991, Robert Rodriguez, then 22 years old, decided to write and shoot three feature-length home movies to gain experience making full-length films, in case he ever received an offer to direct a real one.

Nine months later, having finished El Mariachi, the first part of his planned trilogy, Rodriguez found himself in the office of Robert Newman, a Hollywood agent. Watching the trailer Rodriguez had cut, Newman, who would go on to sell the movie to Columbia in a deal worth $1.8 million, asked:

“How much did it cost [to make] again?”

“$7,000.”

“Really? That’s pretty good . . . most trailers usually cost between $20,000 and $30,000.”

“No,” Rodriguez said, “the whole movie cost $7,000.”

In nine months, he had written, directed, and sold a 90-minute action film that cost a third of what a film trailer would. How was that possible? At the time, the cost of film stock alone would normally run into several hundred thousand dollars for an action film like El Mariachi.

During the press tour, the journalists thought the story about the $7,000 was too outlandish to be true, and Rodriguez had to show them a behind-the-scenes video to convince them. In that video they could see that the reason Rodriguez, the son of two Mexican immigrants with ten children in San Antonio, had been able to make a commercial big-screen action film from his private savings was that he had a hacker mindset.

Hacker mindset

I learned the term hacker mindset from Gwern, a pseudonymous blogger, who wrote about how people like Rodriguez think in his 2012 essay “On Seeing Through and Unseeing.”

To explain the hacker mindset, one example Gwern uses is people who set world records in video games, doing so-called speedruns.

Unlike normal sports, where the athletes are usually at best twice as fast as a healthy adult, video game speedruns can be so much faster than a normal playthrough that a normal person can’t even understand what is happening on the screen. The last game I played was Legend of Zelda: Ocarina of Time, in my parents’ house in the late nineties. I put in some 30 hours, and if I remember correctly, I didn’t even finish it—but when I look at the current speedrun record, I see that Bloobiebla & MrGrunz have finished the game in 20 minutes and 9 seconds. I can’t wrap my head around how that is possible.

And I don’t get much wiser when I look at the recording and notice that a substantial part of it is them running backwards with a hen on their head. If we want crazy outcomes, I guess we have to accept crazy behavior.

The difference between Bloobiebla & MrGrunz and me is not, primarily, that they are faster than I am. It is, Gwern points out, that they see the game differently. When I played Zelda, I saw “villages” and “hens.” But they have a hacker mindset, so they know that there aren’t actually any villages and hens.

The game, Gwern writes, just “pretends to be made out of things like ‘walls’ and ‘speed limits’ and ‘levels which must be completed in a particular order.’” But what it actually is, at a deeper level, is bits, code, memory locations, processing units, and so on and so forth.

And because they see the game at this level—and understand how it’s put together—Bloobiebla & MrGrunz can make moves in the game that I couldn’t, such as “deliberately overloading the RAM to cause memory allocation errors” (perhaps this was what running backwards with a hen on their head did) which, Gwern writes, can “give you infinite ‘velocity’ or shift you into alternate coordinate systems in the true physics, allowing enormous movements in the supposed map, giving shortcuts to the ‘end’ of the game.”

And lo and behold, soon after Bloobiebla & MrGrunz drop the hen, they fall through a “wall” and land in the final “level.”

Because I’m watching the abstraction that the game is pretending to be—a cute fantasy world with villages and swords and horses—this looks bizarre to me. But it makes perfect sense when you understand what the game is at a deeper level.

Most systems can be viewed at multiple levels. There is a superficial system which pretends to be made of one thing (walls, hens). But actually, it is really made of something else (bits, memory allocations). And if you learn to understand that underlying system, you can find ways to use the lower-level details to steer the system in a way that looks incomprehensible to those who only see the more superficial system.
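To make that two-level picture concrete, here is a toy sketch in Python (an entirely made-up game, not anything from Gwern’s essay): the rules layer enforces a “wall,” but the wall exists only as an entry in the underlying state, and editing that state directly walks straight through it.

```python
# A toy game that "pretends" to be made of walls. The rules layer
# (move) enforces them, but underneath there is only plain state.
class ToyGame:
    def __init__(self):
        self.walls = {(1, 0)}    # a "wall" is just a coordinate in a set
        self.player = (0, 0)
        self.goal = (2, 0)

    def move(self, dx, dy):
        """Surface-level rules: refuse to step onto a wall."""
        target = (self.player[0] + dx, self.player[1] + dy)
        if target in self.walls:
            return False         # the abstraction says you can't go there
        self.player = target
        return True

game = ToyGame()
print(game.move(1, 0))           # False: blocked by the "wall"

# The hacker-mindset move: the wall exists only in the rules layer.
# Writing the underlying state directly walks straight "through" it.
game.player = (2, 0)
print(game.player == game.goal)  # True: at the goal, no legal path taken
```

Speedrun exploits are this same move performed against much messier state: nothing in the advertised rules changes, but the shortcut is perfectly legal physics one level down.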

Robert Rodriguez’s classmates must have experienced a bewilderment of this kind when they saw him go down to Mexico with $7,000 and return with a film showing in cinemas across the US. That was not a move that was part of how they’d been told the game that is the film industry works; but it was a move that was perfectly compatible with the facts of cameras, lights, and Hollywood deal-making if you understood them at a deep enough level.

Rodriguez could speedrun a film career, walking through proverbial walls, because he saw through the game to its underlying mechanics. He had the hacker mindset. He was willing to get his hands dirty and learn the practical realities. He saw that a lot of what the other film students took for reality were just fictions they’d been taught at school.

This is a bit vague. Let me give some concrete examples of ways he saw through the system his classmates took for reality.

At film school, they were taught to work with a crew, where someone specialized as a cameraman, another as a sound technician, and so on. But Rodriguez, who had always done movies on his own as a kid, knew that that was just a convention. If he could figure it out, it would be more effective to do all of the technical work himself, which also meant he wouldn’t have to pay for a crew.

This seemed insane to others. On July 24, 1991, three days before leaving for Mexico, his film teacher asked him who would be the director of photography.

From Rodriguez’s diary, published as Rebel Without a Crew:

I know he’ll shit all over me if I tell him the truth—that I’m planning on shooting it all by myself, without a crew. So I told him, “I’m going to be the director of photography, but I’ll probably have a small crew around to help out.” He shook his head. “No, no, no, no … You’re going to fail! Your actors are going to hate you. They’re going to be sitting there waiting for you while you light the set. Don’t be an idiot. Get a director of photography.”

Instead, Rodriguez bought 250-watt bulbs that he screwed into the existing lamps on the sets, and that was that for lighting.

At film school, they had also been taught to shoot several takes from multiple angles so the editor could shape the scene. But since Rodriguez was the editor, he could visualize exactly what each scene would be and shoot only precisely what was needed, a single take per scene, which minimized the cost of film stock and editing time.

A thousand small optimizations like this meant he could shoot the film in ten days while staying with the lead actor’s mom in Ciudad Acuña during summer break.

More examples

There are similar shortcuts in most domains if you learn to see through the abstractions and unsee the conventional ways of viewing something. There are deeper levels to most systems if you are willing to take them apart.

For example, when I grew up, I was told that there was such a thing as a “job,” and these were listed on “job boards” where you could read about the “qualifications” necessary—qualifications that you got through something called “education.” This isn’t false. You can play the game this way. But it is a very superficial way of playing the game.

A slightly more precise reading is to say that the economy is made up of 8.25 billion people, all trying to solve various problems—and what “getting a job” really means is finding a person with a problem and convincing them that you can solve it for them. This can be done by looking at job boards, of course, where people who collaborate on solving problems (aka work at companies) list some of the problems they want help solving.

Now, you notice more things you could do. You could, for example, go talk to people directly and convince them that you can solve their problems. Or, you could work in public, sharing projects you are building on the internet or elsewhere, so people can see what you do and reach out. If you want to work at a specific company, you could talk directly to the employees you’d want to work with, understand their current problems, and then solve their problems for free, so they lobby their superiors to employ you. People who operate at this level tend to have more interesting careers than those who play the game by looking at job postings.

Another everyday example of the hacker mindset can be seen when agentic people deal with bureaucracy.

Companies and governments like to pretend to be formal, machine-like systems, where things have to be done in a specific way. But this, just as in the case of the video game pretending to be made of levels, is an abstraction. A fiction. Actually, a bureaucracy is just people and some file systems. Calling and asking to speak to a supervisor, or showing up in person, or finding the specific person who handles your case, often lets you bypass the “system.” If you search Patrick McKenzie’s tweets, you get a long stream of great examples of him doing this—getting the customer service agent at an airline to buy him a ticket from their competitor when his flight was delayed, for instance, or calling pharmacies to create an inventory of the US vaccine stock as he did together with a group of volunteers when the US government failed to keep track of the vaccine stocks during Covid.

How do people develop a hacker mindset?

One thing I personally find useful is to read about people who have it and notice what they do.1 This has helped me see possibilities that I had been blind to because I had too superficial a reading of the system.

It also helps if you can surround yourself with people who have a hacker mindset. I suspect this kind of cultural osmosis played an important role for Rodriguez. His dad was a self-employed salesman and was always trying new things—they seem to have been a family that encouraged taking machines apart and doing stuff yourself. Rodriguez also had a great boss at his first job:

My first job in high school was at a photo lab and I remember what my first boss, Mr. Riojas, told me one day after he saw some of my cartoons and photographs. He said that I had creative talent, but what I really needed to do if I wanted to be successful was to become technical. He said that just about anyone can become technical, but not everyone can be creative. And there are a lot of creative people who never get anywhere because they don’t have technical skills. Part of what makes a person creative is his lack of emphasis on things technical.

My boss said that if you are someone who is already creative, and then you become technical, then you are unstoppable.

This is another common pattern among people who have a hacker mindset. They have gotten their hands dirty playing around with the technical parts, insisting on understanding every aspect of the work, “weaving [the] system into [their] mind[s] so tight that it’s hard to find the stitches after a while,” as Alice Maz writes about her experience becoming incomprehensibly good at Minecraft.

When Rodriguez made his first feature film at 23, he had already spent a decade making home videos, editing them by using two VCRs, so he could play the raw material on one and record the bits he wanted on the other. By working hands-on, guided by his own needs, he had learned the details of the work and how things could be manipulated in such a way that his films looked good even if he had no crew or budget.

In an appendix to the diary, he writes:

The most important and useful thing you need to be a filmmaker is “experience in movies,” as opposed to “movie experience.” There’s a difference. They always tell you in film school and in Hollywood that in order to be a filmmaker you need to get “movie experience” so you can work your way up in the business. The reasoning being that by working on other films, even as a production assistant, you get to see firsthand how others make movies. Now, that’s exactly the kind of experience you don’t need. You don’t want to learn how other people make movies especially real Hollywood movies, because nine times out of ten their methods are wasteful and inefficient. You don’t need to learn that!

“Experience in movies,” on the other hand is where you yourself get a borrowed video camera or other recording device and record images then manipulate those images in some kind of editing atmosphere. Whether you use old ¾” video editing systems, VCR to VCR, or even computer editing. Whatever you can get your hands on. The idea is to experience creating your own images and/or stories no matter how crude they are and then manipulating them through editing.

That is, you want to avoid learning the conventional wisdom about how something works—which is always simplified and filled with false walls—and instead focus on getting into very close contact with the actual nuts and bolts by doing everything yourself. That is how you will learn to understand the system well enough to “see through” it.

It might sound like a depressing conclusion to this essay: the way to find shortcuts is to first spend ten years learning all of the technical details.

But it is not depressing.

What we’re talking about here isn’t like going to school—it emphatically is not that—suffering through all of the boring prerequisites before you get to do the exciting parts. What we’re talking about is actually doing the fun stuff, playing around with projects that excite you, trusting that you can learn enough to solve your problems. If you keep tinkering, doing one fun project after another, you will eventually see through the system.

Also, it is only the first time that it might take years. After you’ve developed a hacker mindset in one area of your life, it is much easier to see the rest of reality in the same way.

Having seen through the superficiality and clumsiness of the normal way of doing things, you are less likely to trust conventional wisdom going forward and more likely to trust your eyes. You know that there are deeper layers to reality and have a sense for how to access them.

And then all of reality becomes something you can horse around with.


This essay—like all my free essays—was entirely funded by the contributions of paid subscribers. If you enjoyed it, give them your thanks, and if you can afford it, consider joining them.

Drafts of this essay were discussed with Johanna Karlsson. The copy edits were done by Esha Rana. Any remaining mistakes are mine.

1. Rodriguez’s diary, Rebel Without a Crew, is one example. “The Story of VaccinateCA” is another. I also like “Playing to Win” by Alice Maz, A Guide to the Perplexed with Werner Herzog, Surely You’re Joking, Mr. Feynman!, and Robert Caro’s The Years of Lyndon Johnson. Some of these examples are more unethical and problematic than others, so beware. If you lack ethics, the hacker mindset can be used in manipulative and anti-social ways. And that’s a sad way to live.


An interactive explainer on the physics of GPS

An interactive explainer on the physics of GPS. “The answer is in some ways simpler than you’d expect, and in other ways more complex. GPS is fundamentally a translation tool: it converts time into distance.”
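That one-line summary, time converted into distance, is easy to sketch. Here is a minimal illustration in Python (not taken from the linked explainer; the satellite positions, the flat 2D world, and the perfect clocks are all simplifying assumptions):

```python
import math

C = 299_792_458.0  # speed of light in m/s

def pseudorange(transmit_time, receive_time):
    """The core GPS translation: a time of flight becomes a distance."""
    return C * (receive_time - transmit_time)

# Two idealized "satellites" at made-up positions (meters), in 2D,
# with perfect clocks; real GPS uses 4+ satellites and also solves
# for the receiver's clock error.
sats = [(0.0, 20_200_000.0), (15_000_000.0, 20_200_000.0)]
receiver = (7_000_000.0, 0.0)  # the position a receiver would solve for

# Each signal's travel time encodes the distance to its satellite ...
delays = [math.dist(sat, receiver) / C for sat in sats]
ranges = [pseudorange(0.0, delay) for delay in delays]

# ... and each recovered range pins the receiver to a circle around
# one satellite; the true position lies on every such circle.
for sat, rng in zip(sats, ranges):
    assert abs(math.dist(sat, receiver) - rng) < 1e-3
```

With one more range and an unknown receiver clock offset, intersecting those circles (spheres, in 3D) is exactly the positioning problem a real receiver solves.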
