Geometry puzzle

The diagram is NOT drawn to scale, so don't put a ruler on line x to measure the distance.  Figure it out in your head using simple logic and math.

Running Pong in 240 Browser Tabs

Sometimes I'm browsing the web and I open up a bunch of tabs. I never close them, and before I know it I've opened up like 240 tabs in a perfect 8x30 grid. This is frustrating, because it's a bunch of wasted screen space. I wanted to put that space to good use - so I ran Pong inside it.

Read the full post on my blog!

Here's a raw link, if you need it: https://eieio.games/blog/running-pong-in-240-browser-tabs


Abstractions

We claim two things about our profession. Computer science studies information processes, natural and artificial. Computer science is a master of abstraction. To reconcile the two, we say that abstraction is the key that unlocks the complexity of designing and managing information processes.

Where did we get the idea that our field is a master of abstractions? This idea is cosmic background radiation left over from the big bang at the beginning of our field. For its first four decades, computer science struggled under often blistering criticisms by scientists that the new field was not a science. Science is, they said, a quest to understand nature—and computers are not natural objects. Our field’s pioneers maintained that computers were a major new phenomenon that called for a science to understand them. The most obvious benefit of our field was software that controls and manages very complex systems. This benefit accrued from organizing software into hierarchies of modules, each responsible for a small set of operations on a particular type of digital object. Abstraction became a shorthand for that design principle.

Approximately 25 years ago, the weight of opinion suddenly flipped. Computer science was welcomed at the table of science. The tipping point came because many scientists were collaborating with computer scientists to understand information processes in their fields. Many computer scientists claimed we had earned our seat because of our expertise with abstractions. In fact, we won it because computation became a new way of doing science, and many fields of science discovered they were studying naturally occurring information processes.

In what follows, I argue that abstraction is a by-product of our central purpose, understanding information processes. Our core abstraction is information process, not “abstraction.” Every field of science has a core abstraction—the focus of its concerns. In all but a few cases, the core abstractions in science defy precise definition, and scientists in the same field disagree on their meanings. Two lessons follow. First, computing is not unique in believing it is a master of abstraction. Indeed, this claim never sat well with practitioners in other fields. Math, physics, chemistry, astronomy, biology, linguistics, economics, psychology—they all claim to be masters of abstractions. The second lesson is that all fields have made remarkable advances in technology without a clear definition of their core abstraction. They all designed simulations and models to harness the concrete forces behind their abstractions. The profound importance of these lessons was recognized with two 2024 Nobel prizes awarded to computer scientists for protein folding and machine learning.

What Is Abstraction?

Abstraction is a verb: to abstract is to identify the basic principles and laws of a process so that it can be studied without regard to physical implementation; the abstraction can then guide many implementations. Abstraction is also a noun: An abstraction is a mental construct that unifies a set of objects. Objects of an abstraction have their own logic of relations with each other that does not require knowledge of lower-level details. (In computer science, we call this information hiding.)

Abstractions are a power of language. In his book Sapiens, Yuval Noah Harari discusses how, over the millennia, human beings created stories (“fictions”) that united them into communities and gave them causes they were willing to fight for.4 These fictions were abstractions that often endured well beyond their invention. The U.S. Constitution, for instance, applies to all its states and has guided billions of people for more than 200 years.

The ability of language to let us create new ideas and coordinate around them also empowers language constructs to refer to themselves. After all, we have numerous ideas about our ideas. We build endlessly complex structures of ideas. We can imagine things that do not exist, such as unicorns, or unrealized futures that we can pursue. Self-reference also generates paradoxes. A famous paradox asks: “Does the set of all those sets that do not contain themselves contain itself?” Self-reference is both a blessing and a curse.
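
To make the paradox concrete, here is the standard set-theoretic formulation (a notational aside, not from the original column):

```latex
% Russell's paradox: let R be the set of all sets that do not contain themselves.
R = \{\, x \mid x \notin x \,\}
% Asking whether R contains itself is contradictory either way:
R \in R \iff R \notin R
```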

Ways to avoid paradoxes are to stack up abstractions in hierarchies or connect them in networks. An abstraction can be composed of lower-level abstractions but cannot refer to higher-level abstractions. In chemistry, for example, amino acids are composed of atoms, but do not depend on proteins arranging the acids in particular sequences. In computing, operating systems are considered layers of software that manage different abstractions such as processes, virtual memories, and files; each depends on lower levels, but not higher levels. Consider three examples illustrating how different fields use their abstractions.

Computer science.  An “abstract data type” represents a class of digital objects and the operations that can be performed on them. This reduces complexity because one algorithm can apply to large classes of objects. The expressions of these abstractions can be compiled into executable code: thus, abstractions can also be active computing utilities and not just descriptions.
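
As a minimal sketch of an abstract data type, consider a generic stack in Python; the `Stack` class and its names are illustrative, not drawn from the column:

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    """An abstract data type: clients see push/pop, never the representation."""

    def __init__(self) -> None:
        self._items: list[T] = []  # hidden representation (information hiding)

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

    def is_empty(self) -> bool:
        return not self._items

# One definition serves a large class of objects: ints, strings, anything.
s: Stack[int] = Stack()
s.push(1)
s.push(2)
assert s.pop() == 2
```

The same push/pop logic applies unchanged whatever the items are; that reuse across classes of objects is the complexity reduction the column describes.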

Physics.  For physicists, abstraction simplifies complex phenomena and enables models to help understand and predict the behavior of complex systems. Many physics models take the form of differential equations that can be solved on grids by highly parallel computers. For example, the Navier-Stokes equations of computational fluid dynamics specify airflows around flying aircraft. Other models are simulations that evaluate the interactions between entities over long periods of time. For example, astronomers have successfully simulated galactic collisions by treating galaxies as masses of particles representing stars. Because models make simplifications, there is always a trade-off between model complexity and accuracy. The classical core abstraction of physics has been any natural process; in recent decades it expanded to include information processes and computers.
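
To make "differential equations solved on grids" concrete, here is a minimal sketch of an explicit finite-difference solver for the one-dimensional heat equation; the grid size, time step, and diffusivity are illustrative assumptions, not from the column:

```python
import numpy as np

# Explicit finite differences for the 1D heat equation u_t = alpha * u_xx.
# Stability of this explicit scheme requires alpha * dt / dx**2 <= 0.5;
# with the illustrative values below the ratio is 0.2.
nx, dx, dt, alpha = 101, 0.01, 2e-5, 1.0
u = np.zeros(nx)
u[nx // 2] = 1.0  # initial condition: a heat spike at the center

for _ in range(1000):  # march the model forward in time
    u[1:-1] += alpha * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

print(u.max())  # the spike has diffused outward
```

Production fluid-dynamics codes discretize the Navier-Stokes equations on three-dimensional grids, but the grid-update pattern is the same in spirit.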

Mathematics.  Abstraction is the business of mathematics. Mathematicians are constantly seeking to identify concepts and structures that transcend physical objects. They seek to express the essential relationships among objects by eliminating irrelevant details. Mathematics is seen as supportive of all scientific fields. In 1931, Bertrand Russell wrote: “Ordinary language is totally unsuited for expressing what physics really asserts, since the words of everyday life are not sufficiently abstract. Only mathematics and mathematical logic can say as little as the physicist means to say.”

Anywhere you see a classification you are looking at a hierarchy of abstractions. Anywhere you see a theory you are looking at an explanation of how a set of abstractions interacts. Highly abstract concepts can be very difficult to learn because they depend on understanding much past history, represented in lower-level abstractions.

Differing Interpretations of the Same Abstractions

It is no surprise that different people have different interpretations of abstractions and thus get into arguments over them. After all, abstractions are mental constructs learned individually. Few abstractions have clear logical definitions as in mathematics or in object-oriented languages. Here are some additional examples showing how different fields approach differences of interpretation of their core abstractions.

Biology.  This is the science studying life. There is, however, no clear definition of life. How do biologists decide if some newly discovered organism is alive? They have agreed on a list of seven criteria for assessing whether an entity is living:

  • Responding to stimuli

  • Growing and developing

  • Reproducing

  • Metabolizing substances into energy

  • Maintaining a stable structure (homeostasis)

  • Structured from cells

  • Adaptability in changing environments

The more of these criteria hold for an organism, the more likely a biologist is to say that life is present.

Artificial intelligence.  Its central abstraction—intelligence—defies precise definition. Various authors have cited one or more of these indicators as signs of intelligence:

  • Passes IQ tests

  • Passes Turing test

  • Pinnacle of a hierarchy of abilities determined by psychologists

  • Speed of learning to adapt to new situations

  • Ability to set and pursue goals

  • Ability to solve problems

However, there is no agreement on whether these are sufficient to cover all situations of apparent intelligence. Julian Togelius has an excellent summary of the many notions of “intelligence” (and “artificial”) currently in play.6 This has not handicapped AI, which has produced a series of amazing technological advances.

Computer science.  Its central concept—information process—defies a precise definition. Among the indicators frequently mentioned are:

  • Dynamically evolving strings of symbols satisfying a grammar

  • Assessment that strings of symbols mean something

  • Mapping symbol patterns to meanings

  • Insights gained from data

  • Fundamental force in the universe

  • Process of encoding a description of an event or idea

  • Process of recovering encrypted data

  • Negative log of the probability of an event (Shannon; a worked example follows this list)
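
As that worked example, here is Shannon's measure stated in standard information-theoretic notation (an aside, not from the column):

```latex
% Shannon's self-information: an event x with probability p(x) carries
I(x) = -\log_2 p(x) \ \text{bits}
% e.g., a fair coin flip (p = 1/2) carries 1 bit;
% a certain event (p = 1) carries 0 bits.
```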

There is no consensus on whether these are sufficient to cover all situations where information is present.

Neuroscience.  Consciousness is a core abstraction. Neuroscientists and medical professionals in general have agreed on a few, imprecise indicators of when someone is conscious.5 Some conscious people may fail all the indicators, and some unconscious people may satisfy some of the indicators. It may be impossible to ever know for sure if someone is conscious or not.

Business.  Innovation is a core abstraction. Business leaders want more innovation. Definitions range from inventing new ideas, to prototyping them, to transitioning prototypes into user communities, to diffusing and adopting new practices in those communities. Each definition is accompanied by a theory of how to generate more innovation. The definitions are sufficiently different that the theories conflict. There is considerable debate on which definition and its theory will lead to the most success.

Conclusion

The accompanying table summarizes the examples above. The “criteria” column indicates whether a field has a consensus on criteria for their core abstraction. The “explanatory” column indicates whether a field’s existing definitions adequately explain all the observable instances of their core abstraction. The “utility” column indicates whether they are concerned with finding applications of technologies enabled by their core abstraction.

Table. A few fields and their core abstractions

Field                     Abstraction          Criteria?   Explanatory?   Utility?
Computing                 Information          No          No             Yes
Physics                   Natural phenomena    No          Yes            Yes
Mathematics               Math concepts        No          Yes            No
Biology                   Life                 Yes         Yes            Yes
Artificial Intelligence   Intelligence         No          No             Yes
Neuroscience              Consciousness        No          Yes            Maybe
Business                  Innovation           No          No             Yes

Thus, it seems that the core abstractions of many fields are imprecise and, with only a few exceptions, the fields have no consensus on criteria to determine if an observation fits their abstraction. How do they manage a successful science without a clear definition of their core abstraction? The answer is that in practice they design systems and processes based on validated hypotheses. The varying interpretations are a problem only to the extent that disagreements generate misunderstanding and confusion.

A good way to bring out the differences of interpretation is to ask people how they assess if a phenomenon before them is an instance of their core abstraction. Thus you could say “Life is an assessment,” “intelligence is an assessment,” and so on. When you put it this way, you invite a conversation about the grounding that supports the assessment. For example, a biologist would ground an assessment that a new organism is alive by showing that enough of the seven criteria are satisfied. In other fields the request for assessment quickly brings out differences of interpretation. In business, for example, where there is no consensus on the indicators of innovation, a person’s assessments reveal which of the competing core abstractions they accept. That, in turn, opens the door for conversations about the value of each abstraction.

There is a big controversy over whether technology is dragging us into abstract worlds with fewer close relationships, fear of intimacy, and interaction limited to exchanges across computer screens. This is a particular problem for young people.3 Smartphones are intended to improve communication and yet users feel more isolated, unseen, unappreciated. Something is clearly missing in our understanding of communication, but we have not yet put our collective finger on it.

Two books may help sort this out. In Power and Progress, Nobel laureate economists Daron Acemoglu and Simon Johnson present a massive trove of data to claim that increasing automation often increases organizational productivity without increasing overall economic progress for everyone. They argue that the abstractions behind automation focus on displacing workers rather than augmenting workers by enabling them to take on more meaningful tasks.1 In How to Know a Person, David Brooks presents communication practices that help you see and appreciate the everyday concrete concerns of others.2

Maybe we need to occasionally descend from the high clouds of our abstractions to the concrete earthy concerns of everyday life.


You Will Never Win an Argument On the Internet—Here's Why


The Internet promised us a renaissance of discourse. Armed with instant access to all human knowledge and the ability to connect with brilliant minds worldwide, we imagined our online debates would elevate human understanding to unprecedented heights. 

But two decades later, we scroll through our choice of social poison, watching people with PhDs argue like kindergarteners with a rogues' gallery of anime avatars about whether water is wet / turning frogs gay.

What the hell happened?

Conventional wisdom says we still need more good-faith discussion. More steelmanning of opposing views. More rational discourse. But watching the daily devolution of online debates tells me a darker truth: the very act of arguing on the Internet is making us collectively dumber.

This sounds counterintuitive. Debate sharpens the mind, right? Exposure to different viewpoints broadens our perspective. That's what Socrates taught us, what our schools drill into us, what every "how to think better" course preaches. But something fundamental has been broken in discourse, and we need to understand why before it breaks us too.

The Attention Casino

Social media platforms operate like casinos - they're engineered to maximize "engagement" through carefully calibrated reward schedules. But instead of pulling slot machine levers, we're pulling the debate lever over and over, chasing the dopamine hit of being right on the Internet.

And the house always fucking wins. Every platform's algorithm rewards conflict over clarity, dunks over discourse, and pithy dismissals over patient exploration of ideas. A thoughtful thread exploring the nuances of monetary policy might get a few polite likes. A savage quote-tweet demolishing a bad take? That's engagement gold.

We've created a system where being wrong is algorithmically amplified (because it draws corrective responses) while being right is algorithmically ignored (because "good point, I agree" doesn't drive engagement). The result? Our feeds fill with increasingly bad takes begging to be debunked, while genuine insight drowns in an ocean of dunks.

The Psychology of Doubling Down

Remember the last time you changed your mind because of a Twitter argument? 

Neither do I.

And the reason isn't just general stubborn bloody-mindedness. When we argue online, we trigger a perfect storm of psychological factors that make genuine mind-changing nearly impossible:

  1. The Audience Effect - We're not arguing with our opponent; we're performing for the audience. Changing our minds means losing face in front of the crowd.
  2. The Written Record - Unlike verbal debates that fade from memory, online arguments create permanent records. Admitting we're wrong means leaving evidence of our mistakes forever.
  3. The Identity Trap - The longer we argue a position, the more it becomes part of our identity. Changing our minds starts to feel like betraying ourselves.
  4. The Sunk Cost Fallacy - Yep, that old chestnut. After investing hours crafting arguments and collecting sources, abandoning our position feels like wasting all that effort.

The result? We don't engage in debates to learn - we engage to win. And when winning becomes the goal, we automatically activate the mental machinery of rationalization rather than rational thinking.

The Dunning-Kruger Death Spiral

The more we engage in online debates, the more confident we become in our knowledge while actually understanding less. Call it the Dunning-Kruger Death Spiral.

It works like this: We rarely engage with the strongest version of opposing views when we argue online. Instead, we battle against simplified, often straw-manned versions that are easier to defeat. Each "victory" reinforces our confidence while degrading our ability to grapple with real complexity.

Think about how we prepare for online debates. We don't deeply read opposing views - we skim for weak points to attack. We don't steelman opposing arguments - we collect gotcha counterexamples. We don't explore nuance - we build arsenals of snappy comebacks.

These habits actively make us worse at understanding complex issues. We train ourselves to look for easy dunks instead of hard truths. We optimize for rhetorical effectiveness over actual understanding. We become increasingly confident while growing increasingly wrong.

The Platform Prison

"What if we just try harder to have good faith discussions? What if we consciously avoid these pitfalls?"

Unfortunately, that's not an option, and it misunderstands how thoroughly platform incentives shape discourse. Even if you enter a discussion with pure intentions, you're still playing in a casino designed to maximize conflict.

Think about:

  • Character limits force complex ideas into oversimplified snippets
  • Threading mechanisms make it easy to miss context and talk past each other
  • Like/retweet mechanics reward zingers over nuance
  • Notification systems interrupt deep thought with constant micro-distractions
  • Algorithmic amplification ensures the most inflammatory takes rise to the top

You can't fix a systemically broken system through individual virtue. It's like trying to play chess while someone keeps randomizing the board positions - the game itself has been fundamentally broken.

What's the solution? Stop the discourse entirely?

Not exactly. But we need to radically shift how we engage with ideas online. 

Here's what works, in my experience:

  1. Deep Reading - Instead of arguing about books, read them. Instead of debating summaries, engage with primary sources. Build genuine understanding before entering discussions.
  2. Writing to Think - Use writing to explore ideas, not win arguments. Write essays, not tweets. Focus on developing your thoughts rather than defeating others'.
  3. Controlled Environments - Save serious discussions for spaces designed for learning rather than engagement. Book clubs, study groups, and moderated forums can create conditions for genuine discourse.
  4. Async Over Real-Time - Real-time debate optimizes for quick responses over deep thought. Async discussions allow time for reflection and reduce performative pressure.

Learning happens in environments optimized for understanding, not winning. 

The moment a space optimizes for conflict over clarity, real learning becomes nearly impossible.

Most online debate is actively harmful to our thinking. Every hour spent arguing on Twitter is an hour we could have spent reading a book, writing an essay, or having a genuine discussion in a better environment. Fuck it; even watching paint dry would have been a more productive/conducive enterprise.

Because the cost isn't just wasted time; it's the slow degradation of our ability to think clearly about complex issues. It's the replacement of nuanced understanding with rhetorical party tricks. It's the confusion of winning arguments with finding the truth.

It's a system that makes us feel smarter while making us dumber. A system that rewards the appearance of knowledge over actual understanding. A system that turns every discussion into a battle and every participant into a soldier.

The solution isn't better-arguing techniques or more good faith efforts. The solution is to escape. Read more fucking books. Write more fucking essays. Find better spaces for genuine discussion. Treat social media as entertainment rather than education.

Because, in the end, you will never win an argument online. The game itself is rigged. The only winning move is not to play.

Coda: A Personal Note

I write this as someone who spent years engaging in online debates. I have the screenshots of "victories" to prove it. I can craft the perfect quote tweet, deploy the devastating counterexample, and unleash the clever analogy that leaves opponents speechless.

And I'm dumber for all of it. Not irrevocably so, but noticeably so.

It got me 3/4 of the way to being a total idiot.

Every hour I spent "winning" debates was an hour I could have spent learning. Drawing. Listening to Hüsker Dü. Every clever comeback I crafted was mental energy I could have used to understand complex issues more deeply. Every "victory" reinforced habits that made me worse at finding the truth.

The Internet promised us a marketplace of ideas. Instead, we built a gladiatorial arena where ideas go to die—time to find better places to think.

And for anyone interested - water isn't turning frogs gay. All frogs are gay.


"A calculator app? Anyone could make that."


Jane Street's Figgie card game
