
Quoting Bryan Cantrill


The problem is that LLMs inherently lack the virtue of laziness. Work costs nothing to an LLM. LLMs do not feel a need to optimize for their own (or anyone's) future time, and will happily dump more and more onto a layercake of garbage. Left unchecked, LLMs will make systems larger, not better — appealing to perverse vanity metrics, perhaps, but at the cost of everything that matters.

As such, LLMs highlight how essential our human laziness is: our finite time forces us to develop crisp abstractions in part because we don't want to waste our (human!) time on the consequences of clunky ones.

Bryan Cantrill, The peril of laziness lost

Tags: bryan-cantrill, ai, llms, ai-assisted-programming, generative-ai

Read the whole story
mrmarchant
17 minutes ago
reply
Share this story
Delete

That’s a Skill Issue


I quipped on Bluesky:

It’s interesting how AI proponents are often like "skill issue" when the LLM doesn't work like someone expects.

Whereas when human-centered UX people see someone using it wrong, they're like "skill issue on us, the people who made this"

This is top of mind because I’ve been working with Jan Miksovsky on his project Web Origami and he exemplified this to me recently.

I was working with some part of Origami and I was “holding it wrong”. I kept apologizing for my misunderstanding and misuse. And Jan — rather than being like “Yeah, that’s a skill issue on your part, but you’ll get there” — took the posture of a tool-maker given to introspection. He took the time to consider that perhaps the technology he was building was not properly aligning with my expectations as a user (or human-centered factors more generally). And he graciously explained that perspective to me, making me feel — well, not like an idiot.

My inability to find the results others claim with AI often has me saying either 1) “these claims are obviously BS”, or 2) “I guess it’s a skill issue on my part”.

And it kinda sucks to be saying (2) to yourself all the time, regardless of the technology.

A tech-centered approach treats the technology as a fixed point: if you don’t get what you want, you’re not using it right. The burden is entirely on you, the user, to learn the technology’s language.

Whereas a human-centered approach flips that: the technology exists to serve people as they actually are, not as we wish them to be. Confusion can be seen as a design failure, not a user failure.

What’s interesting is I think a lot of __insert technology here__ advocates would likely claim they’re “human-centered”. But when the response to failure is “learn the tech better”, it introduces a skill ceiling which naturally creates a priesthood of people who are “in-the-know” on how to make a technology work with the right incantation.

I’ve used AI as an example in this post, but it’s not really about AI specifically. This seems to be generally applicable; AI is just the current flavor.

I don’t have a big takeaway here. Just reflecting.

I love human-centered technology and technologists.




The Conceptual Understanding Fixation in Math


Barry Garelick taught 7th and 8th grade math as a second career after retiring from the federal government where he worked in environmental protection. He majored in math at the University of Michigan. He is the author of several books on math education, including “Traditional Math” and “Out on Good Behavior: Teaching math while looking over your shoulder,” both published by John Catt. He and his wife reside in the central coast area of California.

We are excited to reprint this 2023 piece from his Traditional Math Substack, which will remain relevant so long as reformers continue to insist that conceptual understanding must precede procedural fluency.

The Schoolhouse is a series from Education Progress featuring articles for and from teachers, parents, education officials, and others working in the education system.


Conceptual understanding in math has served as a dividing line between those who teach in a conventional or traditional manner, and those who advocate for progressive techniques. (I am a middle school math teacher and in the former camp.) Among other things, the progressivists frequently argue that understanding a procedure or algorithm must precede using or applying the procedure/algorithm itself.

Arguments made in support of the above statement not only border on the ridiculous, but often cross it. For example, a teacher stressing over how to assess understanding with students learning to add and subtract told me: “When ‘your work’ consists of counting and adding and subtracting, there isn’t a whole lot of work ‘to show.’ At these basic levels there has to be another way to ascertain whether a student understands these basic concepts in a meaningful way.” So if a student consistently gets addition and subtraction problems correct and applies them to solve problems, what basic concepts does she feel the student lacks? Apart from such an extreme, one of the most popular arguments that has emerged as the poster child for the reform math movement’s push for understanding is the “invert and multiply” rule for fractional division. They argue that students know how to use the rule but have no idea why it works and that such conceptual understanding always helps students to solve problems. What is frequently left out of such discussions is that the teaching of fraction operations is not devoid of understanding; fractional division is presented in the context of what such operation represents, and what types of problems can be solved using it.

A discussion I once had with a math reformer about this provides an accurate picture of the two sides of the “understanding” issue. I said to consider two students solving the following math problem: How many 2/3-ounce servings of yogurt are contained in a 3/4-ounce container? One student knows why the invert-and-multiply rule works and the other doesn’t. Both students solve the problem correctly. I maintain that I cannot tell which student knows why the rule works and which one doesn’t. What I do know is that both understand what fractional division represents, and how to use it to solve problems.

The math reformer responded that the student who did not know why the invert-and-multiply rule works “obviously” does not understand fractional division. I failed to understand that reasoning, but I have heard variations of it through the years. Generally it goes like this: “Students who fail to understand a concept are unable to know how to use it or build upon it. They will end up with misconceptions that can go undetected for months or years.”

Informing math teaching with this kind of thinking can result (and often does result) in holding up a student’s development when they are ready to move forward. Students who show mastery of procedure but cannot explain the concepts behind them are viewed as “math zombies,” to use a phrase coined by a math teacher clearly in the “students must understand or they will die” camp. A math teacher I know who is not in that camp responds to such views by stating that “Worrying about math zombies is like worrying that your football players are too good at passing the ball — on the basis that their positional play is no better than the rest of the team, and therefore they obviously don’t understand what they are doing when they pass beautifully.” In this article, I hope (but realistically do not expect) to put the arguments about understanding to rest. Or at least place them in a conceptual context.

Important Caveat and Disclosure

I will state at the outset that I, like many teachers, do in fact teach the underlying concepts for algorithms, procedures, and problem solving strategies. What I don’t do is obsess over whether students have true understanding. And I don’t stop them from using a procedure or algorithm if they don’t “understand” it.

Conceptual understanding and procedural fluency often work in tandem — sometimes with understanding coming first, sometimes later. One feeds the other, and understanding often arrives only after a person has more mathematical tools and procedures that make it accessible. (Case in point: many procedures and rules of arithmetic are easier to understand once one has a facility with algebra and symbolic manipulation.)

And sometimes, people can proceed without ever understanding a particular concept.

What is Understanding?

How one defines mathematical understanding is a large part of the problem. There is no one fixed meaning. Does it mean to know the definition of something? In freshman calculus, students learn an intuitive definition of limits and continuity which then allows them to learn the powerful applications of these concepts, i.e., taking derivatives and finding integrals. It isn’t until they take more advanced courses (e.g., real analysis) that they learn the formal definitions of limits and continuity and the accompanying theorems. Does this mean that they don’t understand calculus?

Does understanding mean transferability of concepts? Or, as a teacher I had in ed school put it: “What happens when students are placed in a totally unfamiliar situation that requires a more complex solution? What happens when we get off the ‘script’?” Dan Willingham, a cognitive scientist who teaches at the University of Virginia, calls being able to transfer knowledge to new situations “flexible knowledge.” There is no simple path of understanding first and then procedural skills — and no simple path to flexible knowledge. Willingham explains that it is unlikely that students will make such knowledge transfers readily until they have developed true expertise. Understanding is an important goal of education, he argues, “but if students fall short of this, it certainly doesn’t mean that they have acquired mere rote knowledge and are little better than parrots.” Rather, they are making the small steps necessary to develop better mathematical thinking. Simply put, no one leaps directly from novice to expert.

Levels of Understanding

There are different levels of understanding. One can operate at a very basic level of understanding that grows over time. While some basic levels are dismissed as “rote memorization,” lower-level procedural skills and higher-level understanding inform each other in tandem. Reform math ignores this relationship and assumes that if a student cannot explain in writing the process used to solve a problem, the student lacks understanding. Testing students for understanding in this manner, particularly in grades K-8, often ends with students parroting explanations that they believe the teacher wants to hear — thus demonstrating a “rote understanding.” How is understanding best measured, then? I maintain that understanding is tested not by words, but by whether the student can do the problems. At the K-12 levels, understanding is best measured by the proxies of procedural fluency and factual mastery, and that mastery serves as evidence that higher skills grow out of lower ones. I expect that this last statement will raise hackles among those who work within the educationist domain and try to build into their studies a confirmation that higher-order thinking is at odds with lower procedural skills, and that focusing on procedures prevents understanding.

Math is not taught in a vacuum, in which students are told “Do this, and never mind what it means.” When students learn about multiplication, they are shown that 3 + 3 + 3 + 3 can represent four groups of three things, or 4 x 3. For fractional division, students are shown first what it means to divide an orange, say, into halves, quarters, and so forth. That context is further extended to include fractions divided by fractions; e.g., how many 3/8-ounce servings are in a 15/16-ounce container of yogurt.

In teaching math, we teach a procedure within a context as the examples above illustrate. While there are some concepts that a student may not understand, there are still connections that students make to previously learned material and contexts which serve to inform a recently learned procedure — and ultimately may lead to further understanding. Efrat Furst, a cognitive neuroscientist who designs and teaches research-based, classroom-oriented curriculum for educators and students, addresses this. She writes:

Memorization usually means the ability to recite certain facts like “four times three equals twelve” — a student that is able to do that is not [yet] considered to demonstrate an understanding of multiplication. However, according to the formulation above, the student does understand “four times three” at a basic level that allows effective communication in a specific context (i.e., answering a question in a math quiz). To create a higher level of understanding, additional concrete examples are required (e.g., “Jess has three baskets, four balls in each”) as well as explicit connection to the new concept (“so we can say Jess has four balls multiplied by three”). By adding more familiar (concrete) examples to demonstrate the meaning of the concept, we can establish a higher level of meaning for “Multiplication.”


One proxy that teachers use for understanding and transfer of knowledge is how well students can do all sorts of problem variations. A student in my seventh grade math class recently provided an example of this. As an intro to a lesson on complex fractions, I announced that at the end of the lesson they would be able to do the following problem:

(3/4 ÷ (-3/5)) / (2 1/3 × (-3/4))
The boy raised his hand and said “Oh, I know how to solve that.” I recognized this as a “teaching moment” and said “OK, go for it.” He narrated step by step what needed to be done: “You flip the -3/5 to become -5/3 and multiply, and you get -5/4. Then on the bottom you change 2 1/3 to 7/3 and multiply it by -3/4 to get -21/12. So then you have -5/4 on top and -21/12 on the bottom, and you divide them. So -5/4 ÷ -21/12 is the same as -5/4 × -12/21. That gets you a positive, and the answer is 5/7.” And that happens to be the correct answer.

He had certainly never seen this exact problem before. And while he did not know why the invert-and-multiply rule worked, nor could he explain why multiplying two negatives yields a positive product, he was able to orally dictate the method, taking it apart mentally and explaining it verbally. He put together basic skills that he learned and used reasoning to see how they fit together in order to solve a more complex problem.
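The student’s step-by-step arithmetic can be checked mechanically with Python’s exact-rational `fractions` module (a quick sketch; the problem is the one his narration describes):

```python
from fractions import Fraction

# Top of the complex fraction: 3/4 divided by -3/5
top = Fraction(3, 4) / Fraction(-3, 5)      # "flip -3/5 to -5/3 and multiply"
assert top == Fraction(-5, 4)

# Bottom: the mixed number 2 1/3 (= 7/3) times -3/4
bottom = Fraction(7, 3) * Fraction(-3, 4)   # -21/12, i.e. -7/4 unsimplified
assert bottom == Fraction(-7, 4)

# Dividing the two negatives gives a positive result
print(top / bottom)  # 5/7
```

Every intermediate value matches what he dictated, including the final answer of 5/7.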

Ending the Understanding Fixation

The belief that teaching procedures prior to understanding will result in “math zombies” is entrenched in educational culture. The people pushing these ideas view the world through an adult lens which they’ve acquired through the very practices that they feel do not work. They become angry that their teachers (supposedly) didn’t explain all these things to them and are certain that they would have liked math more and done better if only their teachers had focused on understanding. Their views and philosophies are taken on faith by school administrations, school districts and many teachers — teachers who have been indoctrinated in schools of education that teach these methods.

These ideas are so entrenched that even teachers who oppose such views feel guilty when teaching in the traditional manner so reviled by well-intentioned reformers. Given that today’s employers complain about the lack of basic math skills among their recent college-graduate hires, the fixation on conceptual understanding that prevails in the early grades has produced a situation in which “understanding” foundational math often does not even involve “doing” math at all.





Your AWS Certificate Makes You an AWS Salesman


I must have been the last developer still confused by the AWS interface. I knew how to access DynamoDB; that was the only tool I needed for my daily work. But everything else was a mystery. How do I access web hosting? If I needed a small server to host a static website, what service would I use? Searching for "web hosting" inside the AWS console yielded nothing.

After digging through the web, I found the answer: an Elastic Compute Cloud instance, better known as EC2. I learned that I could use it under the "Free Tier." Amazon offers free tiers for many services, but figuring out the actual cost beyond that introductory period requires elaborate calculation tools. In fact, I’ve often seen independent developers build tools specifically to help people decipher AWS pricing.

If you want to use AWS effectively, it seems the only path is to get certified. Companies send employees to conferences and courses to learn the platform. I took some of those courses and they taught me how to navigate the interface and build very specific things. But that skill isn't transferable. In the course, I wasn't exactly learning a new engineering skill. Instead, I was learning Amazon.

Amazon has created a complex suite of tools that has become the industry standard. Behind its moat of confusion, we are trained to believe it is the only option. Its complexity justifies the high cost, and the Free Tier lures in new users who settle into the idea that this is just "the way" to do web development.

When you are presented with a simple interface like DigitalOcean or Linode and a much cheaper price tag, you tend to think that something is missing. Surely, a cheaper, simpler service must lack half the features, right? The reality is, you don't need half the stuff AWS offers. Where other companies create tutorials to help you build, Amazon offers certificates. It is a powerful signal for enterprise legitimacy, but for most developers, it is overkill.

This isn't to say AWS is "bad," but it obscures the reality of running a web service. It is much easier than it seems. There are hundreds of alternatives for hosting. You can run your services reliably on a VPS without ever breaking the bank.

Most web programming is free, or at the very least, affordable.


Feldspars


 

Returning from a trip to New Mexico to explore some Puebloan ruins, I picked up this beautiful chunk of labradorite in the town of Quartzsite. This mineral creates an eerie blue shimmer in the sunlight: a phenomenon called ‘labradorescence’. Reading up on it, I discovered it’s a form of feldspar.

About 60% of the Earth’s crust is feldspar, and I know so little about this stuff! It turns out there are 3 fundamental kinds:

orthoclase is potassium aluminosilicate
albite is sodium aluminosilicate
anorthite is calcium aluminosilicate

Then there are lots of feldspars that contain different amounts of potassium, sodium and calcium. We get a triangle of feldspars with orthoclase, albite and anorthite at the corners. You can find labradorite on this triangle:

But not all points in this triangle are possible kinds of feldspar! There’s a big region called the ‘miscibility gap’, where as you cool the molten mix it separates out. Apparently this is because the radius of potassium is too much bigger than that of calcium for them to get along. Sodium has an intermediate radius, so it gets along with either calcium or potassium.

And there are also subtler issues. When you cool down the feldspar called labradorite, it separates out a little, forming tiny layers of two different kinds of stuff. When the thickness of these layers is comparable to the wavelength of visible light, you get a weird optical effect: labradorescence! You really need a movie to see the strange shimmer as you turn a piece of labradorite in the sunlight.

In fact there are 3 kinds of feldspar that separate out slightly as they cool and harden, forming thin alternating layers of two substances:

• The ‘peristerite gap’ produces layers in feldspars with 2-16% anorthite and the rest albite: these layers create the beauty of moonstone!

• The ‘Bøggild gap’ produces layers in feldspars with 47-58% anorthite and the rest albite: these are labradorites!

• The ‘Huttenlocher gap’ produces layers in feldspars with 67-90% anorthite and the rest albite: these are called ‘bytownites’. For some reason these layers do not seem to produce an interesting visual effect. Maybe their thickness is too far from the wavelength of visible light.

All these gaps are ‘miscibility gaps’: that is, feldspars with these concentrations of anorthite and albite are unstable: they want to separate out. That’s why they form layers.

The physics and math of all this stuff is fascinating. Crystals try to do whatever it takes to minimize free energy, which is energy minus entropy times temperature. That’s why many feldspars have different high- and low-temperature forms. But sometimes when molten rock cools quickly, it doesn’t have time to reach its free energy minimizing state.
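The miscibility gaps come straight out of this free-energy minimization. The simplest toy model that exhibits one is the ‘regular solution’, with free energy of mixing F(x) = RT[x ln x + (1−x) ln(1−x)] + Ωx(1−x), where x is the fraction of one component and Ω penalizes mixing. This is a generic thermodynamics sketch, not a fitted feldspar model, and the temperature and Ω below are arbitrary illustrative values. The key fact: when Ω > 2RT the curve becomes concave in the middle, so intermediate compositions lower their free energy by separating:

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def mixing_free_energy(x, T, omega):
    """Regular-solution free energy of mixing per mole (toy model)."""
    entropy_term = R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))
    interaction_term = omega * x * (1 - x)
    return entropy_term + interaction_term

def is_unstable(x, T, omega, h=1e-4):
    """Inside the spinodal: d²F/dx² < 0, so the mixture wants to separate."""
    f = lambda y: mixing_free_energy(y, T, omega)
    second_deriv = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    return second_deriv < 0

T = 800.0                   # kelvin (illustrative)
omega_weak = 1.0 * R * T    # below the 2RT threshold: fully miscible
omega_strong = 3.0 * R * T  # above it: a miscibility gap opens

print(is_unstable(0.5, T, omega_weak))    # False — one smooth well
print(is_unstable(0.5, T, omega_strong))  # True — a 50/50 mix separates
```

At x = ½ the second derivative is 4RT − 2Ω, so the sign flips exactly at Ω = 2RT, matching the numerical check.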

For feldspar all of these issues are complex, because feldspar crystals are complicated structures:

Aluminum and silicon have to be distributed among the corners of the tetrahedra here, and there are various ways to do this. The distribution is determined by the relative amounts of potassium, sodium and calcium, which are the white balls. The distribution of aluminum and silicon in turn controls the symmetry of the crystal, which can be either ‘monoclinic’ or the less symmetrical ‘triclinic’.

The picture here shows the difference between monoclinic and triclinic crystals:

But the picture doesn’t fully capture the symmetry group of an actual crystal—because there’s more to a crystal than just the shape of a parallelepiped! There may be the same atoms at all corners of the parallelepiped, or not, and there may also be other atoms not on the corners.

Let’s get into a bit of the math.

The symmetry group G of a crystal, called its ‘space group’, fits into a short exact sequence:

0 → T → G → P → 1

where T ≅ ℤ³ is the group of translational symmetries and P is the group of symmetries that fix a point, called the ‘point group’. This sequence may or may not split! It splits iff G is a semidirect product of P and T.

For a triclinic crystal, there are only two possible space groups G, and both are semidirect products. P is either trivial or ℤ/2, acting by negation.

For a monoclinic crystal, there are 3 choices of the point group P as a subgroup of O(3):

• P = ℤ/2 (a single 2-fold rotation)
• P = ℤ/2 (a single reflection)
• P = ℤ/2 × ℤ/2 (generated by a 2-fold rotation and the inversion (𝑥,𝑦,𝑧) ↦ -(𝑥,𝑦,𝑧); their product is a reflection)

For each choice of P there are 2 fundamentally different choices of lattice T ≅ ℤ³ it can act on. One is made up of copies of the parallelepiped I showed you. The other is twice as dense; then we call the lattice ‘base-centered monoclinic’:

So, we get 3 × 2 = 6 space groups G that are semidirect products.

But there are 7 other non-split extensions! These other 7 give nontrivial elements of the cohomology group H²(P, T). It’s not obvious that there are just 7 options. Thus, the hardest part of the classification of all 13 monoclinic space groups is essentially the computation of H²(P, ℤ³) for all 6 choices of groups P and their actions on ℤ³.

I knew that cohomology rocks. But it turns out cohomology helps classify rocks!

Now, which of these various groups are symmetry groups of feldspars?

Apparently all the feldspars in the triangle have just two different symmetry groups:

• For the monoclinic feldspars (including sanidine, orthoclase, and high-temperature albite), the crystal has a 2-fold rotational symmetry, a mirror plane, and inversion symmetry

(𝑥,𝑦,𝑧) ↦ -(𝑥,𝑦,𝑧).

The point group is the Klein four-group ℤ/2 × ℤ/2. The lattice is base-centered monoclinic, so there’s an extra translational symmetry shifting by half a cell diagonally across one face of the parallelepiped.

• For the triclinic feldspars (including microcline, low-temperature albite, and anorthite), the only symmetry beyond translation is inversion. So the point group is just ℤ/2. And there are no extra generators of translation symmetry beyond the three edges of the parallelepiped.

Alas, each of these space groups G is the semidirect product of its point group P and its translation symmetry group T ≅ ℤ³. So, no interesting cohomology classes show up!

Nontrivial cohomology classes show up only in crystals where you can’t cleanly separate the translations from the symmetries that fix a point of the crystal. This happens when your crystal has ‘screw axes’ or ‘glide planes’. A screw axis is an axis where you’ve got a symmetry of translating along that axis, but only if you also rotate around it:

A glide plane is a plane where you’ve got a symmetry of translating along that plane, but only if you also reflect across it:

But wait! There’s a rarer kind of feldspar made with barium. It’s called celsian, after Anders Celsius, the guy who invented the temperature scale. Chemically it’s barium aluminosilicate. And its crystal structure has both screw axes and glide planes! So its space group G is not a semidirect product! It’s an extension of ℤ³ by the point group P = ℤ/2 × ℤ/2 that gives a nonzero element of H²(P, ℤ³). See the end of this post for some details.

All this is lots of fun to me: you start with a pretty rock, and before long you’re doing group cohomology. But the classification of symmetry groups is just the start. For mathematical physicists, one fun thing about feldspars is their phase transitions, especially the symmetry-breaking phase transition from the more symmetrical monoclinic feldspars to the less symmetrical triclinic ones! There’s a whole body of work—by Salje, Carpenter, and others—applying Landau’s theory of symmetry-breaking phase transitions to map out the space of different possible feldspar crystals! Here’s one way to get started:

• Ekhard Salje, Application of Landau theory for the analysis of phase transitions in minerals, Physics Reports 215 (1992), 49–99.

Even if you don’t particularly care about feldspars, there are a lot of good general principles of physics to learn here!

Details

Let me sketch out why barium aluminosilicate, or celsian, has a space group G that’s described by a non-split short exact sequence:

0 → T → G → P → 1

Its point group is P = {e, r, m, i} ≅ ℤ/2 × ℤ/2, where we can take r to be a 180° rotation about the y axis and m to be a reflection that negates the y coordinate, so that i = rm is inversion. In coordinates:

r acts as (x, y, z) ↦ (−x, y, −z)
m acts as (x, y, z) ↦ (x, −y, z)
i acts as (x, y, z) ↦ (−x, −y, −z)

We can take the translation lattice T ≅ ℤ³ to be the lattice generated by

f₁ = (1,0,0), f₂ = (0,1,0), f₃ = (½,½,½)

Note that (0,0,½) is not in T.

To compute the 2-cocycle we need a set-theoretic section s: P → G. We choose

s(e) = identity
s(m) = a glide reflection: (x, y, z) → (x, −y, z + ½)
s(i) = inversion: (x, y, z) → (−x, −y, −z)
s(r) = s(i)·s(m): (x, y, z) → (−x, y, −z + ½)

As usual, the 2-cocycle c: P × P → T is defined by

c(g,h) = s(g)·s(h)·s(gh)⁻¹

The interesting value is c(m, m): the glide composed with itself gives (x, y, z) → (x, −y, z+½) → (x, y, z+1), so s(m)² = translation by (0, 0, 1), while s(m²) = s(e) is the identity. Thus c(m, m) = (0, 0, 1). The other values are trivial: c(i, i) = 0, c(r, r) = 0.
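These compositions are easy to verify mechanically if we represent each symmetry as an affine map x ↦ Ax + t, with A an integer matrix and t a translation (a sketch, using exact rationals so the ½’s stay exact; s(m) is the glide reflection chosen above):

```python
from fractions import Fraction

half = Fraction(1, 2)

def apply(m, x):
    """Apply the affine map m = (A, t), i.e. x -> A x + t."""
    A, t = m
    return tuple(sum(A[i][j] * x[j] for j in range(3)) + t[i] for i in range(3))

def compose(m1, m2):
    """The map m1 ∘ m2 (apply m2 first, then m1)."""
    A1, t1 = m1
    A2, t2 = m2
    A = tuple(tuple(sum(A1[i][k] * A2[k][j] for k in range(3)) for j in range(3))
              for i in range(3))
    return (A, apply(m1, t2))

I3 = ((1, 0, 0), (0, 1, 0), (0, 0, 1))

# s(m): the glide reflection (x, y, z) -> (x, -y, z + 1/2)
s_m = (((1, 0, 0), (0, -1, 0), (0, 0, 1)), (0, 0, half))

# Squaring the glide: linear part collapses to the identity,
# translation part is (0, 0, 1) — exactly the cocycle value c(m, m).
s_m_squared = compose(s_m, s_m)
assert s_m_squared == (I3, (0, 0, Fraction(1)))
```

So s(m)² really is translation by (0, 0, 1) while s(m²) is the identity, which is what makes c(m, m) nonzero.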

Now, is this cocycle nontrivial in H²(P, T)? It would be trivial if we could find a different section that makes the cocycle zero—that is, find a function b: P → T such that replacing s(g) with s'(g) = s(g) + b(g) makes

c'(g,h) = s'(g)·s'(h)·s'(gh)⁻¹

be the identity for all g,h. I will spare you the calculation proving this is impossible. The idea is simply this: the reflection m squares to the identity in the point group, but no matter how we choose b, s'(m) is a glide reflection, so it squares to a nontrivial translation. On the other hand, s'(m²) is trivial since m² is, so

c'(m,m) = s'(m)·s'(m)·s'(m²)⁻¹

is nontrivial.
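The spared calculation can also be checked by brute force: every candidate correction replaces s(m) by (translate by t) ∘ s(m) for some t ∈ T, and none of these squares to the identity. A sketch using the generators f₁, f₂, f₃ above (a bounded search box, not a proof — though the parity argument shows the box size doesn't matter, since killing the square would need 2a + c = 0 and c = −1 simultaneously for integers a, c):

```python
from fractions import Fraction
from itertools import product

half = Fraction(1, 2)

# Lattice generators for T (celsian's centred lattice, as above)
f1, f2, f3 = (1, 0, 0), (0, 1, 0), (half, half, half)

def lattice_vector(a, b, c):
    return tuple(a * f1[i] + b * f2[i] + c * f3[i] for i in range(3))

def glide_squared_translation(t):
    """Apply s'(m) = (translate by t) ∘ s(m) twice to the origin.
    The linear part of s'(m)² is the identity, so the image of the
    origin is exactly the translation part of s'(m)²."""
    x, y, z = Fraction(0), Fraction(0), Fraction(0)
    for _ in range(2):
        # s'(m): (x, y, z) -> (x + t0, -y + t1, z + 1/2 + t2)
        x, y, z = x + t[0], -y + t[1], z + half + t[2]
    return (x, y, z)

# Search all lattice corrections in a modest box:
# none makes the square trivial.
found = any(
    glide_squared_translation(lattice_vector(a, b, c)) == (0, 0, 0)
    for a, b, c in product(range(-5, 6), repeat=3)
)
print(found)  # False: s'(m)² is always a nontrivial translation
```

With t = (0, 0, 0) the square is translation by (0, 0, 1), recovering c(m, m); no lattice correction can cancel it, so the extension does not split.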






Lighter, not faster


I keep hearing variants of this complaint lately:

“This (tool / workflow / service / slot machine) is slower than me doing it manually.

Therefore, it's not worth using.”

These people are missing the point. Speed is easy to measure - that’s great. But focusing on speed overlooks the importance of subjective effort and mental load.

Let's talk about grocery stores, naval signaling flags, and the value beyond time saved.

The grocery store self-checkout

In the grocery store, do you choose a human cashier or the self-checkout machine?

People who prefer self-checkout often believe that it's faster. But in my highly-scientific study (loitering in the soup aisle with a stopwatch, n=24), the fastest self-checkout user was only equal in speed to the average cashier in scanning items. Once you add in the time to bag items and pay (not to mention "unexpected item in bagging area"), most people have no chance to outpace a human cashier.

The true value of the self-checkout is to offload social effort, the weight of interaction. Arthur Schopenhauer lived too early to scan his own groceries, but he did write, "A man can be himself only so long as he is alone... for it is only when he is alone that he is really free."

Artist’s depiction

Decoding naval signaling flags (semaphore)

Quick, what is this sailor saying to you?

Image via Wikimedia Commons

It's flag semaphore for the letter "U", of course. One weekend, I was tired of staring at the lookup table for flag semaphore (a common cipher used in puzzle hunts), so I made an interactive graphical tool to help me decode it.

Try it out here: Semaphore Decoder

It worked great and I felt that it was way faster, just like our self-checkout users above. But after a highly-scientific evaluation (decoding four phrases from my friend Johnny), I was surprised to learn the decoder was only equal in speed to the lookup table.

The true value of the semaphore decoder is to offload cognitive effort. Instead of burning mental energy trying to match a shape in a lookup table, I can mechanically use my tool to grind through the rote decoding process - no faster, but still easier.

That means I'm free to focus my efforts on more interesting, fun, and challenging aspects of the puzzle. I keep more energy for the next puzzles in the hunt.
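A decoder like this is, at bottom, just a reverse lookup: map each (left-arm, right-arm) position pair to a letter and let the machine do the matching instead of your eyes. A minimal sketch of the idea — note the angle pairs below are illustrative placeholders, not the real semaphore chart, which a real tool would load in full:

```python
# Arm positions as angles in degrees, measured from straight down.
# NOTE: these pairs are illustrative placeholders, NOT the actual
# semaphore alphabet.
FLAG_TABLE = {
    (45, 0): "A",
    (90, 0): "B",
    (135, 0): "C",
    (90, 180): "R",
}

def decode(left_angle, right_angle):
    """Reverse-lookup one flag position; '?' for anything unknown."""
    return FLAG_TABLE.get((left_angle, right_angle), "?")

message = [(45, 0), (90, 0), (90, 180)]
print("".join(decode(l, r) for l, r in message))  # ABR
```

The table is built once; after that, each letter costs a dictionary lookup instead of a visual scan, which is exactly the cognitive offloading described above.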

Lighter burdens make your journey feel faster

Via my third and final highly-scientific study (my personal vibes), I've come to believe we fixate on speed and time measures for two reasons:

  1. "Time is money" is a pervasive metaphor in our culture.

  2. It's easy to measure and compare.

But even if a new approach is equally slow or even slower, the value of reducing effort is real.

I've previously written a bit on the history of the command bar/palette seen in apps like VS Code, Notion, and Slack. That’s a UI pattern that is actually slower than dedicated keyboard shortcuts, but it offers a better user experience through better discoverability and reduced cognitive load.

The self-checkout is slower, but I can relax a little. The semaphore decoder isn't any faster, but I feel less mentally tired. And the command bar is slower, but I can always find what I was looking for.

Sure, if all things are equal, I prefer faster. Who wouldn’t? But if you only prioritize hard numbers and squeezing out every moment of savings, you are going to miss the opportunity to make your effort lighter - and relax for the same result.

I like the tools that make my work lighter, not just faster.
