
You Should Use Mini Whiteboards

If there is one teaching tool I recommend, it’s mini whiteboards. They are the best way to see what students understand and to adjust my teaching on a minute-to-minute basis. My system didn’t come together overnight, though; it took a few years of trial and error to get it right. This is my guide to using mini whiteboards in math class.

The Basics

Here’s the basic idea. Every student has a mini whiteboard and a marker. You ask a question: maybe you write it on the board, or project it, or something else. Each student writes their answer on their mini whiteboard. Then, on your signal, students hold their whiteboards up facing you so you can see each answer.

This is the absolute best way to check for understanding. In very little time, I can sample the entire class and get a sense of what students know and don’t know. The activity is flexible: I can ask as many or as few questions as I want, I can adjust in response to student answers, and we can do a quick reteach and practice on the whiteboards if necessary.

One thing to emphasize right off the bat: I love mini whiteboards, but you shouldn’t assume I spend large chunks of time using them. I don’t. For the majority of my class, students are solving problems using pencil and paper, or we are talking about the math they’ve done on pencil and paper. In a typical class I will do two rounds of questions on mini whiteboards, each 1-5 questions long. Sometimes it’s more, sometimes it’s less. This isn’t the main way that students do math in my class. Mini whiteboards are a bit slow: I ask a question, then I wait until every student has a chance to answer, then students hold their boards up, then I take a moment to scan, and then we are on to the next question. There’s nothing wrong with going slow sometimes, but paper and pencil is much more efficient if I want to maximize the number of problems students solve. So I pick a few well-chosen mini whiteboard questions to help me make decisions about what to do next, when to move on, and when to intervene. Then we put the whiteboards away and get back to pencil and paper.

When to Use Them

Here are a bunch of places I use mini whiteboards in my class. I don’t do all of these in a single class, though over the course of a week I might use each of these at least once.

  • Prerequisite knowledge check. In every lesson, there is prerequisite knowledge that it’s helpful for students to know. If we’re graphing a proportional relationship, they should know how to plot points on a coordinate plane. If we’re solving two-step equations, they should know how to solve one-step equations. If we’re solving complementary angle problems, they should know that a right angle measures 90 degrees. Mini whiteboards are a great way to check these skills toward the beginning of the lesson. Pick a few prerequisite skills and see whether students have those skills down. If they’re shaky I give a quick reminder and we do some extra practice on mini whiteboards.

  • Check for understanding from a previous lesson. Do students remember what they learned yesterday? Sometimes yesterday’s lesson is a prerequisite skill, but other times it isn’t. Either way, it’s worth checking whether students remember yesterday’s lesson so I know if I need to revisit those concepts in the future. My experience is that students remember far less the day after a lesson than I would like. A quick check holds me accountable and points me to the skills we need to spend more time on.

  • Check for understanding before independent practice. Students are about to start some independent practice. Are they ready? Mini whiteboards are the best way to check. Ask a few questions similar to the independent practice questions and see how many students get them right. I have two possible follow-ups. If a large portion of the class gets it wrong, I need to stop and do a full-class reteach. If most students get it right, I can jot down the names of students who made mistakes on a post-it and check in with them individually as we start independent practice.

  • Check for understanding of atoms. This is really a topic for a longer post, but the short version: divide your objective into the smallest possible steps, and teach one step at a time. This often means you need to teach multiple “atoms” — small steps on the way to a larger objective. Mini whiteboards are a great way to check understanding of those atoms and make sure they are secure before jumping into a larger skill.

  • Stamp after a discussion. Let’s say I notice a common mistake. A bunch of students say that -4 + -5 = 9. (This is a common mistake after introducing multiplication with negatives, as students apply the rule for multiplication to addition.) So we discuss it as a class, visualize it on the number line, and talk about the difference between addition and multiplication. That discussion is nice, but it often doesn’t stick. After the discussion I have students grab mini whiteboards, and we do a few quick questions to stamp the learning.

  • Step-by-step. Let’s say we’re doing a multi-step problem. Mini whiteboards are a nice way to work through it step-by-step. Have students complete the first step, then hold up whiteboards and check. Then repeat for the next step. This is a great tool to figure out where in a complex procedure students start to get confused.

  • Check an explanation. Maybe we’re talking about something conceptually tricky, like why multiplying three negatives results in a negative but four negatives is positive. I can ask students to write their explanation on mini whiteboards: why does multiplying four negatives, like (-3)(-2)(-10)(-1), result in a positive? I can’t read a full class of explanations all at once, but I can read a bunch and get a sense of how well students understand that idea.

  • Give students a bit of quick practice. If I realize students just need a few more questions of practice with something, beyond what I have prepared for pencil and paper that day, mini whiteboards are always there. While I usually go one question at a time, I will also occasionally give students three to five questions, have them try all the questions, and check them all at once as best as I can. This isn’t quite as powerful as a check for understanding, but it’s an efficient way to get a few quick practice questions in.

Some Questions and Answers

That was the heart of the post. Mini whiteboards are great. They’re useful for all sorts of stuff in math class. You should use them.

If you want to hear more about all the nitty-gritty details of how to set up a mini whiteboard system in class, read on. Credit to Adam Boxer here: his blog posts and webinars have been a huge help in refining my mini whiteboard systems. That said, there’s no magical advice to make mini whiteboards work perfectly. It takes time and lots of little adjustments. Read everything you can find, but if in doubt give them a shot, see what works, and refine from there.

Q: This sounds really cool. Mini whiteboards must have changed your classroom overnight!

A: Not exactly. Rewind: it’s a little over three years ago. I’m in my second year teaching 7th grade math. I’m following the curriculum, so on Monday I teach Unit 2 Lesson 3, on Tuesday I teach Unit 2 Lesson 4, and so on.

It isn’t going well. Lots of students aren’t learning. I’m trying lots of things and making progress, but that progress sure feels slow. So I start using mini whiteboards. Toward the end of each lesson I have students pull out the mini whiteboards, and I ask some check-for-understanding questions. And every time, half or more of the class get them wrong. There isn’t much time left in class, and now I know much of the class is confused. What am I going to do? I don’t have time to deal with all of that.

This felt really hard. And in the long term, this type of checking for understanding drove a ton of positive changes in my teaching. I wrote a bit about this in a previous post. I won’t get into all those little details now.

But that leads me to a piece of advice. In my experience, the best place to start with mini whiteboards is a prerequisite knowledge check toward the beginning of class. It can feel discouraging to ask questions, get a lot of wrong answers, and not have the time or the tools to address it. That’s also discouraging for students and can reduce motivation over time. Starting with a prerequisite knowledge check at the beginning of class can still be discouraging, but you have way more time to respond. Even a quick reminder and a round of practice with a prerequisite skill can make a difference in that lesson. Over time you can add more mini whiteboard tools to your toolkit, but this is a good place to start.

Q: That sounds cool. I have some boring logistical questions though. How do you make sure students don’t copy each other?

A: So here’s the routine: students write their answer to a question, then they “flip and hover”: they flip the whiteboard upside down and hold it hovering above the desk. This serves two purposes: one, it reduces copying, and two, it signals to you which students are finished.

This doesn’t reduce copying to zero. You still need to be an active teacher, stand where you can see as many students as possible, and scan the room.

Q: What do you do if students just don’t answer and leave the whiteboard blank?

A: This is really tricky. Some teachers might recommend requiring students to write a question mark or something along those lines. I don’t want to get into power struggles with students over whether they write a question mark when they’re confused. Instead, here are a few steps I take when I see a student not answering:

I try to start each round of whiteboard questions with a very simple question I know everyone can answer. This gives me a barometer: if a student isn’t answering this question, they’re not paying attention or they’re opting out. It’s really important not to mistake opting out for being confused. If a student just doesn’t know enough to attempt the problem, that’s good data for me!

Ok but some students are still opting out. I try a bunch of little nudges to get students participating. I seat students who are more likely to opt out toward the front. I use proximity and nonverbal cues to remind them. I try to keep the success rate high — if I ask too many hard questions, students get used to not knowing the answer and are more likely to opt out.

If those nudges don’t work and I’m confident the student is opting out rather than confused, I will either hold the student after class or have them come in for a few minutes at lunch. The logic is simple: you weren’t answering questions, so you must need some extra help. I try not to frame it punitively. If they do need extra help, then they get it! If they don’t, they generally don’t like this very much and will be more likely to participate. If all of that doesn’t work, I contact home.

In all of this I have to distinguish between students who are confused, and students who are opting out. If a student is confused, I offer them support and make sure it isn’t framed as a punishment. As long as they are trying the questions they know how to do, we’re fine. If a student is opting out, I focus on all these little nudges to try and get them participating.

This doesn’t always help for every single student. But I find that this works for the vast majority. If 1-3 students in a class participate inconsistently, it’s not the end of the world. I can work to improve their participation on an individual basis and avoid power struggles in the moment. But I need to be diligent with my nudges: if non-participation gets much higher than that, it becomes contagious and can make mini whiteboards an ineffective tool.

Q: What do you do if students are being mean to each other about wrong answers?

A: I don’t have any brilliant tricks here besides keeping a really close eye on this, particularly when you first introduce mini whiteboards. When I ask students to hold up their whiteboards, I watch for anyone who is looking around the room at other students. Even a quick laugh at another student’s answer can shut them down for the class. The goal is to create a culture where mistakes are totally normal and we learn from them as a class, and to catch any meanness right away and nip it in the bud.

Q: Tell me more about the routine and some of the details.

A: Here’s what my tables look like (two students per table).

In each bin are markers, cut-up towels for erasers, and calculators when necessary. Whiteboards are stacked underneath.

When we’re not using them, whiteboards stay stacked under the bin, and markers are off-limits. I try to be diligent about this. They are very fun toys if I’m not paying attention.

When we’re going to use them, I ask students to grab a whiteboard and a marker, and to quickly test their marker. I observe to make sure students are grabbing their whiteboards and prompt any who aren’t. I keep extra markers on my desk and try to keep extras in the bins, so it only takes a few seconds to hand out a replacement or two where necessary and have everyone ready to go.

I mostly write questions on the board or on a piece of paper under a document camera. After asking a question, I observe to make sure students are solving and writing their answers, and prompt students who need an extra nudge. I try to give plenty of think time, and I remind students to flip their whiteboards upside down once they’ve written their answer. Then, once students have written their answers, I say, “whiteboards up, go.” On go, they hold their whiteboards up, and I try to look at each one individually. This whole process takes time, which is why it isn’t ideal for extended chunks of class, but the slower pace pays for itself in the quality of information I’m getting.

I often ask questions addressing a few different goals in one session. For instance, in one round at the beginning of class, I might ask a question checking for understanding from yesterday’s lesson, two prerequisite knowledge checks for the current day’s lesson, and a follow-up for a common mistake from the Do Now. Then, if necessary, we spend a bit more time on any of those topics.

When we’re done, I say “erase, stack, markers back” (get it, it rhymes!) and students put everything back. I also actively observe here. It’s very tempting to keep the whiteboard out and draw something, or to use the marker to draw on your arm. They all have to go back before we move on. With time students get fast at this. The goal is to be around 15 seconds transitioning to or from whiteboards.

Q: Where do you get the whiteboards, markers, and erasers?

A: I found the whiteboards in my room when I started at this school. Any whiteboards will do, though boards about the size of a regular piece of paper work well; they don’t need to be much bigger than that. Two-sided whiteboards are nice because a blank back is very tempting to graffiti. We don’t do much graphing in 7th grade, but if you teach older students, there are two-sided mini whiteboards with a coordinate plane on one side. Erasers are just pieces of cut-up towel.

Initially my school provided plenty of markers. The budget is a bit tighter now, and when they stopped providing markers I made a bit of a fuss. I said that science teachers get lab supplies, English teachers get books, and so on, and for me, the essential tool in my math class is mini whiteboard supplies. My principal agreed to spare a bit of money. I buy markers in bulk and get by on under $100 of markers a year, and I bet you could get a set of whiteboards for your classroom for under $100 as well. Your mileage will vary, but I think that’s a perfectly reasonable ask for a typical school, and you can often find some random underused department budget line to put it on. I recommend Expo for markers; the nice brand-name markers just last way longer than the cheap ones.

I am also diligent about making the most of the markers I have. If a student says a marker is dead, I don’t just throw it out; I toss it in a pile on the windowsill. Sometimes it’s dead because a student didn’t put the cap on all the way, and giving it a rest with the cap on can bring it back to life. If the tip gets pushed in, I have a pair of needle-nose pliers to pull it back out.

Q: What do you do if students are drawing graffiti on the back, or drawing pictures of dogs instead of doing math?

A: I keep a close eye out and try to nip this in the bud early. A nice consequence here is inviting the student to come in outside of class time and help clean the whiteboards or black out graffiti.

Q: Any final advice?

A: I just want to emphasize something I mentioned earlier. Mini whiteboards are great. You should use them. But this isn’t some magical teaching tool that will transform your classroom overnight. First, there are a lot of little logistical things you have to get right: how to make sure whiteboards go away promptly, how important it is to keep the success rate high so students don’t get discouraged, how to respond to students who are opting out. Second, you might get really discouraging results. I know I did.

There are two main benefits of mini whiteboards. One is the check for understanding: did students understand what I just taught? Do students know what they need to know to access this lesson? That check for understanding is really valuable. The second benefit is that mini whiteboards give broader insight into whether my teaching is working. If, over and over again, students don’t remember what they were supposed to have learned, then something is wrong. It’s easy to blame the curriculum, or last year’s teachers, or the parents, or school culture, or whatever. I’ve been there. But I’ve found that there are lots of changes I can make to my teaching in that situation, and mini whiteboards serve as a rapid-fire barometer for figuring out what works and what doesn’t.

The second benefit has mattered more to me. It’s felt discouraging at times, and the progress is often slow, but mini whiteboards are a great way to see which changes in my teaching have helped students learn more math.


In medieval France, murderous pigs faced trial and execution

It’s a common scene in many films set in medieval Europe: a wooden cart wheeling its way through a jeering crowd of townsfolk, taking a condemned prisoner to the gallows. 

However, reality is sometimes stranger than fiction. Because sometimes the criminal wheeled about town wasn’t human. Occasionally, the prisoner at the end of the rope was a pig, hung upside down until dead. In medieval Europe, pigs went to trial—and the gallows—surprisingly often.

Most of us don’t live on farms today, so it can be easy to forget how dangerous domesticated animals can be. Cows can trample people to death, horses can deliver fatal kicks, and those are just the herbivores. Pigs, on the other hand, are omnivorous. Throughout history, this made them useful as they could be fed kitchen scraps and waste. Yet a pig allowed to wander freely could easily overpower a small child, and as a result, there are hundreds of records of pigs killing and eating children across medieval Europe. 

Medieval pigs could and would kill children

In 1379, a group of pigs in the village of Saint-Marcel-lès-Jussey in eastern France killed a swineherd’s child. In 1386, a sow in Falaise, Normandy, savaged a young boy, who died of his injuries. In 1457, a sow killed five-year-old Jehan Martin in the village of Savigny in Burgundy. Gruesomely, the sow’s six piglets were nearby, covered in blood. 

“We are used to this pink, fluffy, or quite chubby animal that would be quite slow, but pigs in the Middle Ages were much closer to the wild boar,” says Sven Gins, a historian and a researcher at the University of Groningen, as well as the author of Casting Justice Before Swine: Late Mediaeval Pig Trials as Instances of Human Exceptionalism. “So they were very fast, very strong, and they ate everything, including human meat sometimes.” 

[Image: a medieval illustration of a wild boar with large tusks and a protruding red tongue.]
In the Middle Ages, pigs were more like wild boars. Image: Public Domain

Some pigs even went to trial for their crimes

In France, these incidents often resulted in trials, with the pig treated almost as a human defendant. “A lot of the records are saying, ‘This pig went to jail. This pig was transported in a cart. We got an executioner from Paris, and we paid him,’” says Gins. “These are very serious legal proceedings, in many cases. Almost mundane, actually. To us, it’s sensational that they would put a pig on trial, but to people at the time, it seemed [like] an ordinary thing to do.”

Gins notes that, as wild as pig trials sound, their purpose may have been practical. “One thing that is often not mentioned is that justice in general at the time was very much focused on reconciliation between the two parties,” he says. Sometimes, all it took was a payment from one side to the other to resolve an issue. “But then if a child is killed, that’s quite major, and money isn’t always going to cut it. So in that case, it helps if the law steps in and says, ‘We’ll take over from here.’” 

Taking a pig to trial gave authorities a chance to dig deeper. “They sometimes wanted to know, was there any ill-intent present in this? If you know that a pig is dangerous, why would you let it wander about in the presence of young children? Sometimes even the parents themselves were suspect. They wanted to know if it was an unwanted child that they had left near the pigs, or if it was simply the owner who had been neglectful,” says Gins. “I would say that the court really stepped in to gain clarity and provide a coherent narrative for everyone.”

Some pig trials even went before local dukes

Sometimes, higher authorities would get involved in local pig trials. In the 1379 case, a group of pigs, some belonging to the local abbey, were charged with killing a swineherd’s son.

The abbey, Gins says, wrote to the Duke, Philip the Bold. Gins sums up the letter: “Can you please let our pigs go? Because we are sure that they were not involved in the killing. They are well-behaved pigs.” The Duke listened, and wrote a letter of pardon for the abbey’s pigs.

There’s more to the pig trials than meets the eye

In recent centuries, writers and historians have looked back on the trials of pigs and other animals as senseless revenge by crude peasants. However, animal trials could also serve a cold political purpose for local authorities, as the right to execute criminals and even build a gallows was considered a privilege.

One homicidal pig in the 15th century, Gins notes, ended up in jail for five years before its execution. “That doesn’t scream petty rage to me. There were formal letters sent to the Duke asking, ‘Can we please build a gallows to execute this animal?’” It was quite a victory for the local lord, he adds, that Duke John the Fearless finally acquiesced. Not only did the lord get to show off his power by building a gallows of his own, but he was finally able to get the pig out of his jail and stop paying for its feed. 

Dr. Damian Kempf, a senior lecturer at the University of Liverpool, is an expert on medieval European monsters. He says animal trials were also “about restoring order when there has been chaos.” Contrary to popular belief, he notes, humans often weren’t put to death for crimes; such punishments were reserved for the most wicked deeds, such as infanticide.

“For medieval people, the world was created by God in a very logical way, with animals created first, in order to serve and help human beings who were created in the image of God,” Kempf explains. A trial and public execution, even of a pig, was considered a surefire way “to restore what was broken.” A pig eating a child was an unbearable inversion of the natural order, one that courts in medieval France would not let go unpunished. 

In That Time When, Popular Science tells the weirdest, surprising, and little-known stories that shaped science, engineering, and innovation.

The post In medieval France, murderous pigs faced trial and execution appeared first on Popular Science.




How Much Is Eight Dollars?

You heard me. Is eight dollars five dollars? Or is eight dollars ten dollars? Why are you making that face right now? Stop making faces and answer the question: How much is eight bucks?

Stephen Totilo, games journalist and author of the Game File newsletter, got into the ontological nature of eight bucks in a recent conversation with Nick Kaman, co-creator of the video game Peak, a cute indie climbing adventure. Kaman and his collaborators developed a theory of spending to arrive at a decision to charge $8 for their new game. According to their theory, eight bucks is the upper threshold of five-buck-dom.

"In a player’s mind, what does it mean to spend five bucks? Well, that’s five bucks. But six bucks? Well, that’s still five bucks.

"Four bucks is also kind of five bucks," he continued. "Three bucks is two bucks. And two bucks is basically free.

"So we’ve got these tiers: You know, twelve bucks … that’s ten bucks. But thirteen bucks is fifteen bucks.

"And we found that eight bucks is still five bucks. It doesn’t become ten bucks. Seven ninety nine, that’s five bucks, right?

"So, eight bucks going to five bucks is the biggest differential we could find in pricing, so we found it very optimal."
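For fun, Kaman’s quoted tiers can be sketched as a lookup. Only the named price points come from the interview; the cutoffs between tiers are assumptions, interpolated from the examples for illustration:

```python
def perceived_price(price: float) -> int:
    """Map a sticker price to the tier it 'feels like', per Kaman's quoted
    theory. The exact tier boundaries are assumed, not stated in the quote."""
    if price <= 2:
        return 0    # "two bucks is basically free"
    if price <= 3:
        return 2    # "three bucks is two bucks"
    if price <= 8:
        return 5    # four, six, $7.99, and eight bucks all feel like five
    if price <= 12:
        return 10   # "twelve bucks ... that's ten bucks"
    return 15       # "thirteen bucks is fifteen bucks"

print(perceived_price(7.99))  # prints 5
```

By this model, $8 feels $3 cheaper than it is, the largest downward gap of any tier, which is the “biggest differential” Peak’s developers say they were optimizing for.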




choosing learning over autopilot

I use ai coding tools a lot. I love them. I’m all-in on ai tools. They unlock doors that let me do things that I cannot do with my human hands alone.

But they also scare me.

As I see it, they offer me two paths:

✨ The glittering vision ✨

The glittering vision is they let me build systems in the way that the version of me who is a better engineer would build them. Experimentation, iteration and communication have become cheaper. This enables me to learn by doing at a speed that was prohibitive before. I can make better decisions about what and how to build because I can try out a version and learn where some of the sharp edges are in practice instead of guessing. I can also quickly loop in others for feedback and context. All of this leads to building a better version of the system than I would have otherwise.

☠️ The cursed vision ☠️

The cursed vision is that I am lazy, and I build systems of ai slop that I do not understand. There’s a lot of ink spilled about the perils and pains of ai slop, especially when working on a team that has to maintain the resulting code.

What scares me most is an existential fear that I won’t learn anything if I work in the “lazy” way. There is no substitute for experiential learning, and it accumulates over time. There are things that are very hard for me to do today, and I will feel sad if all of those things feel equally hard in a year, two years, five years. I am motivated by an emotional response to problems I find interesting, and I like problems that have to do with computers. I am afraid of drowning that desire by substituting engaging a problem with semi-conscious drifting on autopilot.

And part of why this is scary to me is that even if my goal is to be principled, to learn, to engage, to satisfy my curiosity with understanding, it is really easy for me to coast with an llm and not notice. There are times when I am tired and I am distracted and I have a thing that I need to get done at work. I just want it done, because then I have another thing I need to do. There are a lot of reasons to be lazy.

So I think the crux here is about experiential learning:

  • ai tools make it so much easier to learn by doing, which can lead to much better results
  • but it’s also possible to use them to take a shortcut and get away without learning
    • I deeply believe that the shortcut is a trap
    • I also believe it is harder than it seems to notice and be honest about when I’m doing this

And so, I’ve been thinking about guidelines & guardrails: how do I approach my work to escape the curse, such that llms are a tool for understanding, rather than a replacement for thinking?

Here’s my current working model:

  1. use ai-tooling to learn, in loops
  2. ai-generated code is cheap and not precious; throw it away and start over several times
  3. be very opinionated about how to break down a problem
  4. “textbook” commits & PRs
  5. write my final docs / pr descriptions / comments with my human hands

The rest of the blog post is a deeper look at these topics, in a way that I hope is pretty concrete and grounded.

but first, let me make this more concrete

Things I now get to care less about:

  • the mechanics of figuring out how things are hooked together
  • the mechanics of translating pseudocode into code
  • figuring out what the actual code looks like

The times I’m using ai tools to disengage from a problem and go fast are the times I’m only doing the things in this first category and getting away with skipping the things in the other two.

Things I cared about before and should still care about:

  • deciding which libraries are used
  • how the code is organized: files & function signatures
  • leaving comments that explain why something is set up in a way if there’s complication behind it
  • leaving docs explaining how things work
  • understanding when I need to learn something more thoroughly to get unblocked

Things I now get to care about that were expensive before:

  • more deeply understanding how a system works
  • adding better observability like nicely structured outputs for debugging
  • running more experiments

The times when I’m using ai tools to enhance my learning and understanding I’m doing the things in the latter two categories.

I will caveat that the appropriate amount of care and effort in an implementation depends, of course, on the problem and context. More is not always better. Moving slowly can carry engineering risk, and I know from experience that it’s possible for a team to mistake micromanagement for code quality.

I like to work on problems somewhere in the middle of the “how correct does this have to be” spectrum, and that’s where my intuition is tuned. I don’t need things clean down to the bits, but how the system is built matters, so care is worth the investment.

workflow

Here is a workflow I’ve been finding useful for medium-sized problems.

Get into the problem: go fast, be messy, learn and get oriented

  1. Research & document what I want to build
    1. I collab with the ai to dump background context and plans into a markdown file
      1. The doc at this stage can be rough
      2. A format that I’ve been using:
        1. What is the problem we’re solving?
        2. How does it work today?
        3. How will this change be implemented?
  2. Build a prototype
    1. The prototype can be ai slop
    2. Bias towards seeing things run & interacting with them
  3. Throw everything away. Start fresh, clean slate
    1. It’s much faster to build it correctly than to fix it

Formulate a solution: figure out what the correct structure should be

  1. Research & document based on what I know from the prototype
    1. Read code, docs and readmes with my human eyes
    2. Think carefully about the requirements & what causes complication in the code. Are those hard or flexible (or imagined!) requirements?
  2. Design what I want to build, again
  3. Now would be a good time to communicate externally if that’s appropriate for the scope. Write a one-pager for anyone who might want to provide input.
  4. Given any feedback, design the solution one more time, and this time polish it. Think carefully & question everything. Now is the time to use my brain.
    1. Important: what are the APIs? How is the code organized?
    2. Important: what libraries already exist that we can use?
    3. Important: what is the iterative implementation order so that the code is modular & easy to review?
  5. Implement a skeleton, see how the code smells and adjust
  6. Use this to compile a final draft of how to implement the feature iteratively
  7. Commit the skeleton + the final implementation document
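
To make step 5 concrete, here is a sketch of what a committed skeleton might look like in Python. The module, names, and signatures here are entirely hypothetical, invented for illustration; the point is that structure and interfaces land first, with bodies intentionally left unimplemented:

```python
# Hypothetical skeleton: files, types, and signatures are committed first,
# with bodies deliberately stubbed out. Every name here is made up.
from dataclasses import dataclass


@dataclass
class SyncReport:
    """Summary of one sync run (fields chosen for illustration)."""
    items_seen: int
    items_updated: int


def fetch_remote_state(source_url: str) -> dict:
    """Pull the remote inventory. Body lands in its own later commit."""
    raise NotImplementedError


def diff_states(local: dict, remote: dict) -> list:
    """Compute the change set. Body lands in its own later commit."""
    raise NotImplementedError


def apply_changes(changes: list) -> SyncReport:
    """Apply the diff and report. Body lands in its own later commit."""
    raise NotImplementedError
```

Each stub then maps naturally onto one entry in the final implementation document, which keeps the later commits small and reviewable.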

Implement the solution: generate the final code

  1. Cut a new branch & have the ai tooling implement all the code based on the final spec
  2. If it’s not a lot of code or it’s very modular, review it and commit each logical piece into its own commit / PR
  3. If it is a lot of code, review it, and commit it as a reference implementation
    1. Then, roll back to the skeleton branch, and cut a fresh branch for the first logical piece that will be its own commit / PR
    2. Have the ai implement just that part, possibly guided by any ideas from seeing the full implementation
  4. For each commit, I will review the code & I’ll have the ai review the code
  5. I must write my own commit messages with descriptive trailers

One of the glittering things about ai tooling is that it’s faster than building systems by hand. I maintain that even with these added layers of learning before implementing, it’s still faster than what I could do before while giving me a richer understanding and a better result.

Now let me briefly break out the guidelines I mentioned in the intro and how they relate to this workflow.

learning in loops

There are a lot of ways to learn what to build and how to build it, including:

  • Understanding the system and its integrations with surrounding systems
  • Understanding the problem, the requirements & existing work in the space
  • Understanding relationships between components, intended use-cases and control flows
  • Understanding implementation details, including tradeoffs and what an MVP looks like
  • Understanding how to exercise, observe and interact with the implementation

I’ll understand each area in a different amount of detail at different times. I’m thinking of it as learning “in loops” because I find that ai tooling lets me quickly switch between breadth and depth in an iterative way. I find that I “understand” the problem and the solution in increasing depth and detail several times before I build it, and that leads to a much better output.

I think there are two pitfalls in these learning loops: one is feeling like I’m learning when I’m actually only skimming, and the other is getting stuck, limited by what the ai summaries can provide. One intuition I’ve been trying to build is when to go read the original sources (like code, docs, readmes) myself. I have two recent experiences top-of-mind informing this:

In the first experience, a coworker and I were debugging a mysterious issue related to some file-related resource exhaustion. We both used ai tools to figure out what cli tools we had to investigate and to build a mental model of how the resource in question was supposed to work. I got stuck after getting output that seemed contradictory, and didn’t fit my mental model. My coworker got to a similar spot and then took a step out of the ai tooling to go read the docs about the resource with their human eyes. That led them to understand that the ai summary wasn’t accurate: it had missed some details that explained the confusing situation we were seeing.

This example really sticks out in my memory. I thought I was being principled rather than lazy by building my mental model of what was supposed to be happening, but I had gotten mired in building that mental model second-hand instead of reading the docs myself.

In the second experience, I was working on a problem related to integrating with a system that had a documented interface. I had the ai read & summarize the interface and then got into the problem in a way similar to the first step of the workflow I described above. I was using that to formulate an idea of what the solution should be. Then I paused to repeat the research loop but with more care: I read the interface with my human eyes, and found the ai summary was wrong! It wasn’t a big deal and I could shift my plans, but I was glad to have learned to pause and take care in validating the details of my mental model.

ai-generated code is throw-away code

I had a coworker describe working with ai coding tools as being like working on a sculpture. When they asked it to reposition the arm, it would accidentally bump the nose out of alignment.

The way I’m thinking about it now, it’s more like: instead of building a sculpture, I’m asking it to build me a series of sculptures.

The first one is rough-hewn and wonky, but lets me understand the shape of what I’m doing.

The next one or two are just armatures.

The next one might be a mostly functional sculpture on the latest armature; this lets me understand the shape of what I’m doing with much higher precision.

And then finally, I’ll ask for a sculpture, using the vetted armature, except we’ll build it one part at a time. When we’re done with a part, we’ll seal it so we can’t bump it out of alignment.

A year ago, I wasn’t sure if it was better to try to fix an early draft of ai-generated code, or to throw it out. Now I feel strongly that ai-generated code is not precious, and not worth the effort to fix. If you know what the code needs to do and have that clearly documented in detail, it takes no time at all for the ai to flesh out the code. So throw away all the earlier versions, and focus on getting the armature correct.

Making things is all about processes and doing the right thing at the right time. If you throw a bowl and that bowl is off-center, it is a nightmare to try to make it look centered with trimming. If you want a centered bowl then you must throw it on-center. Same here: if you want code that is modular and well structured, the time to do that is before you have the ai implement the logic.

“textbook” commits and PRs

It’s much easier to review code that has been written in a way where a feature is broken up into an iteration of commits and PRs. This was true before ai tooling, and is true now.

The difference is that writing code with my hands was slow and expensive. Sometimes I’d be in the flow and I’d implement things in a way that was hard to untangle after the fact.

I believe that especially if I work in the way I’ve been describing here, ai code is cheap. This makes it much easier/cheaper for me to break apart my work in ways that are easy to commit and review.

My other guilty hesitation before ai tooling was that I never liked git merge conflicts and rebasing branches. It was confusing and had the scary potential of losing work. Now, ai tooling is very good at rebasing branches, so it’s much less scary and pretty much no effort.

I also think that small, clean PRs are an external forcing function to working in a way that builds my understanding rather than lets me take shortcuts: if I generate 2.5k lines of ai slop, it will be a nightmare to break that into PRs.

i am very opinionated about how to break down a problem

I’m very opinionated in breaking down problems in two ways:

  • how to structure the implementation (files, functions, libraries)
  • how to implement iteratively to make clean commits and PRs

The only way to achieve small, modular, reviewable PRs is to be very opinionated about what to implement and in what order.

Unless you’re writing a literal prototype that will be thrown away (and you’re confident it will actually be thrown away), the most expensive part about building a system is the engineering effort that will go into maintaining it. It is, therefore, very worthwhile to be opinionated about how to structure the code. I find that the ai can do an okay job at throwing code out there, but I can come up with a much better division and structure by using my human brain.

A time I got burned by not thinking about libraries & how to break down a problem was when I was trying to fix noisy errors due to a client chatting with a system that had some network blips. I asked an ai model to add rate limiting to an existing http client, which it did by implementing exponential backoff itself. This isn’t a very good solution; surely we don’t need to implement that ourselves. I didn’t think this one through, and was glad a coworker with their brain on caught it in code review.
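
As a sketch of the direction that review pushed toward: rather than hand-rolling backoff, lean on retry support that already exists in a mature library. This example uses urllib3’s built-in `Retry` policy (the same machinery `requests` uses under the hood); the specific parameter values are illustrative, not what was actually shipped:

```python
# Instead of implementing exponential backoff by hand, configure urllib3's
# built-in Retry policy. The parameter values here are illustrative.
from urllib3 import PoolManager
from urllib3.util.retry import Retry

retry_policy = Retry(
    total=3,                                # give up after 3 retries
    backoff_factor=0.5,                     # sleeps between tries grow exponentially
    status_forcelist=[429, 502, 503, 504],  # retry on transient server errors
)

http = PoolManager(retries=retry_policy)
# http.request("GET", "https://example.com/api")  # retries happen transparently
```

A dozen lines of configuration on a well-tested library, instead of a bespoke backoff loop that a reviewer then has to verify.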

i write docs & pr descriptions with my human hands

Writing can serve a few distinct purposes: one is communication, and distinct from that, one is as a method to facilitate thinking. The act of writing forces me to organize and refine my thoughts.

This is a clear smell-test for me: I must be able to write documents that explain how and why something is implemented. If I can’t, then that’s a clear sign that I don’t actually understand it; I have skipped writing as a method of thinking.

On the communication side of things, I find that the docs or READMEs that ai tooling generates often capture things that aren’t useful. I often don’t agree with their intuition; I find that if I take the effort to use my brain I produce documents that I believe are more relevant.

This isn’t to say that I don’t use ai tooling to write documents: I’ll often have ai dump information into markdown files as I’m working. I’ll often have ai tooling nicely format things like diagrams or tables. Sometimes I’ll have ai tooling take a pass at a document. I’ll often hand a document to ai tooling and ask it to validate whether everything I wrote is accurate based on the implementation.

But I do believe that if I hold myself to the standard that I write docs, commit messages, etc with my hands, I both produce higher quality documentation and force myself to be honest about understanding what I’m describing.

Conclusion

In conclusion, I find that ai coding tools give me a glittering path to understand better by doing, and using that understanding to build better systems. I also, however, think there is a curse of using these systems in a way that skips the “build the understanding” part, and that pitfall is subtler than it may seem.

I care deeply about leveraging these tools for learning and engaging, and I think it will be important in the long run. I’ve outlined the ways I’m thinking about how best to do this and avoid the curse:

  1. use ai-tooling to learn, in loops
  2. ai-generated code is cheap and not precious; throw it away and start over several times
  3. be very opinionated about how to break down a problem
  4. “textbook” commits & PRs
  5. write my final docs / pr descriptions / comments with my human hands

The truth behind the 2026 J.P. Morgan Healthcare Conference


Note: I am co-hosting an event in SF on Friday, Jan 16th.


In 1654, a Jesuit polymath named Athanasius Kircher published Mundus Subterraneus, a comprehensive geography of the Earth’s interior. It had maps and illustrations and rivers of fire and vast subterranean oceans and air channels connecting every volcano on the planet. He wrote that “the whole Earth is not solid but everywhere gaping, and hollowed with empty rooms and spaces, and hidden burrows.” Alongside comments like this, Athanasius identified the legendary lost island of Atlantis, pondered where one could find the remains of giants, and detailed the kinds of animals that lived in this lower world, including dragons. The book was based entirely on secondhand accounts, such as travelers’ tales, miners’ reports, and classical texts, so it was as comprehensive as it could’ve possibly been.

But Athanasius had never been underground and neither had anyone else, not really, not in a way that mattered.

Today, I am in San Francisco, the site of the 2026 J.P. Morgan Healthcare Conference, and it feels a lot like Mundus Subterraneus.

There is ostensibly plenty of evidence to believe that the conference exists, that it actually occurs from January 12 to January 16, 2026 at the Westin St. Francis Hotel, 335 Powell Street, San Francisco, and that it has done so for the last forty-four years, just like everyone has told you. There is a website for it, there are articles about it, there are dozens of AI-generated posts on Linkedin about how excited people were about it. But I have never met anyone who has actually been inside the conference.

I have never been approached by one, or seated next to one, or introduced to one. They do not appear in my life. They do not appear in anyone’s life that I know. I have put my boots on the ground to rectify this, and asked around, first casually and then less casually, “Do you know anyone who has attended the JPM conference?”, and then they nod, and then I refine the question to be, “No, no, like, someone who has actually been in the physical conference space”, then they look at me like I’ve asked if they know anyone who’s been to the moon. They know it happens. They assume someone goes. Not them, because, just like me, ordinary people like them do not go to the moon, but rather exist around the moon, having coffee chats and organizing little parties around it, all while trusting that the moon is being attended to.

The conference has six focuses: AI in Drug Discovery and Development, AI in Diagnostics, AI for Operational Efficiency, AI in Remote and Virtual Healthcare, AI and Regulatory Compliance, and AI Ethics and Data Privacy. There is also a seventh theme of ‘Keynote Discussions’, the three of which are The Future of AI in Precision Medicine, Ethical AI in Healthcare, and Investing in AI for Healthcare. Somehow, every single thematic concept at this conference has converged onto artificial intelligence as the only thing worth seriously discussing.

Isn’t this strange? Surely, you must feel the same thing as me, the inescapable suspicion that the whole show is being put on by an unconscious Chinese Room, its only job to pass semi-legible symbols over to us with no regard for what they actually mean. In fact, this pattern is consistent across not only how the conference communicates itself, but also how biopharmaceutical news outlets discuss it.

Each year, Endpoints News and STAT and BioCentury and FiercePharma all publish extensive coverage of the J.P. Morgan Healthcare Conference. I have read the articles they have put out, and none of it feels like it was written by someone who actually was at the event. There is no emotional energy, no personal anecdotes, all of it has been removed, shredded into one homogeneous, smoothie-like texture. The coverage contains phrases like “pipeline updates” and “strategic priorities” and “catalysts expected in the second half.” If the writers of these articles ever approach a human-like tenor, it is in reference to the conference’s “tone”. The tone is “cautiously optimistic.” The tone is “more subdued than expected.” The tone is “mixed.” What does this mean? What is a mixed tone? What is a cautiously optimistic tone? These are not descriptions of a place. They are more accurately descriptions of a sentiment, abstracted from any physical reality, hovering somewhere above the conference like a weather system.

I could write this coverage. I could write it from my horrible apartment in New York City, without attending anything at all. I could say: “The tone at this year’s J.P. Morgan Healthcare Conference was cautiously optimistic, with executives expressing measured enthusiasm about near-term catalysts while acknowledging macroeconomic headwinds.” I made that up in fifteen seconds. Does it sound fake? It shouldn’t, because it sounds exactly like the coverage of a supposedly real thing that has happened every year for the last forty-four years.

Speaking of the astral body I mentioned earlier, there is an interesting historical parallel to draw there. In 1835, the New York Sun published a series of articles claiming that the astronomer Sir John Herschel had discovered life on the moon. Bat-winged humanoids, unicorns, temples made of sentient sapphire, that sort of stuff. The articles were detailed, describing not only these creatures’ appearance, but also their social behaviors and mating practices. All of these cited Herschel’s observations through a powerful new telescope. The series was a sensation. It was also, obviously, a hoax, the Great Moon Hoax as it came to be known. Importantly, the hoax worked not because the details were plausible, but because they had the energy of genuine reporting: Herschel was a real astronomer, and telescopes were real, and the moon was real, so how could any combination that involved these three be fake?

To clarify: I am not saying the J.P. Morgan Healthcare Conference is a hoax.

What I am saying is that I, nor anybody, can tell the difference between the conference coverage and a very well-executed hoax. Consider that the Great Moon Hoax was walking a very fine tightrope between giving the appearance of seriousness, while also not giving away too many details that’d let the cat out of the bag. Here, the conference rhymes.

For example: photographs. You would think there would be photographs. The (claimed) conference attendees number in the thousands, many of them with smartphones, all of them presumably capable of pointing a camera at a thing and pressing a button. But the photographs are strange, walking that exact snickering line that the New York Sun walked. They are mostly photographs of the outside of the Westin St. Francis, or they are photographs of people standing in front of step-and-repeat banners, or they are photographs of the schedule, displayed on a screen, as if to prove that the schedule exists. But photographs of the inside with the panels, audience, the keynotes in progress; these are rare. And when I do find them, they are shot from angles that reveal nothing, that could be anywhere, that could be a Marriott ballroom in Cleveland.

Is this a conspiracy theory? You can call it that, but I have a very professional online presence, so I personally wouldn’t. In fact, I wouldn’t even say that the J.P. Morgan Healthcare Conference is not real, but rather that it is real, but not actually materially real.

To explain what I mean, we can rely on economist Thomas Schelling to help us out. Sixty-six years ago, Schelling proposed a thought experiment: if you had to meet a stranger in New York City on a specific day, with no way to communicate beforehand, where would you go? The answer, for most people, is Grand Central Station, at noon. Not because Grand Central Station is special. Not because noon is special. But because everyone knows that everyone else knows that Grand Central Station at noon is the obvious choice, and this mutual knowledge of mutual knowledge is enough to spontaneously produce coordination out of nothing. This, Grand Central Station and places just like it, are what’s known as a Schelling point.

Schelling points appear when they are needed, burnt into our genetic code, Pleistocene subroutines running on repeat, left over from when we were small and furry and needed to know, without speaking, where the rest of the troop would be when the leopards came. The J.P. Morgan Healthcare Conference, on the second week of January, every January, Westin St. Francis, San Francisco, is what happened when that ancient coordination instinct was handed an industry too vast and too abstract to organize by any other means. Something deep drives us to gather here, at this time, at this date.

To preempt the obvious questions: I don’t know why this particular location or time or demographic were chosen. I especially don’t know why J.P. Morgan of all groups was chosen to organize the whole thing. All of this simply is.

If you find any of this hard to believe, observe that the whole event is, structurally, a religious pilgrimage, and has all the quirks you may expect of a religious pilgrimage. And I don’t mean that as a metaphor, I mean it literally, in every dimension except the one where someone official admits it, and J.P. Morgan certainly won’t.

Consider the elements. A specific place, a specific time, an annual cycle, a journey undertaken by the faithful, the presence of hierarchy and exclusion, the production of meaning through ritual rather than content. The hajj requires Muslims to circle the Kaaba seven times. The J.P. Morgan Healthcare Conference requires devotees of the biopharmaceutical industry to slither into San Francisco for five days, nearly all of them—in my opinion, all of them—never actually entering the conference itself, but instead orbiting it, circumambulating it, taking coffee chats in its gravitational field. The Kaaba is a cube containing, according to tradition, nothing, an empty room, the holiest empty room in the world. The Westin St. Francis is also, roughly, a cube. I am not saying these are the same thing. I am saying that we have, as a species, a deep and unexamined relationship to cubes.

This is my strongest theory so far. That the J.P. Morgan Healthcare Conference isn’t exactly real or unreal, but a mass-coordination social contract that has been unconsciously signed by everyone in this industry, transcending the need for an underlying referent.

My skeptical readers will protest at this, and they would be correct to do so. The story I have written out is clean, but it cannot be fully correct. Thomas Schelling was not so naive as to believe that Schelling points spontaneously generate out of thin air; there is always a reason, a specific, grounded reason, that their concepts become the low-energy metaphysical basins that they are. Grand Central Station is special because of the cultural gravitas it has accumulated through popular media. Noon is special because that is when the sun reaches its zenith. The Kaaba was worshipped because it was not some arbitrary cube; the cube itself was special: it contained the Black Stone, set into the eastern corner, a relic that predates Islam itself, that some traditions claim fell from heaven.

And there are signs, if you know where to look, that the underlying referent for the Westin St. Francis’s status as a gathering place is physical. Consider the heat. It is January in San Francisco, usually brisk, yet the interior of the Westin St. Francis maintains a distinct, humid microclimate. Consider the low-frequency vibration in the lobby that ripples the surface of water glasses, but doesn’t seem to register on local, public seismographs. There is something about the building itself that feels distinctly alien. But, upon standing outside the building for long enough, you’ll have the nagging sensation that it is not something about the hotel that feels off, but rather, what lies within, underneath, and around the hotel.

There’s no easy way to sugarcoat this, so I’ll just come out and say it: it is possible that the entirety of California is built on top of one immensely large organism, and the particular spot in which the Westin St. Francis Hotel stands—335 Powell Street, San Francisco, 94102—is located directly above its beating heart. And that this is the primary organizing focal point for both the location and entire reason for the J.P. Morgan Healthcare Conference.

I believe that the hotel maintains dozens of meter-thick polyvinyl chloride plastic tubes that have been threaded down through the basement, through the bedrock, through geological strata, and into the cardiovascular system of something that has been lying beneath the Pacific coast since before the Pacific coast existed. That the hotel is a singular, thirty-two story central line. That, during the week of the conference, hundreds of gallons of drugs flow through these tubes, into the pulsating mass of the being, pouring down arteries the size of canyons across California. The dosing takes five days; hence the length of the conference.

And I do not believe that the drugs being administered here are simply sedatives. They are, in fact, the opposite of sedatives. The drugs are keeping the thing beneath California alive. There is something wrong with the creature, and a select group of attendees at the J.P. Morgan Healthcare Conference have become its primary caretakers.

Why? The answer is obvious: there is nothing good that can come from having an organic creature that spans hundreds of thousands of square miles suddenly die, especially if that same creature’s mass makes up a substantial portion of the fifth-largest economy on the planet, larger than India, larger than the United Kingdom, larger than most countries that we think of as significant. Maybe letting the nation slide off into the sea was an option at one point, but not anymore. California produces more than half of the fruits, vegetables, and nuts grown in the United States. California produces the majority of the world’s entertainment. California produces the technology that has restructured human communication. Nobody can afford to let the whole thing collapse.

So, perhaps it was decided that California must survive, at least for as long as possible. Hence Amgen. Hence Genentech. Hence the entire biotech revolution, which we are taught to understand as a triumph of science and entrepreneurship, a story about venture capital and recombinant DNA and the genius of the California business climate. The story is not false, but incomplete. The reason for the revolution was, above all else, because the creature needed medicine, and the old methods of making medicine were no longer adequate, and someone decided that the only way to save the patient was to create an entire industry dedicated to its care.

Why is drug development so expensive? Because the real R&D costs are for the primary patient, the being underneath California, and human applications are an afterthought, a way of recouping investment. Why do so many clinical trials fail? For the same reason; the drugs are not meant for our species. Why is the industry concentrated in San Francisco, San Diego, Boston? Because these are monitoring stations, places where other intravenous lines have been drilled into other organs, other places where the creature surfaces close enough to reach.

Finally, consider the hotel itself. The Westin St. Francis was built in 1904, and, throughout its entire existence, it has never, ever, even once, closed or stopped operating. The 1906 earthquake leveled most of San Francisco, and the Westin St. Francis did not fall. It was damaged, yes, but it did not fall. The 1989 Loma Prieta earthquake killed sixty-three people and collapsed a section of the Bay Bridge. Still, the Westin St. Francis did not fall. It cannot fall, because if it falls, the central line is severed, and if the central line is severed, the creature dies, and if the creature dies, we lose California, and if we lose California, our civilization loses everything that California has been quietly holding together. And so the Westin St. Francis has hosted every single J.P. Morgan Healthcare Conference since 1983, has never missed one, has never even come close to missing one, and will not miss the next one, or the one after that, or any of the ones that follow.

If you think about it, this all makes a lot of sense. It may also seem very unlikely, but unlikely things have been known to happen throughout history. Mundus Subterraneus had a section on the “seeds of metals,” a theory that gold and silver grew underground like plants, sprouting from mineral seeds in the moist, oxygen-poor darkness. This was wrong, but the intuition beneath it was not entirely misguided. We now understand that the Earth’s mantle is a kind of eternal engine of astronomical size, cycling matter through subduction zones and volcanic systems, creating and destroying crust. Athanasius was wrong about the mechanism, but right about the structure. The earth is not solid. It is everywhere gaping, hollowed with empty rooms, and it is alive.


We can’t have nice things… because of AI scrapers


In the past few months the MetaBrainz team has been fighting a battle against unscrupulous AI companies ignoring common courtesies (such as robots.txt) and scraping the Internet in order to build up their AI models. Rather than downloading our dataset in one complete download, they insist on loading all of MusicBrainz one page at a time. This of course would take hundreds of years to complete and is utterly pointless. In doing so, they are overloading our servers and preventing legitimate users from accessing our site.

Now the AI scrapers have found ListenBrainz and are hitting a number of our API endpoints for their nefarious data gathering purposes. In order to protect our services from becoming overloaded, we’ve made the following changes:

  • The /metadata/lookup API endpoints (GET and POST versions) now require the caller to send an Authorization token in order for this endpoint to work.
  • The ListenBrainz Labs API endpoints for mbid-mapping, mbid-mapping-release and mbid-mapping-explain have been removed. Those were always intended for debugging purposes and will also soon be replaced with new endpoints for our upcoming improved mapper.
  • LB Radio will now require users to be logged in to use it (and API endpoint users will need to send the Authorization header). The error message for logged-out users is a bit clunky at the moment; we’ll fix this once we’ve finished the work for this year’s Year in Music.
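
For API users, the practical effect of the lookup change is that requests now need an Authorization header. Below is a minimal sketch using Python’s standard library, assuming the usual ListenBrainz “Token <user token>” header scheme and API root; the token value and query parameters are placeholders, so check the API documentation for the exact format:

```python
# Hypothetical sketch of calling the now token-protected lookup endpoint.
# The token value and the query parameters below are placeholders.
import urllib.request

LB_TOKEN = "your-user-token"  # placeholder; substitute your own API token

req = urllib.request.Request(
    "https://api.listenbrainz.org/1/metadata/lookup"
    "?artist_name=SomeArtist&recording_name=SomeSong",
    headers={"Authorization": f"Token {LB_TOKEN}"},
)

# With a valid token this returns JSON metadata; without one it is rejected:
# with urllib.request.urlopen(req) as resp:
#     data = resp.read()
print(req.get_header("Authorization"))
```

Callers that were relying on anonymous access will need to create an account and send a token like this from now on.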

Sorry for these hassles and no-notice changes, but they were required in order to keep our services functioning at an acceptable level.
