
choosing learning over autopilot


I use ai coding tools a lot. I love them. I’m all-in on ai tools. They unlock doors that let me do things that I cannot do with my human hands alone.

But they also scare me.

As I see it, they offer me two paths:

✨ The glittering vision ✨

The glittering vision is they let me build systems in the way that the version of me who is a better engineer would build them. Experimentation, iteration and communication have become cheaper. This enables me to learn by doing at a speed that was prohibitive before. I can make better decisions about what and how to build because I can try out a version and learn where some of the sharp edges are in practice instead of guessing. I can also quickly loop in others for feedback and context. All of this leads to building a better version of the system than I would have otherwise.

ā˜ ļø The cursed vision ā˜ ļø

The cursed vision is I am lazy, and I build systems of ai slop that I do not understand. There’s a lot of ink spilled about the perils and pains of ai slop, especially working on a team that has to maintain the resulting code.

What scares me most is an existential fear that I won’t learn anything if I work in the “lazy” way. There is no substitute for experiential learning, and it accumulates over time. There are things that are very hard for me to do today, and I will feel sad if all of those things feel equally hard in a year, two years, five years. I am motivated by an emotional response to problems I find interesting, and I like problems that have to do with computers. I am afraid of drowning that desire by substituting semi-conscious drifting on autopilot for engaging with a problem.

And part of why this is scary to me is that even if my goal is to be principled, to learn, to engage, to satisfy my curiosity with understanding, it is really easy for me to coast with an llm and not notice. There are times when I am tired and I am distracted and I have a thing that I need to get done at work. I just want it done, because then I have another thing I need to do. There are a lot of reasons to be lazy.

So I think the crux here is about experiential learning:

  • ai tools make it so much easier to learn by doing, which can lead to much better results
  • but it’s also possible to use them to take a shortcut and get away without learning
    • I deeply believe that the shortcut is a trap
    • I also believe it is harder than it seems to notice and be honest about when I’m doing this

And so, I’ve been thinking about guidelines & guardrails– how do I approach my work to escape the curse, such that llms are a tool for understanding, rather than a replacement for thinking?

Here’s my current working model:

  1. use ai-tooling to learn, in loops
  2. ai-generated code is cheap and not precious; throw it away and start over several times
  3. be very opinionated about how to break down a problem
  4. “textbook” commits & PRs
  5. write my final docs / pr descriptions / comments with my human hands

The rest of the blog post is a deeper look at these topics, in a way that I hope is pretty concrete and grounded.

but first, let me make this more concrete

Things I now get to care less about:

  • the mechanics of figuring out how things are hooked together
  • the mechanics of translating pseudocode into code
  • figuring out what the actual code looks like

The times I’m using ai tools to disengage from a problem and go fast are the times I’m only doing the things in this first category and getting away with skipping the things in the other two.

Things I cared about before and should still care about:

  • deciding which libraries are used
  • how the code is organized: files & function signatures
  • leaving comments that explain why something is set up in a way if there’s complication behind it
  • leaving docs explaining how things work
  • understanding when I need to learn something more thoroughly to get unblocked

Things I now get to care about that were expensive before:

  • more deeply understanding how a system works
  • adding better observability like nicely structured outputs for debugging
  • running more experiments

The times when I’m using ai tools to enhance my learning and understanding are the times I’m doing the things in the latter two categories.

I will caveat that the appropriate amount of care and effort in an implementation depends, of course, on the problem and context. More is not always better. Moving slow can carry engineering risk, and I know from experience that it’s possible for a team to mistake micromanagement for code quality.

I like to work on problems somewhere in the middle of the “how correct does this have to be” spectrum, and so that’s where my intuition is tuned. I don’t need things clean down to the bits, but how the system is built matters, so care is worth the investment.

workflow

Here is a workflow I’ve been finding useful for medium-sized problems.

Get into the problem: go fast, be messy, learn and get oriented

  1. Research & document what I want to build
    1. I collab with the ai to dump background context and plans into a markdown file
      1. The doc at this stage can be rough
      2. A format that I’ve been using:
        1. What is the problem we’re solving?
        2. How does it work today?
        3. How will this change be implemented?
  2. Build a prototype
    1. The prototype can be ai slop
    2. Bias towards seeing things run & interacting with them
  3. Throw everything away. Start fresh, clean slate
    1. It’s much faster to build it correctly than to fix it

Formulate a solution: figure out what the correct structure should be

  1. Research & document based on what I know from the prototype
    1. Read code, docs and readmes with my human eyes
    2. Think carefully about the requirements & what causes complication in the code. Are those hard or flexible (or imagined!) requirements?
  2. Design what I want to build, again
  3. Now would be a good time to communicate externally if that’s appropriate for the scope. Write a one-pager for anyone who might want to provide input.
  4. Given any feedback, design the solution one more time, and this time polish it. Think carefully & question everything. Now is the time to use my brain.
    1. Important: what are the APIs? How is the code organized?
    2. Important: what libraries already exist that we can use?
    3. Important: what is the iterative implementation order so that the code is modular & easy to review?
  5. Implement a skeleton, see how the code smells and adjust
  6. Use this to compile a final draft of how to implement the feature iteratively
  7. Commit the skeleton + the final implementation document

Implement the solution: generate the final code

  1. Cut a new branch & have the ai tooling implement all the code based on the final spec
  2. If it’s not a lot of code or it’s very modular, review it and commit each logical piece into its own commit / PR
  3. If it is a lot of code, review it, and commit it as a reference implementation
    1. Then, roll back to the skeleton branch, and cut a fresh branch for the first logical piece that will be its own commit / PR
    2. Have the ai implement just that part, possibly guided by any ideas from seeing the full implementation
  4. For each commit, I will review the code & I’ll have the ai review the code
  5. I must write my own commit messages with descriptive trailers

One of the glittering things about ai tooling is that it’s faster than building systems by hand. I maintain that even with these added layers of learning before implementing, it’s still faster than what I could do before while giving me a richer understanding and a better result.

Now let me briefly break out the guidelines I mentioned in the intro and how they relate to this workflow.

learning in loops

There are a lot of ways to learn what to build and how to build it, including:

  • Understanding the system and its integrations with surrounding systems
  • Understanding the problem, the requirements & existing work in the space
  • Understanding relationships between components, intended use-cases and control flows
  • Understanding implementation details, including tradeoffs and what an MVP looks like
  • Understanding how to exercise, observe and interact with the implementation

I’ll understand each area in a different amount of detail at different times. I’m thinking of it as learning “in loops” because I find that ai tooling lets me quickly switch between breadth and depth in an iterative way. I find that I “understand” the problem and the solution in increasing depth and detail several times before I build it, and that leads to a much better output.

I think there are two pitfalls in these learning loops: one is feeling like I’m learning when I’m actually only skimming, and the other is getting stuck at the limits of what the ai summaries can provide. One intuition I’ve been trying to build is when to go read the original sources (like code, docs, readmes) myself. I have two recent experiences top-of-mind informing this:

In the first experience, a coworker and I were debugging a mysterious issue related to some file-related resource exhaustion. We both used ai tools to figure out which cli tools we could use to investigate and to build a mental model of how the resource in question was supposed to work. I got stuck after getting output that seemed contradictory, and didn’t fit my mental model. My coworker got to a similar spot and then took a step out of the ai tooling to go read the docs about the resource with their human eyes. That led them to understand that the ai summary wasn’t accurate: it had missed some details that explained the confusing situation we were seeing.

This example really sticks out in my memory. I thought I was being principled rather than lazy by building my mental model of what was supposed to be happening, but I had gotten mired in building that mental model second-hand instead of reading the docs myself.

In the second experience, I was working on a problem related to integrating with a system that had a documented interface. I had the ai read & summarize the interface and then got into the problem in a way similar to the first step of the workflow I described above. I was using that to formulate an idea of what the solution should be. Then I paused to repeat the research loop but with more care: I read the interface with my human eyes– and found the ai summary was wrong! It wasn’t a big deal and I could shift my plans, but I was glad to have learned to pause and take care in validating the details of my mental model.

ai-generated code is throw-away code

I had a coworker describe working with ai coding tools as like working on a sculpture. When they asked it to reposition the arm, it would accidentally bump the nose out of alignment.

The way I’m thinking about it now, it’s more like: instead of building a sculpture, I’m asking it to build me a series of sculptures.

The first one is rough-hewn and wonky, but lets me understand the shape of what I’m doing.

The next one or two are just armatures.

The next one might be a mostly functional sculpture on the latest armature; this lets me understand the shape of what I’m doing with much higher precision.

And then finally, I’ll ask for a sculpture, using the vetted armature, except we’ll build it one part at a time. When we’re done with a part, we’ll seal it so we can’t bump it out of alignment.

A year ago, I wasn’t sure if it was better to try to fix an early draft of ai generated code to be better, or to throw it out. Now I feel strongly that ai-generated code is not precious, and not worth the effort to fix it. If you know what the code needs to do and have that clearly documented in detail, it takes no time at all for the ai to flesh out the code. So throw away all the earlier versions, and focus on getting the armature correct.

Making things is all about processes and doing the right thing at the right time. If you throw a bowl and that bowl is off-center, it is a nightmare to try to make it look centered with trimming. If you want a centered bowl then you must throw it on-center. Same here, if you want code that is modular and well structured, the time to do that is before you have the ai implement the logic.

“textbook” commits and PRs

It’s much easier to review code that has been written in a way where a feature is broken up into an iteration of commits and PRs. This was true before ai tooling, and is true now.

The difference is that writing code with my hands was slow and expensive. Sometimes I’d be in the flow and I’d implement things in a way that was hard to untangle after the fact.

I believe that especially if I work in the way I’ve been describing here, ai code is cheap. This makes it much easier/cheaper for me to break my work apart into pieces that are easy to commit and review.

My other guilty hesitation before ai tooling was that I never liked git merge conflicts and rebasing branches. It was confusing and had the scary potential of losing work. Now, ai tooling is very good at rebasing branches, so it’s much less scary and pretty much no effort.

I also think that small, clean PRs are an external forcing function to working in a way that builds my understanding rather than lets me take shortcuts: if I generate 2.5k lines of ai slop, it will be a nightmare to break that into PRs.

i am very opinionated about how to break down a problem

I’m very opinionated in breaking down problems in two ways:

  • how to structure the implementation (files, functions, libraries)
  • how to implement iteratively to make clean commits and PRs

The only way to achieve small, modular, reviewable PRs is to be very opinionated about what to implement and in what order.

Unless you’re writing a literal prototype that will be thrown away (and you’re confident it will actually be thrown away), the most expensive part about building a system is the engineering effort that will go into maintaining it. It is, therefore, very worthwhile to be opinionated about how to structure the code. I find that the ai can do an okay job at throwing code out there, but I can come up with a much better division and structure by using my human brain.

A time I got burned by not thinking about libraries & how to break down a problem was when I was trying to fix noisy errors due to a client chatting with a system that had some network blips. I asked an ai model to add rate limiting to an existing http client, which it did by implementing exponential backoff itself. This isn’t a very good solution; surely we don’t need to do that ourselves. I didn’t think this one through, and was glad a coworker with their brain on caught it in code review.
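
To make that concrete, here is a minimal sketch of the kind of fix the reviewer was pointing toward: lean on an existing, well-tested retry mechanism instead of hand-rolling backoff. The language, client and endpoint are assumptions for illustration (the post doesn’t say what the original client looked like); this sketch uses Python’s requests with urllib3’s built-in Retry, which already implements exponential backoff.

    # Sketch only: the endpoint and retry settings are illustrative, not from
    # the incident described above. The point is to reuse urllib3's Retry
    # rather than re-implementing exponential backoff by hand.
    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    session = requests.Session()
    retries = Retry(
        total=5,                                     # give up after 5 attempts
        backoff_factor=0.5,                          # roughly 0.5s, 1s, 2s, ... between retries
        status_forcelist=[429, 500, 502, 503, 504],  # retry on throttling and transient server errors
    )
    session.mount("https://", HTTPAdapter(max_retries=retries))

    # Hypothetical call against a flaky upstream service.
    response = session.get("https://example.com/api/health", timeout=10)
    response.raise_for_status()

The design point stands regardless of language: most http stacks already ship a vetted retry/backoff primitive, and a reviewer with their brain on should catch a hand-rolled one.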

i write docs & pr descriptions with my human hands

Writing can serve a few distinct purposes: one is communication, and another, distinct from that, is as a method to facilitate thinking. The act of writing forces me to organize and refine my thoughts.

This is a clear smell-test for me: I must be able to write documents that explain how and why something is implemented. If I can’t, then that’s a clear sign that I don’t actually understand it; I have skipped writing as a method of thinking.

On the communication side of things, I find that the docs or READMEs that ai tooling generates often capture things that aren’t useful. I often don’t agree with their intuition; I find that if I take the effort to use my brain I produce documents that I believe are more relevant.

This isn’t to say that I don’t use ai tooling to write documents: I’ll often have ai dump information into markdown files as I’m working. I’ll often have ai tooling nicely format things like diagrams or tables. Sometimes I’ll have ai tooling take a pass at a document. I’ll often hand a document to ai tooling and ask it to validate whether everything I wrote is accurate based on the implementation.

But I do believe that if I hold myself to the standard that I write docs, commit messages, etc with my hands, I both produce higher quality documentation and force myself to be honest about understanding what I’m describing.

Conclusion

In conclusion, I find that ai coding tools give me a glittering path to understand better by doing, and to use that understanding to build better systems. I also, however, think there is a curse of using these systems in a way that skips the “build the understanding” part, and that pitfall is subtler than it may seem.

I care deeply about leveraging these tools for learning and engaging, and I think it will be important in the long run. I’ve outlined the ways I’m thinking about how best to do this and avoid the curse:

  1. use ai-tooling to learn, in loops
  2. ai-generated code is cheap and not precious; throw it away and start over several times
  3. be very opinionated about how to break down a problem
  4. “textbook” commits & PRs
  5. write my final docs / pr descriptions / comments with my human hands

The truth behind the 2026 J.P. Morgan Healthcare Conference


Note: I am co-hosting an event in SF on Friday, Jan 16th.


In 1654, a Jesuit polymath named Athanasius Kircher published Mundus Subterraneus, a comprehensive geography of the Earth’s interior. It had maps and illustrations and rivers of fire and vast subterranean oceans and air channels connecting every volcano on the planet. He wrote that “the whole Earth is not solid but everywhere gaping, and hollowed with empty rooms and spaces, and hidden burrows.” Alongside comments like this, Athanasius identified the legendary lost island of Atlantis, pondered where one could find the remains of giants, and detailed the kinds of animals that lived in this lower world, including dragons. The book was based entirely on secondhand accounts, like travelers’ tales, miners’ reports, classical texts, so it was as comprehensive as it could’ve possibly been.

But Athanasius had never been underground and neither had anyone else, not really, not in a way that mattered.

Today, I am in San Francisco, the site of the 2026 J.P. Morgan Healthcare Conference, and it feels a lot like Mundus Subterraneus.

There is ostensibly plenty of evidence to believe that the conference exists, that it actually occurs between January 12, 2026 and January 16, 2026 at the Westin St. Francis Hotel, 335 Powell Street, San Francisco, and that it has done so for the last forty-four years, just like everyone has told you. There is a website for it, there are articles about it, there are dozens of AI-generated posts on Linkedin about how excited people were about it. But I have never met anyone who has actually been inside the conference.

I have never been approached by one, or seated next to one, or introduced to one. They do not appear in my life. They do not appear in anyone’s life that I know. I have put my boots on the ground to rectify this, and asked around, first casually and then less casually, “Do you know anyone who has attended the JPM conference?”, and then they nod, and then I refine the question to be, “No, no, like, someone who has actually been in the physical conference space”, then they look at me like I’ve asked if they know anyone who’s been to the moon. They know it happens. They assume someone goes. Not them, because, just like me, ordinary people like them do not go to the moon, but rather exist around the moon, having coffee chats and organizing little parties around it, all while trusting that the moon is being attended to.

The conference has six focuses: AI in Drug Discovery and Development, AI in Diagnostics, AI for Operational Efficiency, AI in Remote and Virtual Healthcare, AI and Regulatory Compliance, and AI Ethics and Data Privacy. There is also a seventh theme over ‘Keynote Discussions’, the three of which are The Future of AI in Precision Medicine, Ethical AI in Healthcare, and Investing in AI for Healthcare. Somehow, every single thematic concept at this conference has converged onto artificial intelligence as the only thing worth seriously discussing.

Isn’t this strange? Surely, you must feel the same thing as me, the inescapable suspicion that the whole show is being put on by an unconscious Chinese Room, its only job to pass over semi-legible symbols over to us with no regards as to what they actually mean. In fact, this pattern is consistent across not only how the conference communicates itself, but also how biopharmaceutical news outlets discuss it.

Each year, Endpoints News and STAT and BioCentury and FiercePharma all publish extensive coverage of the J.P. Morgan Healthcare Conference. I have read the articles they have put out, and none of it feels like it was written by someone who actually was at the event. There is no emotional energy, no personal anecdotes, all of it has been removed, shredded into one homogeneous, smoothie-like texture. The coverage contains phrases like “pipeline updates” and “strategic priorities” and “catalysts expected in the second half.” If the writers of these articles ever approach a human-like tenor, it is in reference to the conference’s “tone”. The tone is “cautiously optimistic.” The tone is “more subdued than expected.” The tone is “mixed.” What does this mean? What is a mixed tone? What is a cautiously optimistic tone? These are not descriptions of a place. They are more accurately descriptions of a sentiment, abstracted from any physical reality, hovering somewhere above the conference like a weather system.

I could write this coverage. I could write it from my horrible apartment in New York City, without attending anything at all. I could say: “The tone at this year’s J.P. Morgan Healthcare Conference was cautiously optimistic, with executives expressing measured enthusiasm about near-term catalysts while acknowledging macroeconomic headwinds.” I made that up in fifteen seconds. Does it sound fake? It shouldn’t, because it sounds exactly like the coverage of a supposedly real thing that has happened every year for the last forty-four years.

Speaking of the astral body I mentioned earlier, there is an interesting historical parallel to draw there. In 1835, the New York Sun published a series of articles claiming that the astronomer Sir John Herschel had discovered life on the moon. Bat-winged humanoids, unicorns, temples made of sentient sapphire, that sort of stuff. The articles were detailed, describing not only these creatures’ appearance, but also their social behaviors and mating practices. All of these cited Herschel’s observations through a powerful new telescope. The series was a sensation. It was also, obviously, a hoax, the Great Moon Hoax as it came to be known. Importantly, the hoax worked not because the details were plausible, but because they had the energy of genuine reporting: Herschel was a real astronomer, and telescopes were real, and the moon was real, so how could any combination that involved these three be fake?

To clarify: I am not saying the J.P. Morgan Healthcare Conference is a hoax.

What I am saying is that I, nor anybody, can tell the difference between the conference coverage and a very well-executed hoax. Consider that the Great Moon Hoax was walking a very fine tightrope between giving the appearance of seriousness, while also not giving away too many details that’d let the cat out of the bag. Here, the conference rhymes.

For example: photographs. You would think there would be photographs. The (claimed) conference attendees number in the thousands, many of them with smartphones, all of them presumably capable of pointing a camera at a thing and pressing a button. But the photographs are strange, walking that exact snickering line that the New York Sun walked. They are mostly photographs of the outside of the Westin St. Francis, or they are photographs of people standing in front of step-and-repeat banners, or they are photographs of the schedule, displayed on a screen, as if to prove that the schedule exists. But photographs of the inside with the panels, audience, the keynotes in progress; these are rare. And when I do find them, they are shot from angles that reveal nothing, that could be anywhere, that could be a Marriott ballroom in Cleveland.

Is this a conspiracy theory? You can call it that, but I have a very professional online presence, so I personally wouldn’t. In fact, I wouldn’t even say that the J.P. Morgan Healthcare Conference is not real, but rather that it is real, but not actually materially real.

To explain what I mean, we can rely on economist Thomas Schelling to help us out. Sixty-six years ago, Schelling proposed a thought experiment: if you had to meet a stranger in New York City on a specific day, with no way to communicate beforehand, where would you go? The answer, for most people, is Grand Central Station, at noon. Not because Grand Central Station is special. Not because noon is special. But because everyone knows that everyone else knows that Grand Central Station at noon is the obvious choice, and this mutual knowledge of mutual knowledge is enough to spontaneously produce coordination out of nothing. This, Grand Central Station and places just like it, are what’s known as a Schelling point.

Schelling points appear when they are needed, burnt into our genetic code, Pleistocene subroutines running on repeat, left over from when we were small and furry and needed to know, without speaking, where the rest of the troop would be when the leopards came. The J.P. Morgan Healthcare Conference, on the second week of January, every January, Westin St. Francis, San Francisco, is what happened when that ancient coordination instinct was handed an industry too vast and too abstract to organize by any other means. Something deep drives us to gather here, at this time, at this date.

To preempt the obvious questions: I don’t know why this particular location or time or demographic were chosen. I especially don’t know why J.P. Morgan of all groups was chosen to organize the whole thing. All of this simply is.

If you find any of this hard to believe, observe that the whole event is, structurally, a religious pilgrimage, and has all the quirks you may expect of a religious pilgrimage. And I don’t mean that as a metaphor, I mean it literally, in every dimension except the one where someone official admits it, and J.P. Morgan certainly won’t.

Consider the elements. A specific place, a specific time, an annual cycle, a journey undertaken by the faithful, the presence of hierarchy and exclusion, the production of meaning through ritual rather than content. The hajj requires Muslims to circle the Kaaba seven times. The J.P. Morgan Healthcare Conference requires devotees of the biopharmaceutical industry to slither into San Francisco for five days, nearly all of them—in my opinion, all of them—never actually entering the conference itself, but instead orbiting it, circumambulating it, taking coffee chats in its gravitational field. The Kaaba is a cube containing, according to tradition, nothing, an empty room, the holiest empty room in the world. The Westin St. Francis is also, roughly, a cube. I am not saying these are the same thing. I am saying that we have, as a species, a deep and unexamined relationship to cubes.

This is my strongest theory so far. That the J.P. Morgan Healthcare conference isn’t exactly real or unreal, but a mass-coordination social contract that has been unconsciously signed by everyone in this industry, transcending the need for an underlying referent.

My skeptical readers will protest at this, and they would be correct to do so. The story I have written out is clean, but it cannot be fully correct. Thomas Schelling was not so naive as to believe that Schelling points spontaneously generate out of thin air; there is always a reason, a specific, grounded reason, that their concepts become the low-energy metaphysical basins that they are. Grand Central Station is special because of the cultural gravitas it has accumulated through popular media. Noon is special because that is when the sun reaches its zenith. The Kaaba was worshipped because it was not some arbitrary cube; the cube itself was special in that it contained The Black Stone, set into the eastern corner, a relic that predates Islam itself, that some traditions claim fell from heaven.

And there are signs, if you know where to look, that the underlying referent for the Westin St. Francis’s status as a gathering area is physical. Consider the heat. It is January in San Francisco, usually brisk, yet the interior of the Westin St. Francis maintains a distinct, humid microclimate. Consider the low-frequency vibration in the lobby that ripples the surface of water glasses, but doesn’t seem to register on local, public seismographs. There is something about the building itself that feels distinctly alien. But, upon standing outside the building for long enough, you’ll have the nagging sensation that it is not something about the hotel that feels off, but rather, what lies within, underneath, and around the hotel.

There’s no easy way to sugarcoat this, so I’ll just come out and say it: it is possible that the entirety of California is built on top of one immensely large organism, and the particular spot in which the Westin St. Francis Hotel stands—335 Powell Street, San Francisco, 94102—is located directly above its beating heart. And that this is the primary organizing focal point for both the location and entire reason for the J.P. Morgan Healthcare Conference.

I believe that the hotel maintains dozens of meter-thick polyvinyl chloride plastic tubes that have been threaded down through the basement, through the bedrock, through geological strata, and into the cardiovascular system of something that has been lying beneath the Pacific coast since before the Pacific coast existed. That the hotel is a singular, thirty-two story central line. That, during the week of the conference, hundreds of gallons of drugs flow through these tubes, into the pulsating mass of the being, pouring down arteries the size of canyons across California. The dosing takes five days; hence the length of the conference.

And I do not believe that the drugs being administered here are simply sedatives. They are, in fact, the opposite of sedatives. The drugs are keeping the thing beneath California alive. There is something wrong with the creature, and a select group of attendees at the J.P. Morgan Healthcare Conference have become its primary caretakers.

Why? The answer is obvious: there is nothing good that can come from having an organic creature that spans hundreds of thousands of square miles suddenly die, especially if that same creature’s mass makes up a substantial portion of the fifth-largest economy on the planet, larger than India, larger than the United Kingdom, larger than most countries that we think of as significant. Maybe letting the nation slide off into the sea was an option at one point, but not anymore. California produces more than half of the fruits, vegetables, and nuts grown in the United States. California produces the majority of the world’s entertainment. California produces the technology that has restructured human communication. Nobody can afford to let the whole thing collapse.

So, perhaps it was decided that California must survive, at least for as long as possible. Hence Amgen. Hence Genentech. Hence the entire biotech revolution, which we are taught to understand as a triumph of science and entrepreneurship, a story about venture capital and recombinant DNA and the genius of the California business climate. The story is not false, but incomplete. The reason for the revolution was, above all else, because the creature needed medicine, and the old methods of making medicine were no longer adequate, and someone decided that the only way to save the patient was to create an entire industry dedicated to its care.

Why is drug development so expensive? Because the real R&D costs are for the primary patient, the being underneath California, and human applications are an afterthought, a way of recouping investment. Why do so many clinical trials fail? For the same reason; the drugs are not meant for our species. Why is the industry concentrated in San Francisco, San Diego, Boston? Because these are monitoring stations, places where other intravenous lines have been drilled into other organs, other places where the creature surfaces close enough to reach.

Finally, consider the hotel itself. The Westin St. Francis was built in 1904, and, throughout its entire existence, it has never, ever, even once, closed or stopped operating. The 1906 earthquake leveled most of San Francisco, and the Westin St. Francis did not fall. It was damaged, yes, but it did not fall. The 1989 Loma Prieta earthquake killed sixty-three people and collapsed a section of the Bay Bridge. Still, the Westin St. Francis did not fall. It cannot fall, because if it falls, the central line is severed, and if the central line is severed, the creature dies, and if the creature dies, we lose California, and if we lose California, our civilization loses everything that California has been quietly holding together. And so the Westin St. Francis has hosted every single J.P. Morgan Healthcare Conference since 1983, has never missed one, has never even come close to missing one, and will not miss the next one, or the one after that, or any of the ones that follow.

If you think about it, this all makes a lot of sense. It may also seem very unlikely, but unlikely things have been known to happen throughout history. Mundus Subterraneus had a section on the “seeds of metals,” a theory that gold and silver grew underground like plants, sprouting from mineral seeds in the moist, oxygen-poor darkness. This was wrong, but the intuition beneath it was not entirely misguided. We now understand that the Earth’s mantle is a kind of eternal engine of astronomical size, cycling matter through subduction zones and volcanic systems, creating and destroying crust. Athanasius was wrong about the mechanism, but right about the structure. The earth is not solid. It is everywhere gaping, hollowed with empty rooms, and it is alive.


We can’t have nice things… because of AI scrapers


In the past few months the MetaBrainz team has been fighting a battle against unscrupulous AI companies ignoring common courtesies (such as robots.txt) and scraping the Internet in order to build up their AI models. Rather than downloading our dataset in one complete download, they insist on loading all of MusicBrainz one page at a time. This of course would take hundreds of years to complete and is utterly pointless. In doing so, they are overloading our servers and preventing legitimate users from accessing our site.

Now the AI scrapers have found ListenBrainz and are hitting a number of our API endpoints for their nefarious data gathering purposes. In order to protect our services from becoming overloaded, we’ve made the following changes:

  • The /metadata/lookup API endpoints (GET and POST versions) now require the caller to send an Authorization token in order for these endpoints to work (a sketch of an authenticated call follows this list).
  • The ListenBrainz Labs API endpoints for mbid-mapping, mbid-mapping-release and mbid-mapping-explain have been removed. Those were always intended for debugging purposes and will also soon be replaced with new endpoints for our upcoming improved mapper.
  • LB Radio will now require users to be logged in to use it (and API endpoint users will need to send the Authorization header). The error message for logged-in users is a bit clunky at the moment; we’ll fix this once we’ve finished the work for this year’s Year in Music.
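
For API consumers, the practical upshot of the first change is just sending the token. Below is a rough sketch of an authenticated lookup call in Python, assuming the usual ListenBrainz “Token <user token>” Authorization scheme; the parameter names and values are illustrative, so check the ListenBrainz API documentation for the exact request format.

    # Sketch only: query parameters are illustrative; consult the ListenBrainz
    # API docs for the exact request format for /1/metadata/lookup/.
    import requests

    LB_TOKEN = "your-listenbrainz-user-token"  # from your ListenBrainz profile settings

    response = requests.get(
        "https://api.listenbrainz.org/1/metadata/lookup/",
        params={"artist_name": "Portishead", "recording_name": "Glory Box"},
        headers={"Authorization": f"Token {LB_TOKEN}"},  # now required for this endpoint
    )
    response.raise_for_status()
    print(response.json())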

Sorry for these hassles and no-notice changes, but they were required in order to keep our services functioning at an acceptable level.


As Schools Embrace A.I. Tools, Skeptics Raise Concerns (Natasha Singer)


A New York Times journalist, Natasha Singer covers technology access and use. This article appeared January 2, 2026.

In early November, Microsoft said it would supply artificial intelligence tools and training to more than 200,000 students and educators in the United Arab Emirates.

Days later, a financial services company in Kazakhstan announced an agreement with OpenAI to provide ChatGPT Edu, a service for schools and universities, for 165,000 educators in Kazakhstan.

Last month, xAI, Elon Musk’s artificial intelligence company, announced an even bigger project with El Salvador: developing an A.I. tutoring system, using the company’s Grok chatbot, for more than a million students in thousands of schools there.

Fueled partly by American tech companies, governments around the globe are racing to deploy generative A.I. systems and training in schools and universities.

Some U.S. tech leaders say A.I. chatbots — which can generate humanlike emails, create class quizzes, analyze data and produce computer code — can be a boon for learning. The tools, they argue, can save teachers time, customize student learning and help prepare young people for an “A.I.-driven” economy.

But the rapid spread of the new A.I. products could also pose risks to young people’s development and well-being, some children’s and health groups warn.

A recent study from Microsoft and Carnegie Mellon University found that popular A.I. chatbots may diminish critical thinking. A.I. bots can produce authoritative-sounding errors and misinformation, and some teachers are grappling with widespread A.I.-assisted student cheating.

Silicon Valley for years has pushed tech tools like laptops and learning apps into classrooms, with promises of improving education access and revolutionizing learning.

Still, a global effort to expand school computer access — a program known as “One Laptop per Child” — did not improve students’ cognitive skills or academic outcomes, according to studies by professors and economists of hundreds of schools in Peru. Now, as some tech boosters make similar education access and fairness arguments for A.I., children’s agencies like UNICEF are urging caution and calling for more guidance for schools.

“With One Laptop per Child, the fallouts included wasted expenditure and poor learning outcomes,” Steven Vosloo, a digital policy specialist at UNICEF, wrote in a recent post. “Unguided use of A.I. systems may actively de-skill students and teachers.”

Education systems across the globe are increasingly working with tech companies on A.I. tools and training programs.

In the United States, where states and school districts typically decide what to teach, some prominent school systems recently introduced popular chatbots for teaching and learning. In Florida alone, Miami-Dade County Public Schools, the nation’s third-largest school system, rolled out Google’s Gemini chatbot for more than 100,000 high school students. And Broward County Public Schools, the nation’s sixth-biggest school district, introduced Microsoft’s Copilot chatbot for thousands of teachers and staff members.

Outside the United States, Microsoft in June announced a partnership with the Ministry of Education in Thailand to provide free online A.I. skills lessons for hundreds of thousands of students. Several months later, Microsoft said it would also provide A.I. training for 150,000 teachers in Thailand. OpenAI has pledged to make ChatGPT available to teachers in government schools across India.

The Baltic nation of Estonia is trying a different approach, with a broad new national A.I. education initiative called “A.I. Leap.”

The program was prompted partly by a recent poll showing that more than 90 percent of the nation’s high schoolers were already using popular chatbots like ChatGPT for schoolwork, leading to worries that some students were beginning to delegate school assignments to A.I.

Estonia then pressed U.S. tech giants to adapt their A.I. to local educational needs and priorities. Researchers at the University of Tartu worked with OpenAI to modify the company’s Estonian-language service for schools so it would respond to students’ queries with questions rather than produce direct answers.

Introduced this school year, the “A.I. Leap” program aims to teach educators and students about the uses, limits, biases and risks of A.I. tools. In its pilot phase, teachers in Estonia received training on OpenAI’s ChatGPT and Google’s Gemini chatbots.

“It’s critical A.I. literacy,” said Ivo Visak, the chief executive of the A.I. Leap Foundation, an Estonian nonprofit that is helping to manage the national education program. “It’s having a very clear understanding that these tools can be useful — but at the same time these tools can do a lot of harm.”

Estonia also recently held a national training day for students in some high schools. Some of those students are now using the bots for tasks like generating questions to help them prepare for school tests, Mr. Visak said.

“If these companies would put their effort not only in pushing A.I. products, but also doing the products together with the educational systems of the world, then some of these products could be really useful,” Mr. Visak added.

This school year, Iceland started its own national A.I. pilot in schools. Now several hundred teachers across the country are experimenting with Google’s Gemini chatbot or Anthropic’s Claude for tasks like lesson planning, as they aim to find helpful uses and to pinpoint drawbacks.

Researchers at the University of Iceland will then study how educators used the chatbots.

Students won’t use the chatbots for now, partly out of concern that relying on classroom bots could diminish important elements of teaching and learning.

“If you are using less of your brain power or critical thinking — or whatever makes us more human — it is definitely not what we want,” said Thordis Sigurdardottir, the director of Iceland’s Directorate of Education and School Services.

Tinna Arnardottir and Frida Gylfadottir, two teachers participating in the pilot at a high school outside Reykjavik, say the A.I. tools have helped them create engaging lessons more quickly.

Ms. Arnardottir, a business and entrepreneurship teacher, recently used Claude to make a career exploration game to help her students figure out whether they were more suited to jobs in sales, marketing or management. Ms. Gylfadottir, who teaches English, said she had uploaded some vocabulary lists and then used the chatbot to help create exercises for her students.

“I have fill-in-the-blank word games, matching word games and speed challenge games,” Ms. Gylfadottir said. “So before they take the exam, I feel like they’re better prepared.”

Ms. Gylfadottir added that she was concerned about chatbots producing misinformation, so she vetted the A.I.-created games and lessons for accuracy before asking her students to try them. Ms. Gylfadottir and Ms. Arnardottir said they also worried that some students might already be growing dependent on — or overly trusting of — A.I. tools outside school.

That has made the Icelandic teachers all the more determined, they said, to help students learn to critically assess and use chatbots.

“They are trusting A.I. blindly,” Ms. Arnardottir said. “They are maybe losing motivation to do the hard work of learning, but we have to teach them how to learn with A.I.”

Teachers currently have few rigorous studies to guide generative A.I. use in schools. Researchers are just beginning to follow the long-term effects of A.I. chatbots on teenagers and schoolchildren.

“Lots of institutions are trying A.I.,” said Drew Bent, the education lead at Anthropic. “We’re at a point now where we need to make sure that these things are backed by outcomes and figure out what’s working and what’s not working.”




Simon Tatham’s Portable Puzzle Collection. “This page contains a collection of small...

Simon Tatham’s Portable Puzzle Collection. “This page contains a collection of small computer programs which implement one-player puzzle games.”



Gomatsu didn’t plan to illustrate 80 cats of Taiwan in one day... but he did


At a cat supplies fair, the feline-loving illustrator took on the mammoth task of drawing nearly 100 cats off-the-cuff.
