
Can Jollibee Beat American Fast Food at Its Own Game?


The United States exported fast-food culture to the Philippines. Over the decades—and notably since 1998, when the first Jollibee restaurant opened in the San Francisco Bay Area—the Philippines has been serving it back. In this piece, Atlantic staff writer Yasmin Tayag traces the Filipino fast-food giant's global rise: its parent corporation now operates more than 1,800 Jollibee locations around the world, plus over 10,000 stores under 19 other food and coffee brands. Tayag especially highlights Jollibee's expansion in North America, the way it adapts its flavors for a "mainstream" audience, and the warmth and joy the brand aims to deliver alongside its tasty menu.

Culinary ambitions ran in the family. Tanmantiong’s father had cooked at a Chinese temple in Manila before opening a restaurant in the southern city of Davao. In 1975, Tan Caktiong borrowed family money to open two Manila franchises of Magnolia, a popular Filipino ice-cream company established by a U.S. volunteer Army cook. With college graduation and a wedding imminent, Tan Caktiong figured that ice cream was as good a way as any to make a living. But before long, he started serving burgers too, bringing on his sister to develop recipes and Tanmantiong to manage operations. He renamed his restaurants Jollibee, which captured the family’s business ethos: Employees should work as hard and harmoniously as bees, but unless they’re happy, that kind of effort is “not worth it,” Tanmantiong said. Jollibee’s burgers were soon outselling the ice cream.


Survey: 91 Percent of College Students Think 'Words Can Be Violence.' That Could Feed Real Violence.

A woman holds a protest sign that says "white silence is violence" | Krista Kennell/ZUMA Press/Newscom

Of all the stupid ideas that have emerged in recent years, there may be none worse than the insistence that unwelcome words are the same as violence. This false perception equates physical acts that can injure or kill people with disagreements and insults that might cause hurt feelings and potentially justifies responding to the latter with the former. After all, if words are violence, why not rebut a verbal sparring partner with an actual punch? Unfortunately, the idea is embedded on college campuses where a majority of undergraduate students agree that words and violence can be the same thing.

Most Believe Words Can Be Violence

"Ninety one percent of undergraduate students believe that words can be violence, according to a new poll by the Foundation for Individual Rights and Expression [FIRE] and College Pulse," FIRE announced last week. "The survey's findings are especially startling coming in the wake of Charlie Kirk's assassination—an extreme and tragic example of the sharp difference between words and violence."

The survey posed questions about speech and political violence to undergraduate students at Utah Valley University, where Kirk was murdered, and at colleges elsewhere—2,028 students overall. FIRE and College Pulse compared the student responses to those of members of the general public who were separately polled.

Specifically, one question asked how much "words can be violence" described respondents' thoughts. Twenty-two percent of college undergraduates answered that the sentiment "describes my thoughts completely," 25 percent said it "mostly" described their thoughts, 28 percent put it at "somewhat," and 15 percent answered "slightly." Only 9 percent answered that the "words can be violence" sentiment "does not describe my thoughts at all."

It's difficult to get too worked up about those who "slightly" believe words can be violence, but that still leaves us at 75 percent of the student population. And almost half of students "completely" or "mostly" see words and violence as essentially the same thing. That's a lot of young people who struggle to distinguish between an unwelcome expression and a punch to the nose.

Depressingly, 34 percent of the general public "completely" or "mostly" agree. Fifty-nine percent at least "somewhat" believe words can be violence.

In 2017, when the conflation of words and violence was relatively new, Jonathan Haidt, a New York University psychology professor, worried that the false equivalence fed into the simmering mental health crisis among young people. He and FIRE President Greg Lukianoff wrote in The Atlantic that "growing numbers of college students have become less able to cope with the challenges of campus life, including offensive ideas, insensitive professors, and rude or even racist and sexist peers" and that the rise in mental health issues "is better understood as a crisis of resilience."

Conflating Words and Violence Encourages Violence

Telling young people that words are violence plays into problems with anxiety and depression, especially when they haven't been raised to be resilient and to deal with the certainty of encountering debate, disagreement, and rude or hateful expressions in an intellectually and ideologically diverse world. It teaches that the world is more dangerous than it actually is rather than a place that requires a certain degree of toughness. Worse, if words are violence, it implies that responding "in kind" is justified.

"At a time of rapidly rising political polarization in America, it helps a small subset of that generation justify political violence," Haidt and Lukianoff added.

Sure enough, last week Gallup reported that "age is the strongest predictor of attitudes toward political violence, with young adults aged 18–29 more likely than other age groups to say that it is sometimes OK to use violence to achieve a political goal." Thirty percent of respondents 18–29 say it is "sometimes" acceptable to use violence to achieve political goals, compared to 21 percent of those 30–44, 13 percent of those 45–59, and 4 percent of people 60 and older.

And yes, acceptance of political violence has changed over the years. For those 18 to 29, it was 22 percent in 1970 and 21 percent in 1995. For 30-to-44-year-olds, it was 16 percent in 1970 and 15 percent in 1995. The percentage remained largely unchanged for those 45–59 and dropped for people over 60.

Kirk was assassinated, allegedly by a 22-year-old who strongly disliked what the conservative activist had to say. The incident is a real-life example of the dangers of conflating speech and violence. It's not acceptable to respond to words you don't like with physical force.

There's a Silver Lining—Sort of

That said, there is encouraging news. The percentage of college undergrads who say it is at least "rarely" acceptable to shout down speakers to prevent them from speaking on campus has dropped to 68 percent from 72 percent last spring. Forty-seven percent of students say it is at least rarely acceptable to block other students from attending a campus speech, down from 54 percent in the spring. And 32 percent find it acceptable to use violence to stop a speech, down from 34 percent.

But sentiments can be overshadowed by high-profile incidents like assassinations. "Because of what happened to Charlie Kirk," 45 percent of students are "less comfortable" expressing their views on controversial political topics during in-class discussions, notes FIRE in its findings. And neither sentiments nor comfort in self-expression have universally shifted.

"Moderate and conservative students across the country became significantly less likely to say that shouting down a speaker, blocking entry to an event, or using violence to stop a campus speech are acceptable actions," writes FIRE. "In contrast, liberal students' support for these tactics held steady, or even increased slightly."

As a consequence, 84 percent of Utah Valley students say the country is headed in the wrong direction when it comes to people's ability to freely express their views. At other colleges and universities, 73 percent feel the same way (almost identical to the feeling of the public at large). It should be noted, however, that the news is more positive when students are asked about their own campuses; 53 percent say their own schools are headed in the right direction.

These students are headed into a world in which many of their peers see little difference, if any, between words and violence. They adhere to this position even after Kirk was murdered for, almost certainly, what he had to say. And they do so in an environment of surging political violence.

Americans worry that the country is becoming less friendly to free expression. But the insistence of too many people that words and violence are the same thing is a big part of the problem.



What Happened at the AEI Debate on AI in Education This Week


Hey there - this is where I write on special Wednesdays about edtech and math education. Today that means:

  • Letting you know how the AI + education debate in Washington, D.C. went on Monday a/k/a 🤓 nerd fight night 🥊.

  • Sharing two of the fantastic ideas for teaching slope you all shared in response to last week’s newsletter.


On Monday, four of us—

  1. Shanika Hope, Google

  2. Alex Kotran, AI Education Project

  3. Jake Tawney, Great Hearts Academies

  4. Me

—argued about the motion, “Maximizing School Improvement by 2035 Means Integrating AI into Classrooms Today,” in Washington, DC, at an event hosted by the American Enterprise Institute. We all had three minutes for our opening remarks. Here is what I prepared.


Look—I’m not made of stone. I love the motion and wish it were true, this idea that we made a massive technological breakthrough three years ago and now we can improve schools like never before. I wish it were true. There are a few reasons why I don’t think it is.

The first is the work we have asked schools to do in 2025. We have asked schools to help kids read, write, do math, and be nice to each other all at a time when hormones are flying around the room like pinballs, in a country that has some of the highest child poverty rates of the developed world, at a time when kids are walking to school scared that they or their friends or their loved ones are going to get jumped out on by some dude in a mask dragging them to God knows where. It’s a hard job at a hard time. I tried it last week. I tried to teach slope of a line to eighth graders in East Oakland, wonderful kids in the Fruitvale neighborhood. Ask me how well that went. Kids have a lot more on their mind than math right now.

The second is the tight resources we have given schools to do that job. Financial resources are tight, obviously, but I'm talking also about time. We give schools 180 days to turn 3rd graders into 4th graders. We give schools four days for teachers to get out of class, get together, and learn something new. And perhaps most importantly, administrators have limited reserves of social capital. You can't go to your school board, your parent community, ten times a year and say, "Okay, we're going to try something new, something big, everybody get on board." You can do that once, maybe. Opportunity costs are high. Saying yes to one thing means saying no to ten other things by definition. So that one thing needs to have absolutely bulletproof evidence that it'll help your school help kids learn to read, write, do math, and be nice.

So my third reason is that in 2025, after three years of trying, the evidence that AI can help schools do that work is extremely weak. I know you have seen the same happy headlines I have or the profiles of schools doing something novel with AI. But scratch even a millimeter beneath the surface of those studies and stories and you’ll find some very weak evidence. You’ll find weak controls, where the control group gets nothing, as it should, and the experimental group gets AI, as it should. But they also get something extra—extra time, extra tutoring, extra rich parents. Elsewhere, you’ll find positive results for AI but they’re in a basket full of point solutions that don’t congeal in any obvious way into a school improvement plan. Still other studies run the other direction, like the MIT study this summer, where kids who used AI to support their essay writing were effectively de-skilled, experiencing diminished cognitive activity and unable to recall the arguments they’d made, compared to students who didn’t use AI.

For those reasons—hard job, limited resources, high opportunity costs, weak evidence in favor of AI—my recommendation is we wait. We take a beat. We take the next bus. We wait for stronger evidence and then move. For now, if you want to maximize school improvement, it's an easy call to wait rather than integrate AI.


The four debaters.

My comrade Jake Tawney argued that these tools would inhibit the formation of novice learners and novice teachers alike. In their arguments for the motion, Alex Kotran and Shanika Hope argued that:

  • Kids will need AI skills for full US economic participation

  • AI is being used outside of school, often poorly, and schools need to help kids learn to use it well

  • AI will enable teachers to do better work than ever before: planning, differentiating, personalizing, preventing burnout, etc.

The debate was decided by a vote from the audience before and after the debate. The winner was whichever side most increased their share of the vote.

A bar chart showing that the group voting "against" the motion doubled their vote share while the "for" group stayed constant.

We won, which was fun of course, and the conversation kept everyone dancing, trying to stay on message, trying to answer the question we wished we’d been asked without seeming too obvious about it.

Jake and I counter-argued that many of the claims about the value AI offers teachers or students were dreams about the future rather than descriptions of the present. It is also a particularly unfortunate time to suggest that "we know what kids will need for economic participation in ten years," given we are in the weakest labor market that software development has ever seen, ten years after telling every 12-year-old they needed to learn to code. That effort has gifted the private sector an oversupply of software developers who now have double the unemployment rate of art history majors.

If I had to guess what persuaded the audience to join us against the motion, it was our focus on “maximizing school improvement today.” In my closing argument, I said there were a bunch of motions I thought we’d all agree on, including the need for responsible experimentation, the need for dreaming expansively about what technology might offer kids. But maximizing school improvement today means taking seriously what a school most wants to improve, which is a kid’s ability to read, write, do math, and be nice, and applying the research we know works today. I suspect that approach gave some audience members permission to think AI is quite neat but also that 2025 is not the year that AI comes off the bench to maximize school improvement.

Thanks for reading! Throw your email into the box to get a newsletter about math education and edtech on special Wednesdays! -Dan 👇

Mathematical Mailbag

I expressed difficulty last week keeping students thinking on a conceptual level about slope when the operations (“subtract, subtract, divide”) are so close at hand. Here are a couple of your very interesting responses.

A lego staircase.

Linda, offering an anchor activity that students can refer back to time and again as they try to make sense of new, related ideas:

1. I asked [students] if they realized that if you build stairs in your house that are too steep, it would be “illegal” because it could be dangerous. And that less steep stairs can be inefficient because they might take up too much space in a house.

2. I then gave each pair of students 12 identical Lego bricks.

3. I asked them to make 3 staircases, using 4 bricks each, of different steepness.

4. After checking their stairs, I had them lay the 3 staircases sideways on paper and trace them.

5. I then had them determine and record the rise and run of each of the staircases. I checked those too.

6. Finally, they were to connect the top to the bottom and draw that line with a ruler. At that point, I asked them to record, “What do you notice about the lines you just drew and the slopes of the stairs?” We had conversations including the fact that some people’s stairs sloped downward to the right and others up. So, we also made that connection to positive and negative slope. Some noticed that larger slopes signified steeper stairs (and lines). In following days, when students were working on slope we would refer back to our “Lego stairs” to conceptualize what was going on.

Michael Hayashida, via email, illustrating how to help students understand the need for the measure known as slope:

Here’s something that seems to work in my freshman (somewhat remedial) math classes. All the kids have portable whiteboards.

“Draw me a set of axes and then draw a line that’s really steep.”

“Hold up the whiteboards - anyone have a line with the same steepness as someone else?”

“Ok, draw me a line that’s only medium steep.” They compare again.

“Ok, for mathematicians, this is a problem. They want precision - so that everyone draws a line with the same steepness when I give the instruction. To do that, you need to be able to put a number on steepness. Like... if I call for a line with a steepness of 5, then everyone draws a line with the same steepness. For the next minute, talk to your partner and see if you can come up with a way to measure steepness with a number.”

We share out. Sometimes kids will come up with an angle measurement method - kinda cool.

“Ok, this is actually a solved problem - mathematicians have already agreed on a way to measure steepness. Check this out - everyone put your marker on a lattice point - doesn’t matter which one. Now go over 1 and up 2 and put a point there. Do it again. Do it again. Connect the points. Hold them up - does everyone have lines with the same steepness? Ok, mathematicians call this a steepness of 2.”

<Do the same thing, but with a slope of 0.5. Then 2.5.>

“This is how mathematicians decided to measure steepness - it’s what the y is changing by as x goes up by 1. They use a different word than steepness though - they call it slope”.




Size of Life, a visual comparison of living things from DNA to...

Size of Life, a visual comparison of living things from DNA to a quaking aspen clone. Lovely illustrations.



How plant-eaters snag their essential amino acids


Early in evolution, we animals lost the ability to manufacture nine of the 20 building blocks needed to make proteins. Herbivores evolved an impressive array of tricks to ensure their dietary needs are met.




Useful patterns for building HTML tools


I've started using the term HTML tools to refer to HTML applications that I've been building which combine HTML, JavaScript, and CSS in a single file and use them to provide useful functionality. I have built over 150 of these in the past year, almost all of them written by LLMs. This article presents a collection of useful patterns I've discovered along the way.

First, some examples to show the kind of thing I'm talking about:

  • svg-render renders SVG code to downloadable JPEGs or PNGs
  • pypi-changelog lets you generate (and copy to clipboard) diffs between different PyPI package releases.
  • bluesky-thread provides a nested view of a discussion thread on Bluesky.

These are some of my recent favorites. I have dozens more like this that I use on a regular basis.

You can explore my collection on tools.simonwillison.net - the by month view is useful for browsing the entire collection.

If you want to see the code and prompts, almost all of the examples in this post include a link in their footer to "view source" on GitHub. The GitHub commits usually contain either the prompt itself or a link to the transcript used to create the tool.

The anatomy of an HTML tool

These are the characteristics I have found to be most productive in building tools of this nature:

  1. A single file: inline JavaScript and CSS in a single HTML file means the least hassle in hosting or distributing them, and crucially means you can copy and paste them out of an LLM response.
  2. Avoid React, or anything with a build step. The problem with React is that JSX requires a build step, which makes everything massively less convenient. I prompt "no react" and skip that whole rabbit hole entirely.
  3. Load dependencies from a CDN. The fewer dependencies the better, but if there's a well known library that helps solve a problem I'm happy to load it from CDNjs or jsdelivr or similar.
  4. Keep them small. A few hundred lines means the maintainability of the code doesn't matter too much: any good LLM can read them and understand what they're doing, and rewriting them from scratch with help from an LLM takes just a few minutes.

The end result is a few hundred lines of code that can be cleanly copied and pasted into a GitHub repository.
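To make that shape concrete, here's a minimal hand-written sketch of a tool in this style: one file, no framework, no build step. It's an illustration of the pattern, not one of the tools from the collection.

```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Word counter</title>
<style>
  body { font-family: sans-serif; max-width: 40em; margin: 2em auto; }
  textarea { width: 100%; height: 12em; }
</style>
</head>
<body>
<h1>Word counter</h1>
<textarea id="input" placeholder="Paste or type text here..."></textarea>
<p id="stats">0 words, 0 characters</p>
<script>
// No build step, no framework: the entire tool is this one file.
const input = document.getElementById("input");
const stats = document.getElementById("stats");
input.addEventListener("input", () => {
  const text = input.value;
  const words = text.trim() ? text.trim().split(/\s+/).length : 0;
  stats.textContent = `${words} words, ${text.length} characters`;
});
</script>
</body>
</html>
```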

Prototype with Artifacts or Canvas

The easiest way to build one of these tools is to start in ChatGPT or Claude or Gemini. All three have features where they can write a simple HTML+JavaScript application and show it to you directly.

Claude calls this "Artifacts", ChatGPT and Gemini both call it "Canvas". Claude has the feature enabled by default, ChatGPT and Gemini may require you to toggle it on in their "tools" menus.

Try this prompt in Gemini or ChatGPT:

Build a canvas that lets me paste in JSON and converts it to YAML. No React.

Or this prompt in Claude:

Build an artifact that lets me paste in JSON and converts it to YAML. No React.

I always add "No React" to these prompts, because otherwise they tend to build with React, resulting in a file that is harder to copy and paste out of the LLM and use elsewhere. I find that attempts which use React take longer to display (since they need to run a build step) and are more likely to contain crashing bugs for some reason, especially in ChatGPT.
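For a rough idea of what a response to that prompt looks like, here is a hand-written sketch of the JSON-to-YAML tool. It assumes js-yaml 4.x loaded from cdnjs (the exact URL and version are worth double-checking) and the jsyaml global that library's browser build exposes.

```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>JSON to YAML</title>
<script src="https://cdnjs.cloudflare.com/ajax/libs/js-yaml/4.1.0/js-yaml.min.js"></script>
</head>
<body>
<textarea id="json" rows="12" cols="80" placeholder='{"paste": "JSON here"}'></textarea>
<pre id="yaml"></pre>
<script>
// Convert on every keystroke; js-yaml's browser build exposes a global `jsyaml`.
document.getElementById("json").addEventListener("input", (event) => {
  const output = document.getElementById("yaml");
  try {
    output.textContent = jsyaml.dump(JSON.parse(event.target.value));
  } catch (error) {
    output.textContent = `Invalid JSON: ${error.message}`;
  }
});
</script>
</body>
</html>
```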

All three tools have "share" links that provide a URL to the finished application.

Switch to a coding agent for more complex projects

Coding agents such as Claude Code and Codex CLI have the advantage that they can test the code themselves while they work on it using tools like Playwright. I often upgrade to one of those when I'm working on something more complicated, like my Bluesky thread viewer tool shown above.

I also frequently use asynchronous coding agents like Claude Code for web to make changes to existing tools. I shared a video about that in Building a tool to copy-paste share terminal sessions using Claude Code for web.

Claude Code for web and Codex Cloud run directly against my simonw/tools repo, which means they can publish or upgrade tools via Pull Requests (here are dozens of examples) without me needing to copy and paste anything myself.

Load dependencies from CDNs

Any time I use an additional JavaScript library as part of my tool I like to load it from a CDN.

The three major LLM platforms support specific CDNs as part of their Artifacts or Canvas features, so often if you tell them "Use PDF.js" or similar they'll be able to compose a URL to a CDN that's on their allow-list.

Sometimes you'll need to go and look up the URL on cdnjs or jsDelivr and paste it into the chat.

CDNs like these have been around for long enough that I've grown to trust them, especially for URLs that include the package version.

The alternative to CDNs is to use npm and have a build step for your projects. I find this reduces my productivity at hacking on individual tools and makes it harder to self-host them.
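For illustration, the two URL shapes look like this; the package and version are examples, so check the CDN for the current path before relying on them:

```html
<!-- cdnjs hosts named builds of popular libraries: -->
<script src="https://cdnjs.cloudflare.com/ajax/libs/js-yaml/4.1.0/js-yaml.min.js"></script>
<!-- jsDelivr can serve files straight out of any npm package, pinned to a version: -->
<script src="https://cdn.jsdelivr.net/npm/js-yaml@4.1.0/dist/js-yaml.min.js"></script>
```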

Host them somewhere else

I don't like leaving my HTML tools hosted by the LLM platforms themselves for a couple of reasons. First, LLM platforms tend to run the tools inside a tight sandbox with a lot of restrictions. They're often unable to load data or images from external URLs, and sometimes even features like linking out to other sites are disabled.

The end-user experience often isn't great either. They show warning messages to new users, often take additional time to load and delight in showing promotions for the platform that was used to create the tool.

They're also not as reliable as other forms of static hosting. If ChatGPT or Claude are having an outage I'd like to still be able to access the tools I've created in the past.

Being able to easily self-host is the main reason I like insisting on "no React" and using CDNs for dependencies - the absence of a build step makes hosting tools elsewhere a simple case of copying and pasting them out to some other provider.

My preferred provider here is GitHub Pages because I can paste a block of HTML into a file on github.com and have it hosted on a permanent URL a few seconds later. Most of my tools end up in my simonw/tools repository which is configured to serve static files at tools.simonwillison.net.

Take advantage of copy and paste

One of the most useful input/output mechanisms for HTML tools comes in the form of copy and paste.

I frequently build tools that accept pasted content, transform it in some way and let the user copy it back to their clipboard to paste somewhere else.

Copy and paste on mobile phones is fiddly, so I frequently include "Copy to clipboard" buttons that populate the clipboard with a single touch.

Most operating system clipboards can carry multiple formats of the same copied data. That's why you can paste content from a word processor in a way that preserves formatting, but if you paste the same thing into a text editor you'll get the content with formatting stripped.

These rich copy operations are available in JavaScript paste events as well, which opens up all sorts of opportunities for HTML tools.

  • hacker-news-thread-export lets you paste in a URL to a Hacker News thread and gives you a copyable condensed version of the entire thread, suitable for pasting into an LLM to get a useful summary.
  • paste-rich-text lets you copy from a page and paste to get the HTML - particularly useful on mobile where view-source isn't available.
  • alt-text-extractor lets you paste in images and then copy out their alt text.
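The copy half of that pattern takes only a few lines of JavaScript. A sketch, with illustrative element ids:

```javascript
// "Copy to clipboard" button: one tap puts the tool's output on the clipboard.
const copyButton = document.getElementById("copy-button");
const output = document.getElementById("output");

copyButton.addEventListener("click", async () => {
  try {
    await navigator.clipboard.writeText(output.textContent);
    copyButton.textContent = "Copied!";
    setTimeout(() => { copyButton.textContent = "Copy to clipboard"; }, 1500);
  } catch (error) {
    copyButton.textContent = "Copy failed";
    console.error(error);
  }
});
```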

Build debugging tools

The key to building interesting HTML tools is understanding what's possible. Building custom debugging tools is a great way to explore these options.

clipboard-viewer is one of my most useful. You can paste anything into it (text, rich text, images, files) and it will loop through and show you every type of paste data that's available on the clipboard.

Clipboard Format Viewer. Paste anywhere on the page (Ctrl+V or Cmd+V). This shows text/rtf with a bunch of weird code, text/plain with some pasted HTML diff and a Clipboard Event Information panel that says Event type: paste, Formats available: text/rtf, text/plain, 0 files reported and 2 clipboard items reported.

This was key to building many of my other tools, because it showed me the invisible data that I could use to bootstrap other interesting pieces of functionality.
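The core trick behind a tool like clipboard-viewer is small: listen for paste events and walk every representation they carry. A rough sketch of the idea, not the tool's actual source:

```javascript
// Inspect every representation available on a paste event.
document.addEventListener("paste", (event) => {
  const data = event.clipboardData;
  for (const type of data.types) {
    // e.g. "text/plain", "text/html", "text/rtf", "Files"
    console.log(type, data.getData(type)); // getData returns "" for non-text entries
  }
  for (const item of data.items) {
    console.log(item.kind, item.type);     // "string" or "file", plus its MIME type
    if (item.kind === "file") {
      const file = item.getAsFile();       // pasted images arrive as File objects
      console.log("file:", file.name, file.size);
    }
  }
});
```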

More debugging examples:

  • keyboard-debug shows the keys (and KeyCode values) currently being held down.
  • cors-fetch reveals if a URL can be accessed via CORS.
  • exif displays EXIF data for a selected photo.

Persist state in the URL

HTML tools may not have access to server-side databases for storage but it turns out you can store a lot of state directly in the URL.

I like this for tools I may want to bookmark or share with other people.

  • icon-editor is a custom 24x24 icon editor I built to help hack on icons for the GitHub Universe badge. It persists your in-progress icon design in the URL so you can easily bookmark and share it.
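A minimal version of the URL-persistence trick stores JSON state in the fragment. The function names in this sketch are my own, not from any particular tool:

```javascript
// Keep small amounts of state in location.hash so the page can be bookmarked or shared.
function saveStateToUrl(state) {
  location.hash = encodeURIComponent(JSON.stringify(state));
}

function loadStateFromUrl() {
  if (!location.hash) return null;
  try {
    return JSON.parse(decodeURIComponent(location.hash.slice(1)));
  } catch {
    return null; // ignore a malformed or hand-edited hash
  }
}

// Example: restore on load, save whenever something changes.
const state = loadStateFromUrl() || { pixels: [] };
```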

Use localStorage for secrets or larger state

The localStorage browser API lets HTML tools store data persistently on the user's device, without exposing that data to the server.

I use this for larger pieces of state that don't fit comfortably in a URL, or for secrets like API keys which I really don't want anywhere near my server - even static hosts might have server logs that are outside of my influence.

  • word-counter is a simple tool I built to help me write to specific word counts, for things like conference abstract submissions. It uses localStorage to save as you type, so your work isn't lost if you accidentally close the tab.
  • render-markdown uses the same trick - I sometimes use this one to craft blog posts and I don't want to lose them.
  • haiku is one of a number of LLM demos I've built that request an API key from the user (via the prompt() function) and then store that in localStorage. This one uses Claude Haiku to write haikus about what it can see through the user's webcam.
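The API-key flavor of this pattern looks roughly like the sketch below; the storage key name and prompt text are illustrative:

```javascript
// Ask once for an API key, then keep it on the user's own device.
function getApiKey() {
  let key = localStorage.getItem("llm-api-key");
  if (!key) {
    key = prompt("Enter your API key (it is stored only in this browser's localStorage):");
    if (key) localStorage.setItem("llm-api-key", key);
  }
  return key;
}
```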

Collect CORS-enabled APIs

CORS stands for Cross-origin resource sharing. It's a relatively low-level detail which controls if JavaScript running on one site is able to fetch data from APIs hosted on other domains.

APIs that provide open CORS headers are a goldmine for HTML tools. It's worth building a collection of these over time.

Here are some I like:

  • iNaturalist for fetching sightings of animals, including URLs to photos
  • PyPI for fetching details of Python packages
  • GitHub because anything in a public repository in GitHub has a CORS-enabled anonymous API for fetching that content from the raw.githubusercontent.com domain, which is behind a caching CDN so you don't need to worry too much about rate limits or feel guilty about adding load to their infrastructure.
  • Bluesky for all sorts of operations
  • Mastodon has generous CORS policies too, as used by applications like phanpy.social

GitHub Gists are a personal favorite here, because they let you build apps that can persist state to a permanent Gist through making a cross-origin API call.

  • species-observation-map uses iNaturalist to show a map of recent sightings of a particular species.
  • zip-wheel-explorer fetches a .whl file for a Python package from PyPI, unzips it (in browser memory) and lets you navigate the files.
  • github-issue-to-markdown fetches issue details and comments from the GitHub API (including expanding any permanent code links) and turns them into copyable Markdown.
  • terminal-to-html can optionally save the user's converted terminal session to a Gist.
  • bluesky-quote-finder displays quotes of a specified Bluesky post, which can then be sorted by likes or by time.
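As an example of how little code a CORS-friendly API needs, here's a sketch against PyPI's public JSON endpoint (https://pypi.org/pypi/<name>/json):

```javascript
// Fetch package metadata directly from the browser; PyPI sends permissive CORS headers.
async function fetchPackage(name) {
  const response = await fetch(`https://pypi.org/pypi/${encodeURIComponent(name)}/json`);
  if (!response.ok) throw new Error(`PyPI returned ${response.status}`);
  const data = await response.json();
  return {
    latest: data.info.version,
    summary: data.info.summary,
    releases: Object.keys(data.releases),
  };
}

fetchPackage("sqlite-utils").then(console.log);
```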

LLMs can be called directly via CORS

All three of OpenAI, Anthropic and Gemini offer JSON APIs that can be accessed via CORS directly from HTML tools.

Unfortunately you still need an API key, and if you bake that key into your visible HTML anyone can steal it and use it to rack up charges on your account.

I use the localStorage secrets pattern to store API keys for these services. This sucks from a user experience perspective - telling users to go and create an API key and paste it into a tool is a lot of friction - but it does work.
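Putting the two pieces together, a direct browser-side call can look like this sketch. It uses OpenAI's chat completions endpoint with a key from a localStorage helper like the one above; the model name is illustrative, and Anthropic and Gemini have their own equivalent JSON endpoints.

```javascript
// Call an LLM JSON API directly from the page, using a key the user pasted in earlier.
async function askModel(question) {
  const apiKey = getApiKey(); // the localStorage pattern shown earlier
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini", // illustrative model name
      messages: [{ role: "user", content: question }],
    }),
  });
  const data = await response.json();
  return data.choices[0].message.content;
}
```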


Don't be afraid of opening files

You don't need to upload a file to a server in order to make use of the <input type="file"> element. JavaScript can access the content of that file directly, which opens up a wealth of opportunities for useful functionality.

Some examples:

  • ocr is the first tool I built for my collection, described in Running OCR against PDFs and images directly in your browser. It uses PDF.js and Tesseract.js to allow users to open a PDF in their browser which it then converts to an image-per-page and runs through OCR.
  • social-media-cropper lets you open (or paste in) an existing image and then crop it to common dimensions needed for different social media platforms - 2:1 for Twitter and LinkedIn, 1.4:1 for Substack etc.
  • ffmpeg-crop lets you open and preview a video file in your browser, drag a crop box within it and then copy out the ffmpeg command needed to produce a cropped copy on your own machine.
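Reading a chosen file client-side is only a few lines. A sketch:

```javascript
// Read the selected file entirely in the browser: nothing is uploaded anywhere.
const fileInput = document.querySelector('input[type="file"]');

fileInput.addEventListener("change", async () => {
  const file = fileInput.files[0];
  if (!file) return;
  const text = await file.text(); // use file.arrayBuffer() for binary formats
  console.log(file.name, file.size, text.slice(0, 200));
});
```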

You can offer downloadable files too

An HTML tool can generate a file for download without needing help from a server.

The JavaScript library ecosystem has a huge range of packages for generating files in all kinds of useful formats.
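The basic move is the same regardless of format: build a Blob, point a temporary link at it, and click it. A sketch:

```javascript
// Offer a generated file for download without any server involvement.
function downloadFile(filename, text, type = "text/plain") {
  const blob = new Blob([text], { type });
  const url = URL.createObjectURL(blob);
  const link = document.createElement("a");
  link.href = url;
  link.download = filename; // suggested name in the save dialog
  link.click();
  URL.revokeObjectURL(url);
}

downloadFile("converted.yaml", "generated: entirely in the browser\n");
```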

Pyodide can run Python code in the browser

Pyodide is a distribution of Python that's compiled to WebAssembly and designed to run directly in browsers. It's an engineering marvel and one of the most underrated corners of the Python world.

It also cleanly loads from a CDN, which means there's no reason not to use it in HTML tools!

Even better, the Pyodide project includes micropip - a mechanism that can load extra pure-Python packages from PyPI via CORS.
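Here's a sketch of what that looks like in practice. The CDN URL pattern and version number are illustrative and worth checking against the current Pyodide release, and the installed package is just an example of a pure-Python wheel:

```html
<script src="https://cdn.jsdelivr.net/pyodide/v0.26.4/full/pyodide.js"></script>
<script type="module">
  // loadPyodide() is provided by the script above; top-level await works in module scripts.
  const pyodide = await loadPyodide();
  await pyodide.loadPackage("micropip");
  await pyodide.runPythonAsync(`
import micropip
await micropip.install("python-dateutil")  # any pure-Python wheel from PyPI
from dateutil.parser import parse
print(parse("2025-06-01").isoformat())
`);
</script>
```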

WebAssembly opens more possibilities

Pyodide is possible thanks to WebAssembly. WebAssembly means that a vast collection of software originally written in other languages can now be loaded in HTML tools as well.

Squoosh.app was the first example I saw that convinced me of the power of this pattern - it makes several best-in-class image compression libraries available directly in the browser.

I've used WebAssembly for a few of my own tools.

Remix your previous tools

The biggest advantage of having a single public collection of 100+ tools is that it's easy for my LLM assistants to recombine them in interesting ways.

Sometimes I'll copy and paste a previous tool into the context, but when I'm working with a coding agent I can reference them by name - or tell the agent to search for relevant examples before it starts work.

The source code of any working tool doubles as clear documentation of how something can be done, including patterns for using editing libraries. An LLM with one or two existing tools in their context is much more likely to produce working code.

I built pypi-changelog by telling Claude Code:

Look at the pypi package explorer tool

And then, after it had found and read the source code for zip-wheel-explorer:

Build a new tool pypi-changelog.html which uses the PyPI API to get the wheel URLs of all available versions of a package, then it displays them in a list where each pair has a "Show changes" clickable in between them - clicking on that fetches the full contents of the wheels and displays a nicely rendered diff representing the difference between the two, as close to a standard diff format as you can get with JS libraries from CDNs, and when that is displayed there is a "Copy" button which copies that diff to the clipboard

Here's the full transcript.

See Running OCR against PDFs and images directly in your browser for another detailed example of remixing tools to create something new.

Record the prompt and transcript

I like keeping (and publishing) records of everything I do with LLMs, to help me grow my skills at using them over time.

For HTML tools I built by chatting with an LLM platform directly I use the "share" feature for those platforms.

For Claude Code or Codex CLI or other coding agents I copy and paste the full transcript from the terminal into my terminal-to-html tool and share that using a Gist.

In either case I include links to those transcripts in the commit message when I save the finished tool to my repository. You can see those in my tools.simonwillison.net colophon.

Go forth and build

I've had so much fun exploring the capabilities of LLMs in this way over the past year and a half. Building these tools has been invaluable in helping me understand both the potential for building tools with HTML and the capabilities of the LLMs that I'm building them with.

If you're interested in starting your own collection I highly recommend it! All you need to get started is a free GitHub repository with GitHub Pages enabled (Settings -> Pages -> Source -> Deploy from a branch -> main) and you can start copying in .html pages generated in whatever manner you like.

Bonus transcript: Here's how I used Claude Code and shot-scraper to add the screenshots to this post.

Tags: definitions, github, html, javascript, projects, tools, ai, webassembly, generative-ai, llms, ai-assisted-programming, vibe-coding, coding-agents, claude-code
