
I Found A Terminal Tool That Makes CSV Files Look Stunning


You can totally read CSV files in the terminal. After all, a CSV is just a text file. You can print it with cat and format it with the column command.

Usual way: displaying a CSV file in tabular format with the cat and column commands

That works. No doubt. But it is hard to scan and certainly not easy to follow.
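For reference, the classic approach looks something like this. The sample file name and contents here are hypothetical, and note that column splits naively on commas, so quoted fields containing embedded commas will break the alignment:

```shell
# Create a small sample file (hypothetical contents)
printf 'title,year\nBohemian Rhapsody,1975\nBillie Jean,1982\n' > samplecsv.csv

# -s, sets the input separator to a comma; -t aligns the output into a table
cat samplecsv.csv | column -s, -t
```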

I came across a tool that made CSV files look surprisingly beautiful in the terminal.

Default view of a CSV file with Tennis
New way: Beautiful colors, table headers and borders

Looks gorgeous, doesn't it? That is the magic of Tennis. No, not the sport, but a terminal tool I recently discovered.

Meet Tennis: CSV file viewing for terminal junkies

Okay... cheesy heading, but clearly these kinds of tools are more suitable for people who spend considerable time in the terminal. Normal people would just use an office tool or a simple text editor for viewing CSV files.

But a terminal dweller would prefer something that doesn't force them to leave the terminal.

Tennis does that. Written in Zig, it displays CSV files gorgeously in a tabular layout, with plenty of options for customization and styling.

Screenshot shared on Tennis GitHub repo

You don't necessarily need to customize it, as it automatically picks nice colors to match the terminal. As you can see, clean, solid borders and playful colors are visible right up front.

📋
As you can see in the GitHub repo of Tennis, Claude is mentioned as a contributor. Clearly, the developer has used AI assistance in creating this tool.

Things you can do with Tennis

Let me show you various styling options available in this tool.

Row numbering

You can enable the numbering of rows on Tennis using a simple -n flag at the end of the command:

tennis samplecsv.csv -n
Numbered Tennis CSV file

This can be useful when dealing with larger files, or files where the order becomes relevant.

Adding a title

You can add a title to the printed CSV file on the terminal, with a -t argument, followed by a string that is the title itself:

tennis samplecsv.csv -t "Personal List of Historically Significant Songs"
CSV file with added title

The title is displayed in an extra row on top. Simple enough.

Table width

You can set a maximum width for the entire table (useful if you don't want the CSV file to occupy the entire width of the window). To do so, use the -w flag, followed by an integer specifying the maximum number of characters you want the table to occupy.

tennis samplecsv.csv -w 60
Displaying a CSV file with a maximum table width

As you can see, compared to the previous images, this table has shrunk considerably. Its width is now capped at 60 characters, no more.

Changing the delimiter

The default character that separates values in a CSV file is (obviously) a comma. But sometimes your file uses another character, like a semicolon or a $; it can be pretty much anything, as long as every row has the same number of columns. To print a CSV file that uses "+" as a delimiter instead, the command would be:

tennis samplecsv.csv -d +
Tennis for CSV file for a different delimiter

As you can see, Tennis parses the file correctly once the delimiter is specified in the command.

Color modes

By default, as mentioned on the GitHub page, Tennis likes to be colorful. But you can change that with the --color flag, which accepts on, off or auto (auto mostly means on).

tennis samplecsv.csv --color off
Tennis print with colors off

Here's what it looks like with the colors turned off.

Digits after decimal

Sometimes CSV files contain high-precision floats with many digits after the decimal point. If you only want to see a limited number of them in the output, use the --digits flag:

tennis samplecsv.csv --digits 3
CSV file with number of digits after decimal limited

As you can see in the CSV file printed with cat earlier, the rating numbers have many digits after the decimal point, all more than three. Specifying the number of digits makes Tennis truncate them.

Themes

Tennis usually picks the theme from the colors being used in the terminal to gauge if it is a dark or a light theme, but you can change that manually with the --theme flag. Since I have already been using the dark theme, let's see what the light theme looks like:

Tennis light theme

Doesn't look like much at all in a terminal with the dark theme, which means it is indeed working! The accepted values are dark, light and auto (which again, gauges the theme based on your terminal colors).

Vanilla mode

In vanilla mode, numerical formatting is disabled entirely when printing the CSV file. As you can see in the images above, the year rather annoyingly appears with a comma after the first digit, because Tennis wrongly assumes it is an ordinary number and not a year. But if I run it with the --vanilla flag:

tennis samplecsv.csv --vanilla
Tennis usage with numerical formatting off

The numerical formatting is turned off. This works similarly for any other kind of numbers in your CSV file.

Quick commands (that you are more likely to use)

Here are the most frequently used options I found in Tennis:

tennis file.csv # basic view
tennis file.csv -n # row numbers
tennis file.csv -t "Title"
tennis file.csv -w 60
tennis file.csv --color off

I tried it on a large file

To check how Tennis handles larger files, I tried it on a CSV file with 10,000 rows. There was no stutter or long pause while processing the command. Performance will obviously vary from system to system, but Tennis doesn't seem to hiccup even with larger files.

That's just my experience. You are free to explore on your system.

Not everything worked as expected

🚧
Not all the features listed on the GitHub page work.

While Tennis looks impressive, not everything works as advertised yet.

Some features listed on GitHub simply didn’t work in my testing, even after trying multiple installation methods.

For example, there is a --peek flag, which is supposed to give an overview of the entire file, with its size, shape and other stats. A --zebra flag is supposed to add an extra layer of alternating themed coloring. There are --reverse and --shuffle flags to change the order of rows, and --head and --tail flags to print only the first few or last few rows, respectively. There are more still, but unfortunately, they do not work.

Getting started with Tennis

Tennis can be installed in three different ways: building from source (obviously), downloading the executable and placing it in one of the directories in your PATH (the easiest), or using the brew command (which can indeed be easier if you have Homebrew installed on your system).

The instructions for all are listed here. I suggest getting the tar.gz file from the release page, extracting it and then using the provided executable in the extracted folder.

There is no Flatpak or Snap or other packages available for now.

Final thoughts

While the features listed on the help page work really well, not all the features listed on the website do. That discrepancy is a little disappointing, but something we hope gets fixed in the future.

So altogether, it is a good tool for printing your CSV files in an engaging way, to make them more pleasing to look at.

While terminal lovers find such tools attractive, they can also be helpful when you are reviewing data exported from a script or dealing with CSV files on servers.

If you try Tennis, don't forget to share the experience in the comment section.



AI has limits, even if many AI people can't see them


Towards the end of his new book, The Irrational Decision, Ben Recht explains what he has set out to do.

Most books on technology either take the side that all technology is bad, or all technology is good. This isn’t one of those books. Such books focus too much on harms and not enough on limits. Limits are more empowering. Throughout the book, I’ve maintained that mathematical rationality is limited in what kinds of problems it is best placed to solve but has sweet spots that have yielded remarkable technological advances.

It may be that more books on technology escape the good-bad dichotomy than Ben allows. Even so, I haven’t read another book that is nearly as useful in explaining why and where the broad family of approaches that we (perhaps unfortunately) call AI work, and why and where they don’t. Ben (who is a mate) combines a deep understanding of the technologies with a grasp of the history and ability to write clearly and well about complicated things. I learned a lot from this book. Very likely, you will too.


The good-bad dichotomy that Ben describes does indeed shape a whole lot of our current debate around “mathematical rationality” and AI. Regarding the first, Nate Silver’s book On The Edge argues for the kinds of Bayesian rationality that Silicon Valley people like to talk about. It praises the “River” of people who think about the world in terms of statistical probabilities, which you update whenever new information becomes available. As Ben suggests in a separate review essay with Leif Weatherby, the “River” wraps professional poker, rationalist thinking about AI, sports betting and crypto bro philosophizing together into a single package that appears sort-of-coherent, and even perhaps brilliant, if you don’t look at it too closely. As Ben suggests, rationalists of this persuasion tend to assume that “computers can make better decisions than humans,” and are often fervent cheerleaders for AI (Silver, in fairness to him, isn’t nearly as fervent as some others). Other books, like Emily Bender and Alex Hanna’s The AI Con, begin from just the opposite assumption: that most of what we call “AI” is hype. Bender and Hanna tell us that if we start poking around behind the grand spectacle and booming voice of “mathy math,” we will find the rather unimpressive wizard of machine learning, who is actually only capable of fancy spell-check, telling radiologists which parts of an image they might want to take a look at, and other such “well scoped” activities.

Neither AI Rationalism nor AI-Con Thought is all that helpful in explaining the technologies we confront right now. The former tends to launch into fantasy, repeatedly demonstrating how starting from ridiculous premises allows you to reason your way to ridiculous results. The latter tends to curdle into denialism, claiming ever more loudly that disliked technologies are useless even as they find ever more uses. We ought to be much more worried about the claims of the triumphalists than the denialists, since they are far more influential. But to successfully deflate their claims, we need a more grounded perspective on what AI and related technologies are capable of than can be provided by the denialists.

The Irrational Decision provides strong reasons for skepticism about the grander aspirations of the rationalist project, while explaining why machine learning has remarkable uses in its appropriate domain. Those who are embroiled most closely with the rationalist project have a hard time understanding its limits because those limits shape their own world view. The one weird trick of rationalism is to recompose complex problems in terms that can readily be rationalized. When that is good, it is very, very good, but when it is bad, it is horrid. To understand this, it’s first necessary to understand where rationalism comes from.

*******

Much of the discussion of The Irrational Decision is historical. It reaches back to the 1940s and 1950s to figure out where rationalism actually comes from, providing a short history that is a little like what Erickson et al’s How Reason Almost Lost Its Mind might have been if it focused more on statistics and operations research than economics. Ben’s aim in all of this is to identify how ‘mathematical rationality’ came to be a relatively coherent set of ideas about how we might better organize society.

The story he tells is necessarily messy, but some important broad themes emerge, most importantly around the development of optimization theory. Linear programming makes it possible to find optimal ways to allocate resources within a limited budget so long as the constraints are linear (when they are not, all computational hell can break loose). Optimal control theory allows a control system to adjust optimally to its environments (again, under restrictive assumptions about the constraints). Game theory can postulate - and often even discover - optimal strategies to play against opponents in strategic situations. These toolkits overlap with others. A family of techniques, ranging from simulated annealing to the ancestral forms of the gradient descent/backpropagation that “deep learning” relies on, provides ways to discover superior local optima in more complex situations. Randomized clinical trials (RCTs) provided possible ways to discover whether a given intervention (a drug; a policy measure) worked or not.

All of these approaches suggest the superiority of technical forms of analysis over human judgments. RCTs apply protocols and statistical analysis to try to discover causal relationships (according to the standard story), or justify interventions (according to Ben’s). Other approaches involve the discovery of optimal solutions, given convenient mathematical assumptions and simplifications. Others still involve the discovery of local optima (that is: solutions that are better than others that are readily visible in their neighborhood), which may be better than those that ordinary humans could reach.

Rationalist approaches are very powerful in their domains of proper application, but you need some sense of what those domains are. Ben suggests that there is a “sweet spot” for many or most computational tools. For example, statistics is not useful for situations where a treatment always works (why would you need complicated tools of inference), or where outcomes are too variable and unpredictable, but for the messy zone between the two. When you hit the space where your tools have traction on reality despite their imperfections, you can accomplish extraordinary things. For example, in his own review of the book, Dan Davies talks about

the incredibly productive feedback loop between “optimisation algorithms are really demanding in terms of computer processing” and “optimisation algorithms are really useful for designing better and faster computers”.

As Ben describes it, designers were able to reduce the incredibly complex challenges of chip design into an optimizable task through making simplifying assumptions, about “standard cells” and combining them with simulated annealing algorithms that could discover optima that would otherwise not be easily visible. This, then, as per Dan, allowed faster chips to be developed, which in turn could run more powerful algorithms, and so on, in a loop.

But treating rationalism as a universal tool of discovery is problematic, especially given that these techniques are characteristically limited or start from implausible simplifying assumptions. Daphne Koller, one of the researchers who Ben describes, discovered some startlingly effective ways to reduce the complexity of poker so that it became more nearly “solvable.” But Koller eventually abandoned the study of game theory:

“Understanding the world around us is more important than understanding the optimal way to bluff,” she told me. In her experience, when she needed to model people in simulations of complex systems, modeling their decisions as random got her 90 percent of the way to a solution. How to best make decisions under wide-ranging uncertainty was far less cut-and-dried. For Koller, once you stepped away from the game board and had to make decisions in reality, understanding uncertainty and the myriad ways it could arise and impact plans was more important than strategy.

As it turned out, poker algorithms too generated feedback loops, not through simplifying chip design, but simplifying human beings (C. Thi Nguyen’s book, The Score provides a broader account of how this works). There is an important sense in which optimal poker theory was less successful in optimizing poker than in optimizing poker players, inspiring a style of play in which professionals “started memorizing expected value tables from poker solvers so that they could play ‘game theory optimal’ in big poker tournaments.” Perhaps that can be described as an improvement in human affairs. I’m not seeing it myself.

*******

Understanding mathematical rationalism helps us understand the strengths and limitations of AI. It isn’t just a form of rationalism, but the combined application of a variety of long established rationalist techniques - neural nets (which go back to the 1950s), statistical learning and backpropagation, made possible by more powerful computers and enormous amounts of readily available data. Claude Shannon’s methodology for modeling language, which is the intellectual basis of “large language models” is “an instance of statistical pattern recognition” or machine learning. And machine learning itself is no more and no less than a powerful statistical tool. I found this passage maybe the most clarifying explanation of what it does that I’ve ever read.

To frame the prototypical machine learning problem, I like to think about a hypothetical spreadsheet. Each row of the spreadsheet corresponds to some unit or example. But I don’t care what the units mean. I just know that I have a bunch of columns filled in with data. And I’m told one of the columns is special. I am about to get a load of new rows in the spreadsheet, but someone downstairs forgot to fill in the special column. Management has tasked me with writing a formula to fill in what should be there. For whatever reason, I don’t get to see these new rows and have to build the formula from the spreadsheet I have. The formula can use all sorts of spreadsheet operations: It can assign weights to different columns and add up the scores, it can use logical formulas based on whether certain columns exceed particular values, it can divide and multiply. … I’ll do an experiment. I’ll take the last row of my spreadsheet and pretend I don’t have the special column. I’ll write as many formulas as I can. … But why single out that last row? I can do something similar for every row! I’ll invent a set of plausible functions. I’ll evaluate how well they predict on the spreadsheet I have. I’ll choose the function that maximizes the accuracy. This is more or less the art of machine learning.

Guessing the missing rows of spreadsheets and optimizing turns out to have a lot of useful applications: not just language models, but protein folding, recognizing handwriting and a myriad of other applications. Equally, machine learning is just another form of optimization and/or prediction. Very large chunks of Silicon Valley’s current business model involve taking complex situations that don’t look like optimization or prediction problems, simplifying and redescribing them and then finding solutions.

Just like statistics, there is a “sweet spot” for machine learning. It is not useful for situations where you have a genuinely clean mathematical abstraction, which you can turn into running code. Nor is it useful for situations that are too messy or complicated to be predictable (it is, after all, an application of statistical technique). You want to use it in the intermediary situations where there isn’t an obvious neat solution, but where the clunky and computationally expensive techniques of machine learning can discover a useful approximation, even if you may not understand quite what it is based on or how it works.

*******

All this implies some important problems of evaluation. How can you tell where machine learning is a useful way to proceed? How can you tell which machine learning approach is the best one to apply for a given problem? And behind all this lurks the bigger question that we began with. How can you tell when machine learning techniques in general (or other rationalist shortcuts) are better or worse than ordinary human judgment?

The answer to the first is unfortunately indeterminate. As best as I understand Ben’s argument, the only real way to discover whether machine learning works for a given kind of problem is to come up with a working machine learning solution. There is no genuinely satisfactory ex ante way to distinguish between the problems that machine learning can solve for, and those that they can’t. Furthermore, as Ali Rahimi and Ben have noted elsewhere, AI practitioners rely more on “alchemy” than a deep understanding of why some approaches work and others don’t. More succinctly, XKCD:

The pile gets soaked with data and starts to get mushy over time, so it's technically recurrent.

As for how to tell which machine learning algorithms work better than others, computer scientists have come up with an approach commonly called the Common Task Framework (or variants thereon). Create a common dataset (canonically: photos of cats and dogs) and share all (or, far more usually these days, some) of it with different teams of researchers. Then come up with a common task that can be performed on the data and evaluated in a fairly straightforward way (can the algorithm distinguish between cats and dogs?). The different teams then come up with algorithms that compete against each other, perhaps tested on data that has not been shared publicly, to ward against overfitting and teaching to the test. The algorithm that works best (say, has the highest percentage accuracy in distinguishing between cats and dogs) is, ipso facto, the best algorithm for the task.

And this gets us to one of the major contributions of Ben’s book. A lot of people in AI claim that we can apply this framework to answer a very big question. Are AI algorithms generally superior to human beings at performing some set of cognitive tasks? There are a variety of common task framework tests that purport to do this, some with names that … beg questions. If you are hanging around the right (or wrong) places on the Internet, you will regularly read this or that excitable claim that humanity is doomed to be superseded because of the performance of AI on this or that test.

Ben suggests that such claims tend to make a fundamental error. He describes some famous results from the research of psychologist Paul Meehl on medical and other decisions, which suggested that "statistical prediction provided more accurate judgments about the future than clinical judgments" under certain conditions. But the conclusion that Ben comes to is not that statistical prediction is generally better than expert judgment. Instead, it is better when there are clearly defined outcomes, good data, and clear reference cases that can be used for comparison. There are many situations in which this is not true: some can readily be made true, but others cannot.

If we use common task type approaches to measure success, we are loading the dice in favor of those tasks that can be described in terms of clear outcomes, and tested with good data, and loading them against those tasks that do not have such nice characteristics. Ben describes this even more pungently. Tasks that can be defined in those ways are definitionally the tasks that computer or other automated approaches will be able quickly to do better than human beings. Paradoxically:

If we can measure why humans might be able to outperform machines, then we can build machines to outperform people. On the other hand, if we can’t cleanly articulate a clean set of actions, outcomes, measurements, and metrics, then we can’t mechanize problem solving. It is this digitization, translating the world into the language of the computer, that is needed to automate.

The universe of tasks with clear goals, conditions and data is both the universe of tasks that are easily measured and the universe of tasks that computers and automated processes can carry out well. The one characteristic more or less predicts the other. This, then, is what makes it so hard for mathematical rationalists to see the limitations to their perspective. The tools and measures that they use to understand and solve problems could almost have been purpose crafted to confirm their broad intellectual biases by concealing the problems that their methods can’t easily solve.

*******

This helps us to situate the debate that is happening right now about AI. There are many AI enthusiasts, who believe that it can be applied to do pretty well any task that humans can do as well as the humans or better. Getting to this is just a matter of scaling and engineering, and is going to happen Real Soon Now. There are AI skeptics, who argue that its benefits are limited to a narrow range of well defined tasks, or even (I see the claim regularly, though it is rarely defended in any particularly sophisticated way) that the benefits are non-existent. These positions often map onto "AI good" and "AI bad," along the lines that Ben suggests.

As per the quote at the beginning of this post, Ben doesn’t really engage with the question of whether AI is good or bad in any general sense. Instead, he proposes that it can carry out many tasks, including tasks that we might not anticipate right now, but that there are limits. AI, like mathematical rationality more generally, has a sweet spot: problems that are complicated enough that they can’t be solved by other computationally cheaper approaches, but that have enough regularities to be workable. Within that sweet spot, it can do extraordinary things. Outside the sweet spot, it may be redundant or completely useless. And there is an ambiguous zone in between, where it can do stuff but imperfectly.

It isn’t possible, except in very general terms, to define ex ante what the sweet spot is. Clever engineers are perpetually trying to expand it. Self-driving cars provide one example of a problem that has proved far harder to solve than engineers thought (as Ben puts it, "we don't know how to articulate 'good driver' into a clean statistical outcome"), but they are brute-forcing the problem so that self-driving is far more plausible across different environments than it used to be. Equally, there are many, many edge cases. One way to deal with many of them might be to try to simplify them out of existence (through e.g. having only self-driving cars, without the unpredictabilities of idiosyncratic human drivers, or cyclists, or … or … or). Such simplification is a version of what management cyberneticists call ‘variety reduction.’

Equally, there are challenges that appear to be fundamentally resistant to mathematical rationality, including bureaucracy and politics:

societies are not computer chips. While I noted in chapter 2 that computer chips were often analogized as microscopic cities, chips were always designed to be hermetically sealed and perfectly controlled. This is what made them optimizable. Real societies, on the other hand, had people. While it’s convenient to model and view the population, its health, and its market flows as mathematical abstractions, these run into the limits of the messiness that people bring to bear.

In The Sciences of the Artificial, Herbert Simon makes a closely related argument:

When we come to the design of systems as complex as cities, or buildings, or economies, we must give up the aim of creating systems that will optimize some hypothesized utility function, and we must consider whether differences in style of the sort I have just been describing do not represent highly desirable variants in the design process rather than alternatives to be evaluated as “better” or “worse.” Variety, within the limits of satisfactory constraints, may be a desirable end in itself, among other reasons, because it permits us to attach value to the search as well as its outcome—to regard the design process as itself a valued activity for those who participate in it.

We have usually thought of city planning as a means whereby the planner’s creative activity could build a system that would satisfy the needs of a populace. Perhaps we should think of city planning as a valuable creative activity in which many members of a community can have the opportunity of participating—if we have wits to organize the process that way.

As per James Scott’s Seeing Like a State, the problems begin when technocrats begin to treat human beings and the complex societies they create as though they were simplified “standard cells” that can readily be re-arranged in more optimal patterns. Moreover, as Ben says elsewhere (Cosma and I quote this in our own forthcoming piece on AI and bureaucracy), political disagreement generally resists optimization. When you have incommensurable tradeoffs (even very simple ones: should you use money in your budget to pay for a playground to make parents happy or a fire station to make it less likely that businesses will burn down), you have moved decisively away from the kinds of problems that machine learning, or optimization more generally, can simplify in useful ways.

As soon as we can’t agree on a cost function, it’s not clear what our optimization machinery … buys us. Multi-objective optimization necessarily means there is a trade-off. And we can’t optimize a trade-off.

Barring the development of radically different approaches, there is no reason to believe that politics will come into the sweet spot. But many mathematical rationalists argue otherwise (e.g. this set of claims, which maybe deserve their own extended response). If you want to really understand the limits on AI, you owe it to yourself to read Ben’s book. There are many books on technology that are smart in some sense, but very few that are wise. This is one of those few.



The Image Boards of Hayao Miyazaki

A proto-Totoro image board by Hayao Miyazaki, courtesy of The Art of My Neighbor Totoro

Welcome! This is a new issue of the Animation Obsessive newsletter, and here’s our plan this Sunday:

  • 1. Miyazaki’s concept sketches.

  • 2. Animation newsbits.

Now, let’s go!

1. Ideas on paper

Watching The Boy and the Heron, back in 2023, wasn’t like any theater experience we’ve had. We were a little speechless when we stood up to go — the credits rolling, white letters on blue. A stranger seated toward the front row had clapped at the ending. Mostly, people were quiet.

At age 82, Hayao Miyazaki had reinvented himself again. It was hard to find the director of My Neighbor Totoro in The Boy and the Heron, just as the link between Totoro and The Castle of Cagliostro (1979) had been faint. Miyazaki’s changed and adapted since his career began at Toei Doga in 1963, more than six decades ago.

What’s stayed consistent is his habit of sketching ideas. His “image boards.”

“An image board is something drawn to prepare for a work,” Miyazaki once explained. They aren’t storyboards — they’re for loose ideas, not strict continuity. He did his first image boards at Toei: “I myself started naturally [drawing them] with Horus.”1

Horus: Prince of the Sun (1968) was the debut film by the late Isao Takahata — Miyazaki’s close friend, rival, sounding board, foil and in some ways mentor. Miyazaki was a major artist on Horus, and one of several who drew its image boards. As he said:

Because they decided the overall feel of the film, and were the material that determined the direction of the story, it was necessary to draw as many as possible. I drew quickly and simply with a pencil and glossed it over with a single color — because it was the process of searching for a direction, I didn’t want to expend a lot of effort on each piece. I used to draw larger, but it got more and more troublesome, so they became smaller and smaller.

Some of Miyazaki’s image boards for Horus: Prince of the Sun (1968), courtesy of Little Norse Prince Valiant Roman Album

Loan words from English make up the Japanese term “image board” (imēji bōdo) — but the meaning of “image” drifted in translation. In Japanese, imēji refers to something more like a mental image, an impression, an idea.

Which is to say that an image board is concept art.

That’s how Miyazaki’s endless Horus pieces were used. He noted that they were stuck to the studio’s walls to give everyone a feel for the project. “People who were participating in the preparation and those who weren’t could freely take a look at it, and go, ‘This is going to be interesting,’ or, ‘This is no good,’ ” Miyazaki remembered.

His Horus drawings could be rough, as if he’d jabbed them onto the page in a frenzy. He was young. His fellow artists on the film, like Yoichi Kotabe, often did cleaner work. But Miyazaki’s sketches had an energy, a range and a cinematic eye that jumped out. Recalling his own time on Horus, Yasuo Otsuka wrote, “[N]o matter how much I drew, my drawing skill couldn’t match Miyazaki-san’s.”2

Soon, Miyazaki’s drawing ability matured further, and that roughness became a purposeful looseness. Horus began in 1965 and premiered in ‘68, and Miyazaki remained at Toei Doga for a few more years. He contributed concept art to Animal Treasure Island (1971) and others — visibly growing as an artist.

Miyazaki image boards for Animal Treasure Island (1971). Courtesy of Future Boy Conan: Film 1/24 Special Issue.
Miyazaki image boards for Pippi Longstocking. Courtesy of The Phantom Pippi Longstocking.

By 1971, when Miyazaki left Toei to work on Takahata’s unmade Pippi Longstocking, his image boards had real charm and warmth. The art doesn’t aim for perfection, but the character and atmosphere in it make the project feel real.

Image boards like these poured out of Miyazaki during the 1970s. He drew them for actual productions, pitches, pipe dreams. He was establishing himself as an expert at inventing and fleshing out animated worlds — initially, Takahata’s worlds. Along the way, he openly reused and reimagined his own past sketches. Tons of visual ideas from Pippi were recycled outright in Takahata’s Panda! Go, Panda! (1972).

A Miyazaki memory of the Panda era:

… when I read the wording of the proposal that Isao Takahata-san had quickly written up, I felt my heart swell in anticipation — I could create a wonderful world. The excitement remained without dissipating.3

Miyazaki image boards for Panda! Go, Panda! — courtesy of Panda! Go, Panda! Fan Book

Beginning with Heidi: Girl of the Alps (1974), Miyazaki’s main work under Takahata shifted to layouts. These set up the camera, backgrounds and animation, and the workload was so huge that Miyazaki couldn’t contribute to the creative side. “Heidi was entirely Paku-san’s world,” Miyazaki said, using Takahata’s nickname. He was often too busy drawing to attend Heidi’s meetings.4

On Takahata projects like 3,000 Leagues in Search of Mother (1976), Miyazaki grew more and more frustrated. They felt like a grind to him. “After Heidi, I didn’t have my whole heart in my work,” he said. Takahata’s shows were moving into naturalism and objectivity, and away from the fantasies of Pippi and Panda.5

In this era, Miyazaki felt that he “lost sight of [his] own themes.” On the side, though, he kept coming up with ideas and putting them down in image boards, even if they didn’t wind up on TV.

Around then, in the mid-1970s, he sketched a certain visual for My Neighbor Totoro. It was the bus stop scene in the rain.6

Miyazaki dreamed of making something cartoony, fun and wild again, and it manifested on the page. Totoro was born as an extrapolation of Panda. “Panda is a very big-hearted, easygoing character,” Miyazaki later wrote. “He makes those around him happy just by being there, without doing anything in particular. In that respect Totoro and Panda are similar for me.”

From there, Miyazaki’s image boards grew, and he became a better and better artist. Yet, at this early stage of his career, he’d already used them to define his most iconic visual.

Early image boards for the project that would become My Neighbor Totoro. Courtesy of The Art of My Neighbor Totoro.
Image boards for Future Boy Conan, courtesy of the Conan Blu-ray

Ultimately, Miyazaki’s frustrations forced him to quit working with Takahata. He moved to the TV series Future Boy Conan (1978), his directorial debut. And a torrent of his suppressed ideas emerged.

He drew an almost scary number of image boards for Conan, all zany, off-the-wall creativity. His goal was to build on the bright-eyed fantasies of his childhood years. Back then, animation carried the old-fashioned name manga eiga in Japan, and its representatives were cartoons like Fleischer’s Mr. Bug Goes to Town (1941).

“[W]hen I worked on Future Boy Conan, I did not try to make ‘animation’ as we usually think of it, but a manga, or a cartoon film,” he noted.7

Although satisfying to do, Conan was a middling performer in ratings. Miyazaki bounced from there to his first feature, The Castle of Cagliostro — another attempt at manga eiga. It flopped. He later recalled:

The Castle of Cagliostro was like a clearance sale of all I had done on Lupin and during my early Toei days. I don’t think I added anything new. I can understand why people who had followed my work were extremely disillusioned. You can’t use a sullied middle-aged guy to create fresh work that will wow viewers. I realized I should never do this again. Neither did I want to. Even so, I did two more (television series New Lupin episodes 145, 155), and it was hell. With every piece I made it was obvious that I was just trotting out everything I had done before. [laughs] Nineteen eighty was my year of being mired in gloom.

Miyazaki image boards for The Castle of Cagliostro (1979), courtesy of Film 1/24

Miyazaki was about to turn 40, and he felt washed up. But his creativity hadn’t really run out. In a sense, it was only then being born.

His image boards continued in this era: a thousand ideas raced and morphed in his head. As mentioned in Nausicaä of the Valley of the Wind: Watercolor Impressions, “The drawings Miyazaki accumulated between 1980 and 1982 formed the basis of all his work, from Nausicaä of the Valley of the Wind to Princess Mononoke.”

Some of these drawings appeared in the book Hayao Miyazaki Image Board (1983), published when Miyazaki’s career was still struggling. He was involved in the Sherlock Hound TV series in the early ‘80s — it fell apart after a few episodes, only to be finished by another team. And he got sucked into the vortex of Little Nemo, a feature film that Hollywood wanted to make in collaboration with Japan.

“I ended up trying out all the different motifs I’d been carrying around with me,” Miyazaki said of Nemo.8 Each and every one of them was rejected. But, in the process, he even further developed the ideas in his image boards. This new trove started to be unleashed in Nausicaä (1984), a film far removed from the manic adventure-comedies he’d gotten known for.

Once again, his evolution showed in his sketches before it hit the screen. Miyazaki had gained new reference points after Cagliostro — like Heavy Metal magazine artists Mœbius and Richard Corben (Rowlf). Their stuff, with its underground-comix leanings, seeped into his image boards across the early ‘80s.9

Around the same period, he saw the animation of Frédéric Back and Yuri Norstein, in which he felt a richness beyond anything he knew from mainstream Japanese animation. It made him feel inadequate, and pushed him. The sense that his films were “manga,” or cartoons, began to put him “in a bad mood.”10

When Nausicaä reached theaters, it wasn’t simply a cartoon anymore. And that turn had started in his image boards. Soon, one of Miyazaki’s stray sketches of a floating castle, done in the early ‘80s, became the blueprint for Castle in the Sky (1986). In My Neighbor Totoro, two years later, even his bus stop drawing was realized on film — just deeper and richer now.

Image boards for Sengoku Majo, an early-1980s grouping of ideas that Miyazaki later turned into everything from Nausicaä to Castle in the Sky and beyond. Note the U-shaped arrowhead in the bottom image. Courtesy of Nausicaä Prehistory and Nausicaä of the Valley of the Wind: Watercolor Impressions.
Image boards for another proto-Nausicaä idea — Kushana and the Earth Dragon. Courtesy of Nausicaä Prehistory and Nausicaä of the Valley of the Wind: Watercolor Impressions.
A couple of Miyazaki image boards for Little Nemo. Courtesy of the book The Aspirations of Little Nemo by Yasuo Otsuka.

Since the 1960s, Miyazaki has kept to the same pencil-and-watercolor approach, plus the occasional inks, for his image boards. It’s casual, comfortable and fast. To him, he’s just jotting things down. As he said in the ‘90s:

… the idea is not to spend an infinite amount of time creating these things, but to do them as quickly as possible. It’s a totally different approach than most people use when painting with transparent watercolors. In what is completely my own style of doing things, I first draw in pencil, and then quickly trace over that with watercolor, all the while trying to render as many images as possible, with as little effort as possible [laughs], as fast as possible.11

Every Miyazaki movie originates from this jotting habit: his collection, reuse and exploration of ideas on paper. In one proto-Nausicaä image board, you find the same U-shaped arrowhead that appeared over 40 years later in The Boy and the Heron.

He’s always been self-deprecating about the art itself. In the ‘00s, he joked about its crudeness in his guide to watercolors. “Forty years of nothing but this!” yells his pig character. Then a caterpillar mocks him: this method is “all he can do.” A dog later says, “I wonder if he could paint a little more properly.”

But the energy of Miyazaki’s image boards — the loose pencil lines drawn with no erasing, the splashes of watercolor thrown around the page — shouldn’t be underestimated. At Toei, his concept art was sometimes outdone by others’ work. In the end, he took the lead. By the late ‘80s, a powerfully assured technique was visible in these sketches he tossed off so quickly.

Assorted image boards for Princess Mononoke. Courtesy of The Art of Princess Mononoke.

There’s a feeling that the Miyazaki of Princess Mononoke, then in his 50s, could draw whatever he needed to draw. That was true in his storyboards as well. Yet so many of his visuals were image boards first, concept sketches on his sketch pile.

In 2001, the year Spirited Away premiered, Miyazaki turned 60. He’d felt washed up 21 years before, but Spirited Away became his biggest hit to date, and his most chaotically creative film. A lot of its success was right there in pencil and watercolor, where Miyazaki’s ideas had never been wilder or more concretely rendered. You saw the same in his sketches for Howl’s Moving Castle and Ponyo.

Every line and form is wobbly, but the impression is sharp, exact and effortless. The artist from Horus, all those decades before, was gone. The page was now an open conduit for Miyazaki’s imagination.

Image boards for Spirited Away. Courtesy of The Art of Spirited Away.
Miyazaki image boards for The Boy and the Heron, courtesy of The Art of The Boy and the Heron
More image boards for The Boy and the Heron, courtesy of The Art of The Boy and the Heron

When The Boy and the Heron started, Miyazaki was in his mid-70s. The film took seven years. In 2023, ex-Ghibli animator Kenichi Yoshida said in an interview that he still met with Miyazaki to chat from time to time. It was different from before, though. “Miyazaki… is an old man now; he’s the same generation as my parents,” said Yoshida.

And yet the power of Miyazaki’s ideas had, in many ways, never been stronger. His image boards for The Boy and the Heron show a master’s touch, an absolute clarity of imagination. It’s not the phantasmagoria of Spirited Away, but he could still mesmerize with a sketch, and that lifted the film. An anecdote from the production proves it.

When producer Toshio Suzuki was creating the Japanese poster for The Boy and the Heron, something clicked. It wasn’t just him — even Miyazaki agreed, for once. Like Suzuki said a few years ago:

… I’ve been doing movies since, what, Nausicaä of the Valley of the Wind, and that poster is the first thing Hayao Miyazaki has ever really praised me for. He said, “Suzuki-san, this is amazing.” … He said it was the best I’ve done. That served as a hint. “So, let’s just go with this!” So, no trailers. Absolutely no TV spots. We’ll do it all. No newspaper ads!12

Suzuki’s mysterious, unplaceable poster helped to lead The Boy and the Heron to Ghibli’s biggest opening in Japan. The film went on to sweep the world. And what appeared on the original Japanese poster is a small snippet — carefully cropped and zoomed — of a Hayao Miyazaki image board.13

2. Newsbits

  • We lost Barry Caldwell (68), the veteran storyboarder.

  • There’s discourse in Nigeria about state support for artists. It turns out that “over $600 million in funding is available to Nigerian animators and comic creators, yet the majority of studios have not applied.” See The ACE for more.

  • In America, some of the government’s moves last year to defund PBS and NPR were overturned by the courts.

  • The Palestinian artist Ahmad Adawy won a Mahmoud Kahil Award for his illustrations. His former animation company, Cube Studio, was based in Gaza.

  • There’s a government plan in Armenia to revive the dormant Armenfilm — a vital studio in the country’s Soviet era. Animator Robert Sahakyants made classics like Wow, a Talking Fish! there in the ‘80s.

  • Last weekend in Cuba, the children’s animation workshops hosted by Academia Animaluz continued, despite a power outage due to the American blockade. (For more on current life in Cuba, see these accounts.)

  • Scholar Pavel Shvedov wrote about the loss of the “middle generation” in Russian auteur animation. The recent Suzdalfest was populated mostly by newcomers and seasoned veterans; many of the artists in between have left the country or switched focus.

  • An American theater, Metrograph in Manhattan, is screening Czech animation in April — including The Pied Piper and The Revolt of the Toys.

  • We revealed a few months ago that the lost Mexican feature Roy from Space has been found and is being restored. Deaf Crocodile is now crowdfunding the final parts of the release, and there’s a trailer with footage.

  • Fonzieland did an interesting profile on artist Teri Hendrich Cusumano, who spearheaded labor reforms in American animation but has since left the industry.

  • Last of all: a flashback to the Disney Channel’s miniature festival of animation from around the world.

Until next time!

1. From the book Hayao Miyazaki Image Board (1983), used as a source for all of Miyazaki’s comments on Horus.

2. See Otsuka’s essay in Little Norse Prince Valiant Roman Album.

3. From Starting Point (“Panda in Process”). We also used “Panda! Go, Panda! Creator’s Message” from the same collection.

4. See the interviews with Toshitsugu Saida and Miyazaki in Future Boy Conan: Film 1/24 Special Issue (1979). As Miyazaki said:

Until that time, whenever I worked together with Paku-san, without fail, we had detailed discussions about the storylines and the next steps to take, and as we laid there talking and arguing, a common idea welled up in us, and we decided to draw it. That was how it went. However, since around Heidi, just the work on screen design and layout was taking too long. I left the story all to Paku-san, and I ended up just handling the storyboards that came in.

5. See Starting Point (“Miyazaki on His Own Works”), used a few times, plus Takahata’s article in Animage (August 1981) and his long interview about Anne in the book Thoughts While Making Movies.

6. See this article from All the Anime for details.

7. From Starting Point (“On Creating Animation”).

8. This is also from Nausicaä of the Valley of the Wind: Watercolor Impressions.

9. Miyazaki mentioned running into Jean Giraud’s work during 1980 in this interview. More recently, in the foreword to Rowlf and Other Fantasy Stories (2025), he wrote, “I chanced upon Rowlf around the time I was wondering what to do for my next work after I had completed Lupin III: The Castle of Cagliostro.”

10. See Miyazaki’s conversations with Teruo Harada in Monthly Out (September 1983) and Baku Yumemakura in Animage (February 1986).

11. Again from Starting Point (“The Pictures Are Already Moving Inside My Head”).

13. Today’s lead story is a revised and expanded reprint of an article that first ran in our newsletter, behind the paywall, on December 14, 2023.


Winners of the 2026 Kokuyo Design Awards

The Kokuyo Design Awards, arguably Japan’s most prestigious stationery design award, has been held for almost a quarter of a century now. Hosted by the 120-year-old stationery firm KOKUYO, the award receives close to 1,500 entries each year for new products that have yet to be commercialized, with winning concepts given the opportunity to become […]

Not Normal


A pair of broken-off statue legs, shod in Roman sandals, atop a cliff. Behind them, we see a futuristic city.

This week on my podcast, I read Not Normal, my latest Locus Magazine column, about the surreal and terrible world we’ve been eased into thanks to anti-circumvention laws.


If you were paying attention in 1998, you could see what was coming. Computers were getting much cheaper, and much smaller. From cars to toasters, from speakers to TVs, we were shoveling them into our devices, and it doesn’t take a lot of expense or engineering to add an “access control” to any of those computers.

That meant that DMCA 1201 was about to metastasize. Once you put a computer into a thermostat or a bassinet or a stovetop or a hearing aid, you can add an access control and make it a felony to use it in ways the manufacturer disprefers. You can make it illegal to use cheap batteries, or a different app store. You can add little chips to parts – everything from a fuel pump to a touchscreen – and make it illegal to manufacture a working generic part, because the generic part has to bypass the “access control” in the device that checks to see whether it’s the manufacturer’s own part.

MP3


CBP facility codes sure seem to have leaked via online flashcards


A user on Quizlet, an online learning platform, created a public flashcard set in February that appears to have exposed highly confidential information about security procedures in US Customs and Border Protection facilities around Kingsville, Texas.

The Quizlet set, titled “USBP Review,” was available to the public until March 20, when it was made private less than half an hour after WIRED messaged a phone number potentially linked to the Quizlet user. Though an individual with the user’s name was listed at an apartment address less than a mile from a Kingsville CBP facility, WIRED has not been able to verify that the flashcard set was created by an active CBP agent or contractor.

“This incident is being reviewed by CBP’s Office of Professional Responsibility,” a CBP spokesperson wrote in a statement to WIRED. “We will not be getting ahead of this review. A review should not be taken as an indication of wrongdoing.”
