
You’re in a computer literacy filter bubble


Skill-testing question: what’s wrong with this snippet?

user@host:~$ rm $FILENAME

You’re still reading. That means it doesn’t look scary. Many of you have already jumped to: this is a code snippet > this is a terminal command > this is probably Bash or POSIX shell > it’s deleting a file > $ doesn’t mean the file is expensive, it means $FILENAME is a variable > the variable is unquoted (!) > this snippet demonstrates one of the most basic and dangerous Bash footguns (see appendix)
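For anyone outside the bubble, here is a minimal, non-destructive sketch of that footgun; printf stands in for rm (so nothing is actually deleted), and the filename is invented:

FILENAME="My Important Notes.txt"

# Unquoted expansion: the shell splits the value on whitespace,
# so the command receives three separate arguments. An rm here
# would try to delete "My", "Important", and "Notes.txt".
printf '[%s]\n' $FILENAME
# [My]
# [Important]
# [Notes.txt]

# Quoted expansion: one argument, the file you actually meant.
printf '[%s]\n' "$FILENAME"
# [My Important Notes.txt]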

You’re still reading. Maybe you understood the above paragraph, or maybe not, but even if you didn’t, you were interested enough to try to understand it, and the fact you’re here means you can fire up a modern computing device, put in a password, connect to the internet, browse to an obscure tech blog, scroll, and absorb strange words into your brain.

You are not normal.

Someone I know started teaching a podcasting course at a better-than-average high school in one of the richest parts of one of the richest countries. After his first day, he told me, with wide eyes, that some of the students don’t know how to move or copy a file.

I was shocked. I thought we were the Luddites, and younger generations were running circles around us in the tech literacy department. Some of them are. And most of them are able to operate smartphones. But, copying a file? This is not something human beings are born knowing how to do, and, I now know, not everyone learns how in the natural course of their upbringing.

(Taking what we know for granted plagues all fields, and there’s a name for it: the curse of knowledge. And, of course, a relevant XKCD.)

My technical literacy filter bubble is Hacker News and Lobsters, where it seems like everyone is smarter and more hardcore than I am. I’ve dabbled in Python, Go, Rust, Bash, C, PHP, and JavaScript, but I don’t feel like a real programmer, because I mostly write scripts and websites. Real programmers are the ones that write compilers and desktop applications in Lisp and Zig, and run BSD or QubesOS. Real programmers are the people that are eternally one step ahead of where I am.

If your mother uses Linux, you’re probably in a computer literacy filter bubble.

If you have happy memories of booting from a treasured Live CD and playing Nibbles for Knoppix on a CRT monitor, and both parents programmed mainframes using punch-cards and FORTRAN, and you prefix anything that involves the terminal with “just”, you’re definitely in a computer literacy filter bubble.

Filter bubbles aren’t inherently bad. And there are countless filter bubbles that we aren’t in. I’m not in the mechanical competence filter bubble, and I bet most of us aren’t in the fashion filter bubble. Even if we wanted to, I’m not sure it’s possible to fully escape a tech literacy filter bubble while being tech literate and surrounding yourself with people who are more tech literate. But I think it’s healthy to deliberately poke our heads out of whatever bubbles we’re in once in a while, take in the view, and realize how different it is out there.

Those of us reading, and writing, blogs that routinely talk about writing shell scripts to save keystrokes, measuring brainwaves, preventing servers from crashing, and modifying the software running on embedded devices are so out of touch with the median level of technological literacy that we forget just knowing how to double-click is a privilege.

Note: this post is part of #100DaysToOffload, a challenge to publish 100 posts in 365 days. These posts are generally shorter and less polished than our normal posts; expect typos and unfiltered thoughts! View more posts in this series.


Do Commodities Get Cheaper Over Time?


This American Enterprise Institute chart, which breaks down price changes for different types of goods and services in the consumer price index, has by now become very widely known. A high-level takeaway from this chart is that labor-intensive services (education, healthcare) get more expensive in inflation-adjusted terms over time, while manufactured goods (TVs, toys, clothing) get less expensive over time.

But there are many types of goods that aren’t shown on this chart. One example is commodities: raw (or near-raw) materials mined or harvested from the earth. Commodities have many similarities with manufactured goods: they’re physical things that are produced (or extracted) using some sort of production technology (mining equipment, oil drilling equipment), and many of them will go through factory-like processing steps (oil refineries, blast furnaces). But commodities also seem distinct from manufactured goods. For one, because they’re often extracted from the earth, commodities can be subject to depletion dynamics: you run out of them at one location, and have to go find more somewhere else. In my book I talk about how iron ore used to be mined from places like Minnesota, but as the best deposits were mined out steel companies increasingly had to source their ore from overseas. And the idea of “Peak Oil” is based on the idea that society will use up the easily accessible oil, and be forced to obtain it from increasingly marginal, expensive-to-access locations.

(Some commodities, particularly agricultural commodities that can be repeatedly grown on a plot of land, don’t have the same sort of depletion dynamics, though bad farming practices can degrade a plot of land over time. Other commodities get naturally replenished over time, but can still get used up if the rate of extraction exceeds the rate of replenishment; non-farmed timber harvesting and non-farmed commercial fishing come to mind as examples.)

Going into this topic, I didn’t have a great sense of what price trends look like for commodities in general. Julian Simon famously won a 1980 bet with Paul Ehrlich that several raw materials — copper, chromium, nickel, tin, and tungsten — would be cheaper (in inflation-adjusted terms) after 10 years, not more expensive. But folks have pointed out that if the bet had been over a different 10-year window, Ehrlich would have won the bet.

To better understand how price tends to change for different commodities and raw materials, I looked at historical prices for over a hundred different commodities. Broadly, agricultural commodities tend to get cheaper over time, while fossil fuels have a slight tendency to get more expensive. Minerals (chemicals, metals, etc.) have a slight tendency towards getting cheaper, with a lot of variation — 15 minerals more than doubled in price over their respective time series. But this has shifted over the last few decades, and recently there’s been a greater tendency for commodities to rise in price.

Analyzing commodity prices

To get long-term commodity prices, I used a few different sources of data. For agricultural products, I used U.S. Department of Agriculture data, which has price data for various crops going back (in some cases) to the 19th century, and for various meats going back to 1970. For minerals, I used U.S. Geological Survey data, which gives historical statistics (including prices) for several dozen minerals, metals, and chemicals. For fossil fuels, I used the Statistical Review of World Energy, this dataset from Jessika Trancik’s lab, and datasets from the U.S. Energy Information Administration. Altogether, I looked at 124 different commodities.

Let’s start by looking at fossil fuels. The graphs below show the price of oil, natural gas, and bituminous coal in 2024 dollars.

The most visible pattern here is the large number of price spikes. The 1970s energy crisis, where prices rose by a factor of three or more and then declined, is the most obvious, but it’s far from the only one — there’s another huge spike in the early 2000s. There’s some tendency for fossil fuel prices to rise long-term, particularly post-energy crisis, but there’s a lot of variation. The price of natural gas, for instance, has generally been declining since the early 2000s, and all fossil fuels have long periods of time over which their price declines. In addition to its post-2000s decline, the price of natural gas declined from the 1920s through the 1940s, and the price of oil generally declined over the 100-year period from the late 1860s through the early 1970s.

Now let’s look at agricultural commodities. The graphs below show the inflation-adjusted price for 25 different crops grown in the U.S.

Here we see the same year-to-year variation in price as we do in fossil fuels, but unlike fossil fuels there’s also a very strong tendency for agricultural commodities to fall in price over time. 24 of the 25 crops examined have lower inflation-adjusted prices today than at the beginning of their time series (tobacco is the single exception). For 17 out of 25, the price decline is greater than 50%.

If we just look at recent price trends, the trend of falling prices is less strong, but still there. 20 of the 25 crops are cheaper today than they were in 1990. (Barley, oats, rye, and Durum wheat are more expensive. The sweet potato dataset stops in 2017, but the price was lower that year than in 1990.)

However, if you look just since 2000, the trend has reversed: only four crops (cotton, peanuts, tobacco, and sweet potatoes) fell in price in real terms since then.

What about meat? The graph below shows the inflation-adjusted price for pork, beef, and chicken over time in the U.S.

The price of chicken has declined since 1980. The price levels of beef and pork declined from 1970 to the mid-1990s, but since then they have been rising. The price of pork is up 15% since 1995, and the price of beef is up 41%.

Now let’s look at minerals. The graphs below show the price of 93 different mineral commodities — industrial metals like aluminum and steel, precious metals like gold and platinum, chemicals like nitrogen and hydrogen, and various other minerals such as graphite, bentonite, and gypsum.

And here’s high-value minerals:

We see a range of different price trends here: some minerals (gold, platinum, molybdenum) have risen in price over time, while others (aluminum, gypsum, magnesium) have gotten cheaper. But at a high level, the trend is towards commodities getting cheaper over time. Of these 93 mineral commodities, 60 of them got cheaper between the beginning and end of their time series. In 36 of the 93, the price decline was greater than 50%, and in 10 of them the decline was greater than 90%.

As with agricultural commodities, this trend has gotten weaker recently. For the 75 commodities which have data over the 1990 to 2020 period, only 39 (slightly more than half of them) have gotten cheaper over that period; the other 36 have gotten more expensive.

So at a high level, historically most commodities were getting cheaper over time, but in recent decades this has been significantly less true. Beef, pork, oil, natural gas, copper, construction sand, and phosphate for fertilizer are all commodities that formerly were consistently falling in price but for the last several decades have been getting more expensive.

Another way to get a sense of what commodity price trends generally look like is to do something similar to the Simon-Ehrlich wager and look at aggregate price changes over particular windows of time. To do this, I divided the dataset for each commodity into 20-year chunks (starting from 1860), and calculated an equivalent annual rate of change over each window. So for iron ore, which has prices from 1900 through 2021, there would be six price windows: 1900 to 1919, 1920 to 1939, and so on. The price for iron ore was $47.45 per ton in 1900 and $33.02 in 1919, giving an equivalent annual change of about -2% for that window.1 This gave me over 600 different commodity price changes, which are shown on the histogram below:
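As a quick sanity check on that arithmetic (a sketch, not a claim about how the underlying analysis was done), the equivalent annual rate is (end/start)^(1/years) - 1, and a shell one-liner with bc reproduces the iron ore figure:

start=47.45   # iron ore, $/ton, 1900
end=33.02     # iron ore, $/ton, 1919
years=19      # 1900 to 1919

# exp(ln(end/start)/years) - 1, using bc's math library (-l)
echo "scale=6; e( l($end/$start) / $years ) - 1" | bc -l
# prints roughly -.0189, i.e. about -2% per year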

Looking at all commodities together, we can see the overall tendency for prices to decline over 20-year periods of time, but plenty of periods where commodities rise in price. You can see this from the left-skew on the graph, indicating a greater tendency towards price declines than price rises. This tendency towards declining prices is true for each category of commodity except fossil fuels, which have a very slight tendency to rise in price over time.

However, if we just look at the most recent window of time, 2000 to 2020, we instead see a right-skew, a tendency towards rising prices:

Conclusion

To sum up: historically commodities have generally fallen in price over time, but recently this trend has increasingly shifted towards rising prices. Natural gas and oil got cheaper until the 1950s and the 1970s, respectively, and since then have gotten more expensive. Beef and pork both got cheaper from 1970 until the 1990s, and since then have risen in price. Agricultural products were almost uniformly falling in price until around 2000, and have almost uniformly risen in price since then.

My general sense looking at historical commodity price data is that the more that production of some commodity looks like manufacturing — produced by a repetitive process that can be steadily improved and automated, from a supply that can be scaled up in a relatively straightforward fashion, without being subject to severe depletion dynamics — the more you’ll tend to see prices fall over time. The biggest decline in price of any commodity I looked at is industrial diamonds, which fell in price by 99.9% between 1900 and 2021 due to advances in lab-grown diamond production. This effectively replaced mined diamonds with manufactured ones for industrial uses; roughly 99% of industrial diamonds today are synthetic. Many other commodities had major price declines that were the result of production process improvements — aluminum got cheaper thanks to the invention (and subsequent improvements) of the Hall-Héroult smelting process, titanium’s price declined following the introduction of the Kroll process, and so on. (Steel also got much cheaper following the introduction of the Bessemer process, but that predates USGS price data.) And of course agriculture, which has evolved from crops being harvested manually to being harvested with highly automated, continuous process machinery, closely mirrors the sorts of process improvements we see in manufacturing.

Of course, this trend alone can’t explain changes in commodity prices over time, and there are plenty of commodities — steel, cement, silicon — that are produced in a manufacturing-type operation but which haven’t seen substantially declining prices over their history. And even commodities which resemble manufactured goods have risen in price recently. More generally, there are plenty of things that can shift supply and demand curves to the right or left: cartels, national policies, a spike or collapse in demand, and so on. But the question of “how much, over time, does the production of this commodity resemble a manufacturing process?” seems like a useful lens on understanding the dynamics of commodity prices.

1

Because this calculation might depend on what the boundaries of the price window are, I did a sensitivity analysis by starting at a few different years and seeing how the outcome changed. Changing the window of time doesn’t change the results.


The AI-Powered Web Is Eating Itself


Suppose you’re craving lasagna. Where do you turn for a recipe? The internet, of course.

Typing “lasagna recipe ideas” into Google used to surface a litany of food blogs, each with its own story: a grandmother’s family variation, step-by-step photos of ingredients laid out on a wooden table, videos showing technique and a long comment section where readers debated substitutions or shared their own tweaks. Clicking through didn’t just deliver instructions; it supported the blogger through ads, affiliate links for cookware or a subscription to a weekly newsletter. That ecosystem sustained a culture of experimentation, dialogue and discovery.

That was a decade ago. Fast forward to today. The same Google search can now yield a neatly packaged “AI Overview,” a synthesized recipe stripped of voice, memory and community, delivered without a single user visit to the creator’s website. Behind the scenes, their years of work, including their page’s text, photos and storytelling, may have already been used to help train or refine the AI model.

You get your lasagna, Google gets monetizable web traffic and for the most part, the person who created the recipe gets nothing. The living web shrinks further into an interface of disembodied answers, convenient but ultimately sterile.

This isn’t hypothetical: More than half of all Google searches in the U.S. and Europe in 2024 ended without a click, a report by the market research firm SparkToro estimated. Similarly, the SEO intelligence platform Ahrefs published an analysis of 300,000 keywords in April 2025 and found that when an AI overview was present, the number of users clicking into top-ranked organic search results plunged by an average of more than a third.

Users are finding their questions answered and their needs satisfied without ever leaving the search platform.

Until recently, an implicit social contract governed the web: Creators produced content, search engines and platforms distributed it, and in return, user traffic flowed back to the creators’ websites that sustained the system. This reciprocal bargain of traffic in exchange for content underwrote the economic, cultural and information-based fabric of the internet for three decades.

Today, the rise of AI marks a decisive rupture. Google’s AI Overviews, Bing’s Copilot Search, OpenAI’s ChatGPT, Anthropic’s Claude, Meta’s Llama and xAI’s Grok effectively serve as a new oligopoly of what are increasingly being called “answer engines” that stand between users and the very sources from which they draw information.

This shift threatens the economic viability of content creation, degrades the shared information commons and concentrates informational power.

To sustain the web, a system of Artificial Integrity must be built into these AI “answer engines” that prioritizes three things: clear provenance that consistently makes information sources visible and traceable; fair value flows that ensure creators share in the value even when users don’t click their content; and a resilient information commons that keeps open knowledge from collapsing behind paywalls.

In practical terms, that means setting enforceable design and accountability guardrails that uphold integrity, so AI platforms cannot keep all the benefits of instant answers while pushing the costs onto creators and the wider web.

Ruptured System

AI “answer engines” haven’t merely made it easier to find information, they have ruptured the web’s value loop by separating content creation from the traffic and revenue that used to reward it.

AI companies have harvested and utilized the creative labor of writers, researchers, artists and journalists to train large language models without clear consent, attribution or compensation. The New York Times has filed lawsuits against OpenAI and Microsoft, alleging that the tech giants used its copyrighted articles for this purpose. In doing so, the news organization claims, they are threatening the very business model of journalism.

In fact, AI threatens the business model of digital content creation across the board. As publishers lose traffic, there remains little incentive for them to keep content free and accessible. Instead, paywalls and exclusive licensing are increasingly the norm. This will continue to shrink the freely available corpus of information upon which both human knowledge and future AI training depend.

The result will be a degraded and privatized information base. It will leave future AI systems working with a narrower, more fragile foundation of information, making their outputs increasingly dependent on whatever remains openly accessible. This will limit the diversity and freshness of the underlying data, as documented in a 2024 audit of the “AI data commons.” 

“The living web is shrinking into an interface of disembodied answers, convenient but ultimately sterile.”

At the same time, as more of what is visible online becomes AI-generated and then reused in future training, these systems will become more exposed to “model collapse,” a dynamic documented in a 2024 Nature study. It showed that when real data are replaced by successive synthetic generations, the tails of the original distribution begin to disappear as the model’s synthetic outputs begin to overwrite the underlying reality they were meant to approximate. 

Think of it like making a photocopy of a photocopy, again and again. Each generation keeps the bold strokes and loses the faint details. Both trends, in turn, weaken our ability to verify information independently. In the long run, this will leave people relying on systems that amplify errors, bias and informational blind spots, especially in niche domains and low-visibility communities.

Picture a procurement officer at a mid-sized bank tasked with evaluating vendors for a new fraud-detection platform. Not long ago, she would have likely turned to Google, LinkedIn or industry portals for information, wading through detailed product sheets, analyst reports and whitepapers. By clicking through to a vendor’s website, she could access what technical information she might need and ultimately contact the company. For the vendor, each click also fed its sales pipeline. Such traffic was not incidental; it was the lifeblood of an entire ecosystem of marketing metrics, job underwriting, marketing campaigns and specialized research.

These days, the journey looks different. A procurement officer’s initial query would likely yield an AI-generated comparison condensing the field of prospects into a few paragraphs: Product A is strong on compliance; product B excels at speed; product C is cost-effective. Behind this synthesis would likely lie numerous whitepapers, webinars and case studies produced by vendors and analysts — years of corporate expertise spun into an AI summary.

As a result, the procurement officer might never leave the interface. Vendors’ marketing teams, seeing dwindling click-driven sales, might retreat from publishing open materials. Some might lock reports behind steep paywalls, others might cut report production entirely and still others might sign exclusive data deals with platforms just to stay visible.

The once-diverse supply of open industry insight would contract into privatized silos. Meanwhile, the vendors would become even more dependent on the very platforms that extract their value.

Mechanisms At Play

The rupture we’re seeing in the web’s economic and informational model is driven by five mutually reinforcing mechanisms that determine what content gets seen, who gets credited and who gets paid. Economists and product teams might call these mechanisms intent capture, substitution, attribution dilution, monetization shifts and the learning loop break.

Intent capture happens when the platform turns an online search query into an on-platform answer, keeping the user from ever needing to click the original source of information. This mechanism transforms a search engine’s traditional results page from an open marketplace of links essentially into a closed surface of synthesized answers, narrowing both visibility and choice. 

Substitution, which takes place when users rely on AI summaries instead of clicking through to source links and giving creators the traffic they depend on, is particularly harmful. This harm is most pronounced in certain content areas. High substitution occurs for factual lookups, definitions, recipes and news summaries, where a simple answer is often sufficient. Conversely, low substitution occurs for content like investigative journalism, proprietary datasets and multimedia experiences, which are harder for AI to synthesize into a satisfactory substitute.

The incentives of each party diverge: Platforms are rewarded for maximizing query retention and ad yield; publishers for attracting referral traffic and subscribers; and regulators for preserving competition, media plurality and provenance. Users, too, prefer instant, easily accessible answers to their queries. This misalignment ensures that platforms optimize for closed-loop satisfaction while the economic foundations of content creation remain externalized and underfunded.

Attribution dilution compounds the effect. When information sources are pushed behind dropdowns or listed in tiny footnotes, the credit exists in form but not in function. Search engines’ tendency to simply display source links, which many do inconsistently, does not solve the issue. These links are often de-emphasized and generate little or no economic value, creating a significant consent gap for content used in AI model training. When attribution is blurred across multiple sources and no value accrues without clicks or compensation, that gap becomes especially acute. 

“AI ‘answer engines’ have ruptured the web’s value loop by separating content creation from the traffic and revenue that used to reward it.”

Monetization shifts refer to the redirected monetary value that now often flows solely to AI “answer engines” instead of to content creators and publishers. This shift is already underway, and it extends beyond media. When content promoting or reviewing various products and services receives fewer clicks, businesses often have to spend more to be discovered online, which can raise customer acquisition costs and, in some cases, prices. 

This shift can also impact people’s jobs: Fewer roles may be needed to produce and optimize web content for search, while more roles might emerge around licensing content, managing data partnerships and governing AI systems. 

The learning loop break describes the shrinking breadth and quality of the free web as a result of the disruptive practices of AI “answer engines.” As the information commons thins, high-quality data becomes a scarce resource that can be controlled. Analysts warn that control of valuable data can act as a barrier to entry and concentrate gatekeeper power.

This dynamic is comparable to what I refer to as a potential “Data OPEC,” a metaphor for a handful of powerful platforms and rights-holders controlling access to high-quality data, much as the Organization of Petroleum Exporting Countries (OPEC) controls the supply of oil.

Just as OPEC can restrict oil supply or raise prices to shape global markets, these data gatekeepers could restrict or monetize access to information used to build and improve AI systems, including training datasets, raising costs, reducing openness and concentrating innovation power in fewer hands. In this way, what begins as an interface design choice cascades into an ecological risk for the entire knowledge ecosystem.

The combined effect of these five mechanisms is leading to a reconfiguration of informational power. If AI “answer engines” become the point of arrival for information rather than the gateway, the architecture of the web risks being hollowed out from within. The stakes extend beyond economics: They implicate the sustainability of public information ecosystems, the incentives for future creativity and the integrity of the informational commons.

Left unchecked, these forces threaten to undermine the resilience of the digital environment on which both creators and users depend. What is needed is a systemic redesign of incentives, guided by the framework of Artificial Integrity rather than artificial intelligence alone.

Artificial Integrity

Applied to the current challenge, Artificial Integrity can be understood across three dimensions: information provenance integrity, economic integrity of information flows and integrity of the shared information commons.

Information provenance integrity is about ensuring that sources are visible, traceable and properly credited. This should include who created the content, where it was published and the context in which it was originally presented. The design principle is transparency: Citations must not be hidden in footnotes. 

Artificial Integrity also requires that citations carry active provenance metadata: a verifiable, machine-readable signature linking each fragment of generated output to its original source, allowing both users and systems to trace information flows with the same rigor as a scientific citation.

That introduces something beyond just displaying source links: It’s a systemic design where provenance is cryptographically or structurally embedded, not cosmetically appended. In this way, provenance integrity becomes a safeguard against erasure, ensuring that creators remain visible and credited even if the user doesn’t click through to the original source.

Economic integrity of information flows is about ensuring that value flows back to creators, not only to platforms. Artificial Integrity requires rethinking how links and citations are valued. In today’s web economy, a link matters only if it is clicked, which means that sources that are cited but not visited capture no monetary value. In an integrity-based model, the very act of being cited in an AI-generated answer would carry economic weight, ensuring that credit and compensation flow even when user behavior stops at the interface.

This would realign incentives from click-chasing to knowledge contribution, shifting the economy from performance-only to provenance-aware. To achieve this, regulators and standards bodies could require that AI “answer engines” compensate not only for traffic delivered, but also for information cited. Such platforms could implement source prominence rules so that citations are not hidden in footnotes but embedded in a way that delivers measurable economic value. 

Integrity of the shared information commons is about ensuring that the public information base remains sustainable, open and resilient rather than degraded into a paywalled or privatized resource. Here, Artificial Integrity calls for mandatory reinvestment of AI platform revenues into open datasets as a built-in function of the AI lifecycle. This means that large AI platforms such as Google, OpenAI and Microsoft would be legally required to dedicate a fixed percentage of their revenues to sustaining the shared information commons. 

“AI platforms cannot keep all the benefits of instant answers while pushing the costs onto creators and the wider web.”

This allocation would be architecturally embedded into their model development pipelines. For example, a “digital commons fund” could channel part of Google’s AI revenues into keeping resources like Wikipedia, PubMed or open academic archives sustainable and up to date. Crucially, this reinvestment would be hardcoded into retraining cycles, so that every iteration of a model structurally refreshes and maintains open-access resources alongside its own performance tuning. 

In this way, the sustainability of the shared information commons would become part of the AI system’s operating logic, not just a voluntary external policy. In effect, it would ensure that every cycle of AI improvement also improves the shared information commons on which it depends, aligning private platform incentives with public information sustainability.

We need to design an ecosystem where these three dimensions are not undermined by the optimization-driven focus of AI platforms but are structurally protected, both in how the platforms access and display content to generate answers, and in the regulatory environment that sustains them.

From Principle To Practice

To make an Artificial Integrity approach work, we would need systems for transparency and accountability. AI companies would have to be required to publish verifiable aggregated data showing whether users stop at their AI summaries or click outward to original sources. Crucially, to protect users’ privacy, this disclosure would need to include only aggregated interaction metrics reporting overall patterns. This would ensure that individual user logs and personal search histories are never exposed.

Independent third-party auditors, accredited and overseen by regulators much like accounting firms are today, would have to verify these figures. Just as companies cannot self-declare their financial health but must submit audited balance sheets, AI platforms would no longer be able to simply claim they are supporting the web without independent validation.

In terms of economic integrity of information flows, environmental regulation offers a helpful analogy. Before modern environmental rules, companies could treat pollution as an invisible side effect of doing business. Smoke in the air or waste in the water imposed real costs on society, but those costs did not show up on the polluter’s balance sheet.

Emissions standards changed this by introducing clear legal limits on how much pollution cars, factories and power plants are allowed to emit, and by requiring companies to measure and report those emissions. These standards turned pollution into something that had to be monitored, reduced or paid for through fines and cleaner technologies, instead of being quietly pushed onto the public. 

In a similar way, Artificial Integrity thresholds could ensure that the value that AI companies extract from creators’ content comes with financial obligations to those sources. An integrity threshold could simply be a clear numerical line, like pollution limits in emissions standards, that marks the point at which an AI platform is taking too much value without sending enough traffic or revenue back to sources. As long as the numbers stay under the acceptable limit, the system is considered sustainable; once they cross the threshold, the platform has a legal duty to change its behavior or compensate the creators it depends on.

This could be enforced by national or regional regulators, such as competition authorities, media regulators or data protection bodies. Similar rules have begun to emerge in a handful of jurisdictions that regulate digital markets and platform-publisher relationships, such as the EU, Canada or Australia, where news bargaining and copyright frameworks are experimenting with mandatory revenue-sharing for journalism. Those precedents could be adapted more broadly as AI “answer engines” reshape how we search online.

These thresholds could also be subject to standardized independent audits of aggregated interaction metrics. At the same time, AI platforms could be required to provide publisher-facing dashboards exposing the same audited metrics in near real-time, showing citation frequency, placement and traffic outcomes for their content. These dashboards could serve as the operational interface for day-to-day decision-making, while independent audit reports could provide a legally verified benchmark, ensuring accuracy and comparability across the ecosystem.

In this way, creators and publishers would not be left guessing whether their contributions are valued. They would receive actionable insight for their business models and formal accountability. Both layers together would embed provenance integrity into the system: visibility for creators, traceability for regulators and transparency for the public. 

“Artificial Integrity thresholds could ensure that the value that AI companies extract from creators’ content comes with financial obligations to those sources.”

Enforcement could mix rewards and penalties. On the reward side, platforms that show where their information comes from and that help fund important public information resources could get benefits such as tax credits or lighter legal risk. On the penalty side, platforms that ignore these integrity rules could face growing fines, similar to the antitrust penalties we already see in the EU.

This is where the three dimensions come together: information provenance integrity in how sources are cited, economic integrity of information flows in how value is shared and the integrity of the shared information commons in how open resources are sustained.

Artificial Integrity for platforms that deliver AI-generated answers represents more than a set of technical fixes. By reframing AI-mediated information search not as a question of feature tweaks but as a matter of design, code and governance in AI products, it addresses a necessary rebalancing toward a fairer and more sustainable distribution of value on which the web depends, now and in the future.



‘Cat’ Is a Purr-fect Celebration of Felines in Art Throughout the Centuries


In 1835, a tortoiseshell cat measuring more than three feet long was enough to warrant a small advertisement in a British newspaper announcing that, as “the greatest curiosity ever shown to the public,” it could be viewed at the Ship Tavern in London. Surely a pint of ale was the informal fee to view this extraordinary animal.

It was during the 18th and 19th centuries in Europe that cats became increasingly recognized as worthy pets, beyond their role as mousers. Breweries and distilleries often still “employ” a cat or two to keep the rodents out of the grain. From supernatural kaibyō in Japanese folklore to felines’ divine status in ancient Egypt, the animals have had an indelible influence on mythology, history, and our daily lives for a very long time.

A digital illustration by Xuan Loc Xuan of a white cat walking through nasturtiums
Xuan Loc Xuan, “Nasturtium Cat” (2023), digital painting, 9 7/8 × 11 3/8 inches. Image courtesy of the artist

Forthcoming from Phaidon, the book Cat celebrates, well, exactly what you’d expect. From contemporary sculpture and illustrations to early photography and internet memes, the volume runs the gamut of feline personalities and depictions in art throughout the millennia. Yet no matter how diverse the portrayals or how long ago they were created, the creatures’ expressiveness—even ridiculousness—is universally relatable.

Cat surveys an immense range of mediums and eras, from medieval illuminated manuscripts to modern street art. Colossal readers may be familiar with artists like Xuan Loc Xuan, Lee Sangsoo, and Utagawa Hiroshige, among many others, whose multimedia explorations of feline nature fill the playful tome.

Slated for release on February 11, Cat is available for pre-order in the Colossal Shop.

A cartoonish drawing of a blue-black cat by Bill Traylor
Bill Traylor, Untitled (Midnight Blue Cat) (c. 1939–42), poster paint on found cardboard, 11 × 8 inches. Image © Bill Traylor Family Inc. – WhosBillTraylor.com: Ricco/Maresca Gallery
An illustration by Hiroshige of a white, tailless cat with a ribbon around its neck, playing with another ribbon, set against a green background
Utagawa Hiroshige II, “A White Cat Playing with a String” (1863), woodcut, 8 3/8 × 10 1/2 inches. Image courtesy of the Minneapolis Institute of Art
A painting by Sally J. Han of a young woman sleeping in a colorful bed with a cat by her head
Sally J. Han, “Nap” (2022), acrylic paint on paper mounted on wood panel, 24 × 30 inches. © Sally J. Han. Photo by Jason Mandella
A 19th-century illustration of a tabby cat by Nathaniel Currier
Nathaniel Currier, “The Favorite Cat” (1838–48), hand-colored lithograph, 12 1/4 × 8 5/8 inches. Image courtesy of The Metropolitan Museum of Art
An oil painting by Jodie Niss of a cat slumped and sleeping comically in a corner by a mirror
Jodie Niss, Untitled (#2) (2022), oil on wood panel, 16 × 12 inches. Image courtesy of the artist
An array of 90 cat figurines, part of a multimedia artwork by Andy Holden
Andy Holden, “Cat-tharsis” (2022), 90 cat figurines and HD video with music by The Grubby Mitts, 17 minutes. Image courtesy of the artist and Charles Moffett, New York. Photo by Thomas Barratt



From director Rian Johnson, a collection of some of the screenplays of...

From director Rian Johnson, a collection of some of the screenplays of his movies & TV shows, including Wake Up Dead Man, Knives Out, Glass Onion, Looper, and the Poker Face pilot. “Print them, share them, act them out with your friends.”

What Would Richard Feynman Make of AI Today?


“The first principle is that you must not fool yourself—and you are the easiest person to fool,” Richard Feynman said in a 1974 commencement address at Caltech. He wasn’t speaking as a lofty philosopher but as a working physicist, offering a practical guide to daily work. He had little patience for prestige, authority, or explanations that couldn’t be tested. “It doesn’t matter how beautiful your theory is, how smart you are, or what your name is,” he liked to say. “If it doesn’t agree with experiment, it’s wrong. In that simple statement is the key to science.” Students often giggled at first, but then became silent as it sank in.


Feynman was a man of contrasts—lively, irreverent, and deeply suspicious of explanations that sounded good but didn’t cash out in practice. Instead, he emphasized curiosity and intolerance for nonsense. And when things got too stuffy, he preferred playing the bongo drums. Feynman had a strong instinct to understand things by doing them, not by reading about them. Just as with physics, he didn’t want descriptions, he wanted participation. Curiosity didn’t need justification. And yes, he won the Nobel Prize in Physics. He invented a visual way to understand how light and matter interact—diagrams that let physicists see complex processes at a glance.

As a teenager, Feynman repaired radios without schematics. In his last act as a public figure, he exposed the cause of the 1986 Space Shuttle Challenger disaster. Despite being ill with cancer, he cut through NASA’s flawed reasoning, insisted on speaking to engineers rather than administrators, and demonstrated O-ring failure with a simple glass of ice water on live television. In his mind, fixing radios and explaining the Challenger disaster were the same problem. In both cases, authority had obscured reality, and a simple experiment was enough to reveal it. That way of thinking—forged long before machine learning and neural networks—turns out to be uncomfortably relevant today.


You can imagine Feynman being tempted to rise and ask a deceptively simple question: How do you know?

If Feynman were alive and wandering through our technological landscape, it’s hard to imagine him standing on a stage at an AI product launch. He disliked hype. He was suspicious of grand promises delivered before the details were understood—of applause standing in for questions. Instead of unveiling a finished product, he’d likely say something like, “I don’t really know what this thing does yet—that’s why I’m interested in it.” He might take the demo apart and break it, before fixing it and putting it back together. That alone would drain the room of hype and dampen the mood of anyone hoping for a smooth pitch, such as investors and stakeholders.

It is easier to imagine him sitting in the last row of a darkened auditorium, notebook in hand, watching carefully. On the screen, colorful animations glide past: neural networks glowing, data flowing, arrows pointing confidently upward, unencumbered by error bars, a demo that works beautifully, provided nothing unexpected happens. A speaker explains that the system “understands language,” “reasons about the world,” “discovers new knowledge.” Each claim is met with nods and polite applause. You can see Feynman being tempted to rise and ask a deceptively simple question: How do you know?


But Feynman, new to this spectacle, would wait. He would listen for the moment when someone explained what the machine does when it fails, or how one might tell whether it truly understands anything at all. He would notice that the demo works flawlessly—once—and that no one asks what happens when the input is strange, incomplete, or wrong. He would hear words doing a great deal of work, and experiments doing very little.

UTTER HONESTY: In his 1974 commencement speech at Caltech, Richard Feynman told students scientific integrity depends on “utter honesty.” Experiments should “try to give all of the information to help others to judge the value of your contribution; not just the information that leads to judgment in one particular direction or another.” Credit: Wikimedia Commons.

Artificial intelligence is being presented to the public as a transformative force—one that promises to revolutionize science, medicine, education, and creativity itself. In many ways, these claims are justified. Machine learning systems can detect patterns at scales no human could manage: predicting the three-dimensional structures of proteins, screening images of tissue and cells for changes, identifying rare astronomical signals buried in noise, and generating fluent text or images on demand. These systems excel at sifting through oceans of data with remarkable speed and efficiency, revealing regularities that would otherwise remain hidden.


Feynman would not have dismissed any of this. He was fascinated by computation and simulation. He helped pioneer Monte Carlo methods (simulating many possible outcomes at random and averaging the results) at Los Alamos National Laboratory, and computational approaches to quantum mechanics. Used well, AI can help scientists ask better questions, explore larger parameter spaces, and uncover patterns worth investigating. Used poorly, it can short-circuit that process—offering answers without insight, correlations without causes, and predictions without understanding. The danger is not automation itself, but the temptation to mistake impressive performance for understanding.

Much of today’s artificial intelligence operates as a black box. Models are trained on vast—often proprietary—datasets, and their internal workings remain opaque even to their creators. Modern neural networks can contain millions, sometimes billions, of adjustable parameters. One of Feynman’s contemporaries, John von Neumann, once wryly observed: “With four parameters I can fit an elephant, and with five I can make his tail wiggle.” The metaphor warns of mistaking noise for meaning. Neural networks produce outputs that look fluent, confident, sometimes uncannily insightful. What they rarely provide is an explanation of why a particular answer appears, or when the system is likely to fail.

This creates a subtle but powerful temptation. When a system performs impressively, it is easy to treat performance as understanding, and statistical success as explanation. Feynman would have been wary of that move. He once scribbled on his blackboard, near the end of his life, a simple rule of thumb: “What I cannot create, I do not understand.” For him, understanding meant being able to take something apart, to rebuild it, and to know where it would break. Black-box systems invert that instinct. They invite us to accept answers we cannot fully reconstruct, and to trust results whose limits we may not recognize until something goes wrong.


Feynman had a name for this kind of confusion: “cargo cult science.” In fact, that was the title of his 1974 commencement address. He described cargo cult science as research that imitates the outward forms of scientific practice—experiments, graphs, statistics, jargon—while missing its essential core.

The term came from South Pacific islanders who, after World War II, built wooden runways and bamboo control towers in the hope that cargo planes would return. They reproduced the rituals they had observed, down to carved headphones and signal fires. “They follow all the apparent precepts,” Feynman said, “but they’re missing something essential, because the planes don’t land.” The lesson was not about foolishness, but about misunderstanding. Without knowing why something works, copying its surface features is not enough.

Feynman’s message was not that science produces miracles but that it teaches a way of thinking that resists dogma.


The risk with AI is not that it doesn’t work. The risk is that it works well enough to lull us into forgetting what science is for: not producing answers but exposing ideas to reality. For Feynman, science was about learning precisely where ideas fail. When performance becomes the goal, and success is measured only by outputs that look right, that discipline quietly erodes.

He loved technology and new tools, especially those that made it easier to test ideas against reality. He created visual tools, like his Nobel-Prize-winning diagrams, that simplified complex interactions without hiding their assumptions. But he was always careful to distinguish between instruments that help us probe nature and systems that merely produce convincing answers. Tools, for Feynman, were valuable not because they were powerful, but because they made it easier to see where an idea broke.

In Feynman’s view, science does not advance through confidence, but through doubt, by a willingness to remain unsure. Scientific knowledge, he argued, is a patchwork of statements with varying degrees of certainty—all provisional, all subject to revision. “I would rather have questions that can’t be answered,” Feynman said, “than answers that can’t be questioned.” This is in stark contrast to venture capital, which rewards bold claims. Corporate competition rewards speed. Media attention rewards spectacle. In such an environment, admitting uncertainty is costly.


But for Feynman, uncertainty was not a weakness, it was the engine of progress. “I think it’s much more interesting to live not knowing, than to have answers which might be wrong,” he said.

It’s tempting to think these concerns belong only to academics. But artificial intelligence is no longer confined to laboratories or universities. It shapes what people read and watch, how students are assessed, how medical risks are flagged, and how decisions are made about loans, jobs, or insurance.

In many of these settings, AI systems function less as tools than as institutional opacity—systems whose authority exceeds our ability to question them. Their outputs arrive with an air of objectivity, even when the reasoning behind them is clouded. When a recommendation is wrong, or a decision seems unfair, it is often difficult to know where the error lies: in the data, the model, or the assumptions embedded long before the system was deployed.


In such contexts, the discipline of not fooling ourselves becomes more than an academic virtue. When opaque systems influence real lives, understanding their limits—knowing when they fail and why—becomes a civic necessity. Trust depends not only on performance but on accountability: whether their limits are understood, questioned, and made visible when it matters.

Stepping back, the issue is not whether AI will transform science. It already has. The deeper question is whether we can preserve the values that make scientific knowledge trustworthy in the first place. Feynman’s answer, were he here, would likely be characteristically simple: slow down, ask what you know, admit what you don’t, and never confuse impressive results with understanding. History has shown more than once that scientific knowledge can race ahead of wisdom. Several physicists of Feynman’s generation would later reflect on having learned what could be done long before learning what should be done—a realization that arrived too late for comfort.

In 1955, Feynman gave a talk called “The Value of Science” at a National Academy of Sciences meeting at Caltech. He said that science is a discipline of doubt, remaining free to question what we think we know. His central message was not that science produces miracles but that it teaches a way of thinking that resists dogma, false certainty, and self-deception. He opened not with equations or authority, but with a Buddhist proverb: “To every man is given the key to the gates of heaven; the same key opens the gates of hell.” Science, Feynman said, is that key, a tool of immense power. It can open both gates. Which way it turns depends on you.



Lead image: Tamiko Thiel / Wikimedia Commons
