People Should Pay More Attention to Gina Wilson


A few years ago a teacher in my school brought in a curriculum they’d bought online called “All Things Algebra,” created by Gina Wilson. It quickly spread from teacher to teacher. I considered it a niche product, just another thing being sold on Teachers Pay Teachers. Then I was talking to someone at a different school—their colleagues used it too.

It took me a while to realize just how popular these materials were—or how popular they must be, given how often I hear about them. And yet when people talk about the direction that American math teaching is going, Gina Wilson’s name never comes up. She must be the most influential person in math education that nobody knows anything about.

Gina Wilson, taken from her “About Me.”

Because it comes from the bottom up, it’s hard to measure Wilson’s influence. Even now, I’m doubting myself—have I overstated the case? There are numbers, though: on Teachers Pay Teachers, she has over 167,000 happy customers.

We don’t talk about Teachers Pay Teachers much anymore, do we? In its heyday in the early-2010s it was the paywalled, cutesy outpost of a freewheeling ecosystem of teacher blogs. Natasha Singer covered it in 2015 for the NYTimes:

As some on the site develop sizable and devoted audiences, TeachersPayTeachers.com is fostering the growth of a hybrid profession: teacher-entrepreneur. The phenomenon has even spawned its own neologism: teacherpreneur.

“Teacherpreneur”—that’s quite a word.

While it’s tricky to figure out how widely Wilson’s curriculum has spread, we can get a bit of a hint from survey results. The American Instructional Resources Surveys, carefully designed surveys run by researchers at RAND, try to shed light on what resources teachers are using to teach math, ELA, and science. They ask about textbooks and curricula, but also about supplementary materials.

In the most recent results, a full 43% of respondents said they use resources from Teachers Pay Teachers. Those teachers aren’t all using Gina Wilson’s materials, but a lot of them, maybe most, are.

From the 2024 AIR tables

Who is Gina Wilson? There is remarkably little public information about her. There is a blogpost from 2013 and a brief interview from 2014. She grew up outside of Buffalo, NY and is a huge Bills fan. In 2006 she graduated college, followed her boyfriend to Virginia Beach, and began teaching Grade 8 mathematics at Great Neck Middle School.

Like so many of us, Gina didn’t like the textbook she was assigned and began searching for materials. She was an avid Pinterester, collecting the free materials that others shared online. At some point, she began making her own sheets and posting them on Teachers Pay Teachers, where she found a following.

What happened next isn’t clear. But now, “All Things Algebra” is a full-fledged curriculum company. It offers full curricula from Grade 7 through Precalculus. How many people work for this company? I don’t know. Based on the pictures Wilson has posted on Instagram, she seems to write these materials entirely on her own, scanning textbooks to find and modify questions for her own sheets. Licenses for her course materials cost several hundred dollars.

Wilson is no longer listed as an employee of Great Neck Middle School. Doing a bit of quick math—hundreds of thousands of customers, hundreds of dollars per course—she must be f***ing rich. I assume she, uhh, is no longer hanging out with 8th Graders? If she is, respect.

What can we say about the quality of Wilson’s materials? Whatever—that’s actually the wrong question, because it doesn’t matter. What’s more interesting is why they’re so popular, and that’s clear to any teacher who’s seen them: they are ready to print and use, with answer keys and plenty of white space after each question.

PLENTY of white space. Oh my god. The hours I’ve lost to white space. There is currently a nationwide effort to get teachers to actually use vetted curricula—“high-quality instructional materials”—and I swear that if people were actually serious about this all they’d have to do would be to ADD WHITE SPACE to these things and make them READY TO PRINT. It doesn’t matter how good your textbook is if it’s annoying to use. The powers that be just don’t get this, but you know who does? Gina Wilson.

There are other things that Gina Wilson understands. Her curricular materials are simple and straightforward, with common structures and routines. The answer keys are important to many teachers. (Not me, but I’m a mess, figuring out answers in realtime.) When there are variations on a typical lesson—activities like “Madlibs” or “Relay Races”—they don’t ask kids to use different mathematics but vary the surface-level activity. I don’t love this personally, because I’m boring and scared of cuteness, but I get that most teachers like these twists on the daily grind.

Last summer, Sarah Schwartz wrote for Education Week about the RAND survey. “Regardless of state or district requirements,” she wrote, “teachers mix and match the lessons and resources they use in their classrooms.” But this reporting doesn’t dig into the actual nature of these supplemental resources, their qualities, or the people who make them. No reporting does, as far as I can tell.

The biggest mistake people make in following education is assuming that classrooms reflect the official requirements. People—even people who should know better—read position papers and state mandates and say things like, “New Jersey is moving in the wrong direction!” or “Finally we’re seeing some changes in Suffolk County.”

If there is anything to understand about education, it’s this: learning happens in classrooms with teachers and little oversight, except from overstretched administrators who, often despite their best efforts, cannot keep track of everything going on in their schools.

43%—that’s a very significant number. That’s a lot of space. It’s the space between curriculum and the classroom. It’s where ed school professors lose their influence, where consultants can’t reach, where state initiatives fizzle away into nothing.

American education will either live with this space, learn to engineer around it, or try to destroy it. I wouldn’t be shocked if, some years from now, the latest technology is used to automate oversight—maybe the next gen textbook will be watching the teacher. But until then, or maybe until that fails, this space will exist—right now it belongs to people like Gina Wilson, and without making much of a fuss, she’s dominating it.


Computers can’t surprise


Photo of two silhouettes walking by fountains with bokeh effect in the foreground, Arc de Triomphe in the background.

As AI’s endless clichés continue to encroach on human art, the true uniqueness of our creativity is becoming ever clearer

- by Richard Beard

Read on Aeon


You’re in a computer literacy filter bubble


Skill-testing question: what’s wrong with this snippet?

user@host:~$ rm $FILENAME

You’re still reading. That means it doesn’t look scary. Many of you have already jumped to: this is a code snippet > this is a terminal command > this is probably Bash or POSIX shell > it’s deleting a file > $ doesn’t mean the file is expensive, it means $FILENAME is a variable > the variable is unquoted (!) > this snippet demonstrates one of the most basic and dangerous Bash footguns (see appendix)
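The appendix isn’t reproduced here, but the footgun is easy to demonstrate. Here is a minimal sketch (the filenames are invented; try it only in a scratch directory): because $FILENAME is unquoted, the shell word-splits and glob-expands its value before rm ever runs.

user@host:~$ touch "my notes.txt" my notes.txt   # three files: "my notes.txt", "my", and "notes.txt"
user@host:~$ FILENAME="my notes.txt"
user@host:~$ rm $FILENAME        # word-splits into `rm my notes.txt`: deletes "my" and "notes.txt", not the intended file
user@host:~$ rm "$FILENAME"      # quoted: removes exactly the one file named "my notes.txt"

It gets worse: if $FILENAME contains a glob character it expands to whatever matches, and if it happens to be empty, something like rm -rf $FILENAME/ expands to rm -rf /. Quoting alone doesn’t save you from the empty case, which is why careful scripts also add guards like ${FILENAME:?}.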

You’re still reading. Maybe you followed that chain of inferences, or maybe not, but even if you didn’t, you were interested enough to try to understand it, and the fact you’re here means you can fire up a modern computing device, put in a password, connect to the internet, browse to an obscure tech blog, scroll, and absorb strange words into your brain.

You are not normal.

Someone I know started teaching a podcasting course at a better-than-average high school in one of the richest parts of one of the richest countries. After his first day, he told me, with wide eyes, some of the students don’t know how to move or copy a file.

I was shocked. I thought we were the Luddites, and younger generations were running circles around us in the tech literacy department. Some of them are. And most of them are able to operate smartphones. But, copying a file? This is not something human beings are born knowing how to do, and, I now know, not everyone learns how in the natural course of their upbringing.

(Taking what we know for granted plagues all fields, and there’s a name for it: the curse of knowledge. And, of course, a relevant XKCD.)

My technical literacy filter bubble is Hacker News and Lobsters, where it seems like everyone is smarter and more hardcore than I am. I’ve dabbled in Python, Go, Rust, Bash, C, PHP, and JavaScript, but I don’t feel like a real programmer, because I mostly write scripts and websites. Real programmers are the ones that write compilers and desktop applications in Lisp and Zig, and run BSD or QubesOS. Real programmers are the people that are eternally one step ahead of where I am.

If your mother uses Linux, you’re probably in a computer literacy filter bubble.

If you have happy memories of booting from a treasured Live CD and playing Nibbles for Knoppix on a CRT monitor, and both parents programmed mainframes using punch-cards and FORTRAN, and you prefix anything that involves the terminal with “just”, you’re definitely in a computer literacy filter bubble.

Filter bubbles aren’t inherently bad. And there are countless filter bubbles that we aren’t in. I’m not in the mechanical competence filter bubble, and I bet most of us aren’t in the fashion filter bubble. Even if we wanted to, I’m not sure it’s possible to fully escape a tech literacy filter bubble while being tech literate and surrounding yourself with people who are more tech literate. But I think it’s healthy to deliberately poke our heads out of whatever bubbles we’re in once in a while, take in the view, and realize how different it is out there.

Those of us reading, and writing, blogs that routinely talk about writing shell scripts to save keystrokes, measuring brainwaves, preventing servers from crashing, and modifying the software running on embedded devices are so out of touch with the median level of technological literacy, we forget that just knowing how to double-click is a privilege.

Note: this post is part of #100DaysToOffload, a challenge to publish 100 posts in 365 days. These posts are generally shorter and less polished than our normal posts; expect typos and unfiltered thoughts! View more posts in this series.


Do Commodities Get Cheaper Over Time?


This American Enterprise Institute chart, which breaks down price changes for different types of goods and services in the consumer price index, has by now become very widely known. A high-level takeaway from this chart is that labor-intensive services (education, healthcare) get more expensive in inflation-adjusted terms over time, while manufactured goods (TVs, toys, clothing) get less expensive over time.

But there are many types of goods that aren’t shown on this chart. One example is commodities: raw (or near-raw) materials mined or harvested from the earth. Commodities have many similarities with manufactured goods: they’re physical things that are produced (or extracted) using some sort of production technology (mining equipment, oil drilling equipment), and many of them will go through factory-like processing steps (oil refineries, blast furnaces). But commodities also seem distinct from manufactured goods. For one, because they’re often extracted from the earth, commodities can be subject to depletion dynamics: you run out of them at one location, and have to go find more somewhere else. In my book I talk about how iron ore used to be mined from places like Minnesota, but as the best deposits were mined out steel companies increasingly had to source their ore from overseas. And the idea of “Peak Oil” is based on the idea that society will use up the easily accessible oil, and be forced to obtain it from increasingly marginal, expensive-to-access locations.

(Some commodities, particularly agricultural commodities that can be repeatedly grown on a plot of land, don’t have the same sort of depletion dynamics, though bad farming practices can degrade a plot of land over time. Other commodities get naturally replenished over time, but can still get used up if the rate of extraction exceeds the rate of replenishment; non-farmed timber harvesting and non-farmed commercial fishing come to mind as examples.)

Going into this topic, I didn’t have a great sense of what price trends look like for commodities in general. Julian Simon famously won a 1980 bet with Paul Ehrlich that several raw materials — copper, chromium, nickel, tin, and tungsten — would be cheaper (in inflation-adjusted terms) after 10 years, not more expensive. But folks have pointed out that if the bet had been over a different 10-year window, Ehrlich would have won the bet.

To better understand how price tends to change for different commodities and raw materials, I looked at historical prices for over a hundred different commodities. Broadly, agricultural commodities tend to get cheaper over time, while fossil fuels have a slight tendency to get more expensive. Minerals (chemicals, metals, etc.) have a slight tendency towards getting cheaper, with a lot of variation — 15 minerals more than doubled in price over their respective time series. But this has shifted over the last few decades, and recently there’s been a greater tendency for commodities to rise in price.

Analyzing commodity prices

To get long-term commodity prices, I used a few different sources of data. For agricultural products, I used U.S. Department of Agriculture data, which has price data for various crops going back (in some cases) to the 19th century, and for various meats going back to 1970. For minerals, I used U.S. Geological Survey data, which gives historical statistics (including prices) for several dozen minerals, metals, and chemicals. For fossil fuels, I used the Statistical Review of World Energy, this dataset from Jessika Trancik’s lab, and datasets from the U.S. Energy Information Administration. Altogether, I looked at 124 different commodities.

Let’s start by looking at fossil fuels. The graphs below show the price of oil, natural gas, and bituminous coal in 2024 dollars.

The most visible pattern here is the large number of price spikes. The 1970s energy crisis, where prices rose by a factor of three or more and then declined, is the most obvious, but it’s far from the only one — there’s another huge spike in the early 2000s. There’s some tendency for fossil fuel prices to rise long-term, particularly post-energy crisis, but there’s a lot of variation. The price of natural gas, for instance, has generally been declining since the early 2000s, and all fossil fuels have long periods of time over which their price declines. In addition to its post-2000s decline, the price of natural gas declined from the 1920s through the 1940s, and the price of oil generally declined over the 100-year period from the late 1860s through the early 1970s.

Now let’s look at agricultural commodities. The graphs below show the inflation-adjusted price for 25 different crops grown in the U.S.

Here we see the same year-to-year variation in price as we do in fossil fuels, but unlike fossil fuels there’s also a very strong tendency for agricultural commodities to fall in price over time. 24 of the 25 crops examined have lower inflation-adjusted prices today than at the beginning of their time series (tobacco is the single exception). For 17 out of 25, the price decline is greater than 50%.

If we just look at recent price trends, the trend of falling prices is less strong, but still there. 20 of the 25 crops are cheaper today than they were in 1990. (Barley, oats, rye, and Durum wheat are more expensive. The sweet potato dataset stops in 2017, but the price was lower that year than in 1990.)

However, looking just at the period since 2000, the trend has reversed: only four crops (cotton, peanuts, tobacco, and sweet potatoes) have fallen in price in real terms.

What about meat? The graph below shows the inflation-adjusted price for pork, beef, and chicken over time in the U.S.

The price of chicken has declined since 1980. The price levels of beef and pork declined from 1970 to the mid-1990s, but since then they have been rising. The price of pork is up 15% since 1995, and the price of beef is up 41%.

Now let’s look at minerals. The graphs below show the price of 93 different mineral commodities — industrial metals like aluminum and steel, precious metals like gold and platinum, chemicals like nitrogen and hydrogen, and various other minerals such as graphite, bentonite, and gypsum.

And here are the high-value minerals:

We see a range of different price trends here: some minerals (gold, platinum, molybdenum) have risen in price over time, while others (aluminum, gypsum, magnesium) have gotten cheaper. But at a high level, the trend is towards commodities getting cheaper over time. Of these 93 mineral commodities, 60 of them got cheaper between the beginning and end of their time series. In 36 of the 93, the price decline was greater than 50%, and in 10 of them the decline was greater than 90%.

As with agricultural commodities, this trend has gotten weaker recently. For the 75 commodities which have data over the 1990 to 2020 period, only 39 (slightly more than half of them) have gotten cheaper over that period; the other 36 have gotten more expensive.

So at a high level, historically most commodities were getting cheaper over time, but in recent decades this has been significantly less true. Beef, pork, oil, natural gas, copper, construction sand, and phosphate for fertilizer are all commodities that formerly were consistently falling in price but for the last several decades have been getting more expensive.

Another way to get a sense of what commodity price trends generally look like is to do something similar to the Simon-Ehrlich wager and look at aggregate price changes over particular windows of time. To do this, I divided the dataset for each commodity into 20-year chunks (starting from 1860), and calculated an equivalent annual rate of change over each window. So for iron ore, which has prices from 1900 through 2021, there would be six price windows: 1900 to 1919, 1920 to 1939, and so on. The price for iron ore was $47.45 per ton in 1900 and $33.02 in 1919, giving an equivalent annual change of about -2% for that window.1 This gave me over 600 different commodity price changes, which are shown on the histogram below:
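As a quick check of that iron ore figure, the equivalent annual rate can be recomputed in one line (a sketch assuming 19 compounding years between the 1900 and 1919 observations; the exact value shifts slightly depending on how the endpoints are counted):

awk 'BEGIN { printf "%.2f%% per year\n", ((33.02/47.45)^(1/19) - 1) * 100 }'

This prints -1.89% per year, which rounds to the roughly -2% quoted above.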

Looking at all commodities together, we can see the overall tendency for prices to decline over 20-year periods, but also plenty of windows in which commodities rise in price. You can see this from the left-skew on the graph, indicating a greater tendency towards price declines than price rises. This tendency towards declining prices is true for each category of commodity except fossil fuels, which have a very slight tendency to rise in price over time.

However, if we just look at the most recent window of time, 2000 to 2020, we instead see a right-skew, a tendency towards rising prices:

Conclusion

To sum up: historically commodities have generally fallen in price over time, but recently this trend has increasingly shifted towards rising prices. Natural gas and oil got cheaper until the 1950s and the 1970s, respectively, and since then have gotten more expensive. Beef and pork both got cheaper from 1970 until the 1990s, and since then have risen in price. Agricultural products were almost uniformly falling in price until around 2000, and have almost uniformly risen in price since then.

My general sense looking at historical commodity price data is that the more that production of some commodity looks like manufacturing — produced by a repetitive process that can be steadily improved and automated, from a supply that can be scaled up in a relatively straightforward fashion, without being subject to severe depletion dynamics — the more you’ll tend to see prices fall over time. The biggest decline in price of any commodity I looked at is industrial diamonds, which fell in price by 99.9% between 1900 and 2021 due to advances in lab-grown diamond production. This effectively replaced mined diamonds with manufactured ones for industrial uses; roughly 99% of industrial diamonds today are synthetic. Many other commodities had major price declines that were the result of production process improvements — aluminum got cheaper thanks to the invention (and subsequent improvements) of the Hall-Héroult smelting process, titanium’s price declined following the introduction of the Kroll process, and so on. (Steel also got much cheaper following the introduction of the Bessemer process, but that predates USGS price data.) And of course agriculture, which has evolved from crops being harvested manually to being harvested with highly automated, continuous-process machinery, closely mirrors the sorts of process improvements we see in manufacturing.

Of course, this trend alone can’t explain changes in commodity prices over time, and there are plenty of commodities — steel, cement, silicon — that are produced in a manufacturing-type operation but which haven’t seen substantially declining prices over their history. And even commodities which resemble manufactured goods have risen in price recently. More generally, there are plenty of things that can shift supply and demand curves to the right or left: cartels, national policies, a spike or collapse in demand, and so on. But the question of “how much, over time, does the production of this commodity resemble a manufacturing process?” seems like a useful lens on understanding the dynamics of commodity prices.

1

Because this calculation might depend on what the boundaries of the price window are, I did a sensitivity analysis by starting at a few different years and seeing how the outcome changed. Changing the window of time doesn’t change the results.


The AI-Powered Web Is Eating Itself

1 Share

Suppose you’re craving lasagna. Where do you turn for a recipe? The internet, of course.

Typing “lasagna recipe ideas” into Google used to surface a litany of food blogs, each with its own story: a grandmother’s family variation, step-by-step photos of ingredients laid out on a wooden table, videos showing technique and a long comment section where readers debated substitutions or shared their own tweaks. Clicking through didn’t just deliver instructions; it supported the blogger through ads, affiliate links for cookware or a subscription to a weekly newsletter. That ecosystem sustained a culture of experimentation, dialogue and discovery.

That was a decade ago. Fast forward to today. The same Google search can now yield a neatly packaged “AI Overview,” a synthesized recipe stripped of voice, memory and community, delivered without a single user visit to the creator’s website. Behind the scenes, their years of work, including their page’s text, photos and storytelling, may have already been used to help train or refine the AI model.

You get your lasagna, Google gets monetizable web traffic and for the most part, the person who created the recipe gets nothing. The living web shrinks further into an interface of disembodied answers, convenient but ultimately sterile.

This isn’t hypothetical: More than half of all Google searches in the U.S. and Europe in 2024 ended without a click, a report by the market research firm SparkToro estimated. Similarly, the SEO intelligence platform Ahrefs published an analysis of 300,000 keywords in April 2025 and found that when an AI overview was present, the number of users clicking into top-ranked organic search results plunged by an average of more than a third.

Users are finding their questions answered and their needs satisfied without ever leaving the search platform.

Until recently, an implicit social contract governed the web: Creators produced content, search engines and platforms distributed it, and in return, user traffic flowed back to the creators’ websites that sustained the system. This reciprocal bargain of traffic in exchange for content underwrote the economic, cultural and information-based fabric of the internet for three decades.

Today, the rise of AI marks a decisive rupture. Google’s AI Overviews, Bing’s Copilot Search, OpenAI’s ChatGPT, Anthropic’s Claude, Meta’s Llama and xAI’s Grok effectively serve as a new oligopoly of what are increasingly being called “answer engines” that stand between users and the very sources from which they draw information.

This shift threatens the economic viability of content creation, degrades the shared information commons and concentrates informational power.

To sustain the web, a system of Artificial Integrity that prioritizes three things must be built into these AI “answer engines”: clear provenance that consistently makes information sources visible and traceable, fair value flows that ensure creators share in the value even when users don’t click their content, and a resilient information commons that keeps open knowledge from collapsing behind paywalls.

In practical terms, that means setting enforceable design and accountability guardrails that uphold integrity, so AI platforms cannot keep all the benefits of instant answers while pushing the costs onto creators and the wider web.

Ruptured System

AI “answer engines” haven’t merely made it easier to find information; they have ruptured the web’s value loop by separating content creation from the traffic and revenue that used to reward it.

AI companies have harvested and utilized the creative labor of writers, researchers, artists and journalists to train large language models without clear consent, attribution or compensation. The New York Times has filed lawsuits against OpenAI and Microsoft, alleging that the tech giants used its copyrighted articles for this purpose. In doing so, the news organization claims, they are threatening the very business model of journalism.

In fact, AI threatens the business model of digital content creation across the board. As publishers lose traffic, there remains little incentive for them to keep content free and accessible. Instead, paywalls and exclusive licensing are increasingly the norm. This will continue to shrink the freely available corpus of information upon which both human knowledge and future AI training depend.

The result will be a degraded and privatized information base. It will leave future AI systems working with a narrower, more fragile foundation of information, making their outputs increasingly dependent on whatever remains openly accessible. This will limit the diversity and freshness of the underlying data, as documented in a 2024 audit of the “AI data commons.” 

“The living web is shrinking into an interface of disembodied answers, convenient but ultimately sterile.”

At the same time, as more of what is visible online becomes AI-generated and then reused in future training, these systems will become more exposed to “model collapse,” a dynamic documented in a 2024 Nature study. It showed that when real data are replaced by successive synthetic generations, the tails of the original distribution begin to disappear as the model’s synthetic outputs begin to overwrite the underlying reality they were meant to approximate. 

Think of it like making a photocopy of a photocopy, again and again. Each generation keeps the bold strokes and loses the faint details. Both trends, in turn, weaken our ability to verify information independently. In the long run, this will leave people relying on systems that amplify errors, bias and informational blind spots, especially in niche domains and low-visibility communities.

Picture a procurement officer at a mid-sized bank tasked with evaluating vendors for a new fraud-detection platform. Not long ago, she would have likely turned to Google, LinkedIn or industry portals for information, wading through detailed product sheets, analyst reports and whitepapers. By clicking through to a vendor’s website, she could access what technical information she might need and ultimately contact the company. For the vendor, each click also fed its sales pipeline. Such traffic was not incidental; it was the lifeblood of an entire ecosystem of marketing metrics, job underwriting, marketing campaigns and specialized research.

These days, the journey looks different. A procurement officer’s initial query would likely yield an AI-generated comparison condensing the field of prospects into a few paragraphs: Product A is strong on compliance; product B excels at speed; product C is cost-effective. Behind this synthesis would likely lie numerous whitepapers, webinars and case studies produced by vendors and analysts — years of corporate expertise spun into an AI summary.

As a result, the procurement officer might never leave the interface. Vendors’ marketing teams, seeing dwindling click-driven sales, might retreat from publishing open materials. Some might lock reports behind steep paywalls, others might cut report production entirely and still others might sign exclusive data deals with platforms just to stay visible.

The once-diverse supply of open industry insight would contract into privatized silos. Meanwhile, the vendors would become even more dependent on the very platforms that extract their value.

Mechanisms At Play

The rupture we’re seeing in the web’s economic and informational model is driven by five mutually reinforcing mechanisms that determine what content gets seen, who gets credited and who gets paid. Economists and product teams might call these mechanisms intent capture, substitution, attribution dilution, monetization shifts and the learning loop break.

Intent capture happens when the platform turns an online search query into an on-platform answer, keeping the user from ever needing to click the original source of information. This mechanism transforms a search engine’s traditional results page from an open marketplace of links essentially into a closed surface of synthesized answers, narrowing both visibility and choice. 

Substitution, which takes place when users rely on AI summaries instead of clicking through to source links and giving creators the traffic they depend on, is particularly harmful. This harm is most pronounced in certain content areas. High substitution occurs for factual lookups, definitions, recipes and news summaries, where a simple answer is often sufficient. Conversely, low substitution occurs for content like investigative journalism, proprietary datasets and multimedia experiences, which are harder for AI to synthesize into a satisfactory substitute.

The incentives of each party diverge: Platforms are rewarded for maximizing query retention and ad yield; publishers for attracting referral traffic and subscribers; and regulators for preserving competition, media plurality and provenance. Users, too, prefer instant, easily accessible answers to their queries. This misalignment ensures that platforms optimize for closed-loop satisfaction while the economic foundations of content creation remain externalized and underfunded.

Attribution dilution compounds the effect. When information sources are pushed behind dropdowns or listed in tiny footnotes, the credit exists in form but not in function. Search engines’ tendency to simply display source links, which many do inconsistently, does not solve the issue. These links are often de-emphasized and generate little or no economic value, creating a significant consent gap for content used in AI model training. When attribution is blurred across multiple sources and no value accrues without clicks or compensation, that gap becomes especially acute. 

“AI ‘answer engines’ have ruptured the web’s value loop by separating content creation from the traffic and revenue that used to reward it.”

Monetization shifts refer to the redirected monetary value that now often flows solely to AI “answer engines” instead of to content creators and publishers. This shift is already underway, and it extends beyond media. When content promoting or reviewing various products and services receives fewer clicks, businesses often have to spend more to be discovered online, which can raise customer acquisition costs and, in some cases, prices. 

This shift can also impact people’s jobs: Fewer roles may be needed to produce and optimize web content for search, while more roles might emerge around licensing content, managing data partnerships and governing AI systems. 

The learning loop break describes the shrinking breadth and quality of the free web as a result of the disruptive practices of AI “answer engines.” As the information commons thins, high-quality data becomes a scarce resource that can be controlled. Analysts warn that control of valuable data can act as a barrier to entry and concentrate gatekeeper power.

This dynamic is comparable to what I refer to as a potential “Data OPEC,” a metaphor for a handful of powerful platforms and rights-holders controlling access to high-quality data, much as the Organization of Petroleum Exporting Countries (OPEC) controls the supply of oil.

Just as OPEC can restrict oil supply or raise prices to shape global markets, these data gatekeepers could restrict or monetize access to information used to build and improve AI systems, including training datasets, raising costs, reducing openness and concentrating innovation power in fewer hands. In this way, what begins as an interface design choice cascades into an ecological risk for the entire knowledge ecosystem.

The combined effect of these five mechanisms is leading to a reconfiguration of informational power. If AI “answer engines” become the point of arrival for information rather than the gateway, the architecture of the web risks being hollowed out from within. The stakes extend beyond economics: They implicate the sustainability of public information ecosystems, the incentives for future creativity and the integrity of the informational commons.

Left unchecked, these forces threaten to undermine the resilience of the digital environment on which both creators and users depend. What is needed is a systemic redesign of incentives, guided by the framework of Artificial Integrity rather than artificial intelligence alone.

Artificial Integrity

Applied to the current challenge, Artificial Integrity can be understood across three dimensions: information provenance integrity, economic integrity of information flows and integrity of the shared information commons.

Information provenance integrity is about ensuring that sources are visible, traceable and properly credited. This should include who created the content, where it was published and the context in which it was originally presented. The design principle is transparency: Citations must not be hidden in footnotes. 

Artificial Integrity also requires that citations carry active provenance metadata: a verifiable, machine-readable signature linking each fragment of generated output to its original source, allowing both users and systems to trace information flows with the same rigor as a scientific citation.

That introduces something beyond just displaying source links: It’s a systemic design where provenance is cryptographically or structurally embedded, not cosmetically appended. In this way, provenance integrity becomes a safeguard against erasure, ensuring that creators remain visible and credited even if the user doesn’t click through to the original source.
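What “cryptographically or structurally embedded” could mean in practice is easiest to see with a toy example. The sketch below is illustrative only, not a proposed standard: the record fields, filenames and the choice of plain RSA signing via openssl are all assumptions made for the sake of the example.

# A toy provenance record: one generated fragment tied to its source (all values invented).
cat > provenance.json <<'EOF'
{ "fragment": "answer-0001",
  "source_url": "https://example.com/original-lasagna-recipe",
  "retrieved_at": "2025-01-01T00:00:00Z" }
EOF

# The citing platform (or the publisher) signs the record with a private key...
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out signer_key.pem
openssl dgst -sha256 -sign signer_key.pem -out provenance.sig provenance.json

# ...and anyone holding the matching public key can later verify that the cited
# fragment really is bound to that source, even if no one ever clicks through.
openssl pkey -in signer_key.pem -pubout -out signer_pub.pem
openssl dgst -sha256 -verify signer_pub.pem -signature provenance.sig provenance.json

A real scheme would still need canonical serialization, key distribution and agreed field definitions, which is precisely the design-and-governance work the essay is calling for.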

Economic integrity of information flows is about ensuring that value flows back to creators, not only to platforms. Artificial Integrity requires rethinking how links and citations are valued. In today’s web economy, a link matters only if it is clicked, which means that sources that are cited but not visited capture no monetary value. In an integrity-based model, the very act of being cited in an AI-generated answer would carry economic weight, ensuring that credit and compensation flow even when user behavior stops at the interface.

This would realign incentives from click-chasing to knowledge contribution, shifting the economy from performance-only to provenance-aware. To achieve this, regulators and standards bodies could require that AI “answer engines” compensate not only for traffic delivered, but also for information cited. Such platforms could implement source prominence rules so that citations are not hidden in footnotes but embedded in a way that delivers measurable economic value. 

Integrity of the shared information commons is about ensuring that the public information base remains sustainable, open and resilient rather than degraded into a paywalled or privatized resource. Here, Artificial Integrity calls for mandatory reinvestment of AI platform revenues into open datasets as a built-in function of the AI lifecycle. This means that large AI platforms such as Google, OpenAI and Microsoft would be legally required to dedicate a fixed percentage of their revenues to sustaining the shared information commons. 

“AI platforms cannot keep all the benefits of instant answers while pushing the costs onto creators and the wider web.”

This allocation would be architecturally embedded into their model development pipelines. For example, a “digital commons fund” could channel part of Google’s AI revenues into keeping resources like Wikipedia, PubMed or open academic archives sustainable and up to date. Crucially, this reinvestment would be hardcoded into retraining cycles, so that every iteration of a model structurally refreshes and maintains open-access resources alongside its own performance tuning. 

In this way, the sustainability of the shared information commons would become part of the AI system’s operating logic, not just a voluntary external policy. In effect, it would ensure that every cycle of AI improvement also improves the shared information commons on which it depends, aligning private platform incentives with public information sustainability.

We need to design an ecosystem where these three dimensions are not undermined by the optimization-driven focus of AI platforms but are structurally protected, both in how the platforms access and display content to generate answers, and in the regulatory environment that sustains them.

From Principle To Practice

To make an Artificial Integrity approach work, we would need systems for transparency and accountability. AI companies would have to be required to publish verifiable aggregated data showing whether users stop at their AI summaries or click outward to original sources. Crucially, to protect the users’ privacy, this disclosure would need to include only aggregated interaction metrics reporting overall patterns. This would ensure that individual user logs and personal search histories are never exposed.

Independent third-party auditors, accredited and overseen by regulators much like accounting firms are today, would have to verify these figures. Just as companies cannot self-declare their financial health but must submit audited balance sheets, AI platforms would no longer be able to simply claim they are supporting the web without independent validation.

In terms of economic integrity of information flows, environmental regulation offers a helpful analogy. Before modern environmental rules, companies could treat pollution as an invisible side effect of doing business. Smoke in the air or waste in the water imposed real costs on society, but those costs did not show up on the polluter’s balance sheet.

Emissions standards changed this by introducing clear legal limits on how much pollution cars, factories and power plants are allowed to emit, and by requiring companies to measure and report those emissions. These standards turned pollution into something that had to be monitored, reduced or paid for through fines and cleaner technologies, instead of being quietly pushed onto the public. 

In a similar way, Artificial Integrity thresholds could ensure that the value that AI companies extract from creators’ content comes with financial obligations to those sources. An integrity threshold could simply be a clear numerical line, like pollution limits in emissions standards, that marks the point at which an AI platform is taking too much value without sending enough traffic or revenue back to sources. As long as the numbers stay under the acceptable limit, the system is considered sustainable; once they cross the threshold, the platform has a legal duty to change its behavior or compensate the creators it depends on.
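To make the idea of a “clear numerical line” concrete, here is a deliberately oversimplified sketch of what such an audit check could look like. The metric (citations served per referral click delivered), the figures and the threshold of 20 are all invented for illustration; a real threshold would be set by regulators and measured from audited data.

# Hypothetical check: is the platform citing sources far more often than it sends traffic back?
awk -v citations=1200000 -v clicks=40000 -v threshold=20 'BEGIN {
    ratio = citations / clicks
    verdict = (ratio > threshold) ? "over threshold, remediation owed" : "within threshold"
    printf "citations per referral click: %.1f -> %s\n", ratio, verdict
}'
# prints: citations per referral click: 30.0 -> over threshold, remediation owed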

This could be enforced by national or regional regulators, such as competition authorities, media regulators or data protection bodies. Similar rules have begun to emerge in a handful of jurisdictions that regulate digital markets and platform-publisher relationships, such as the EU, Canada or Australia, where news bargaining and copyright frameworks are experimenting with mandatory revenue-sharing for journalism. Those precedents could be adapted more broadly as AI “answer engines” reshape how we search online.

These thresholds could also be subject to standardized independent audits of aggregated interaction metrics. At the same time, AI platforms could be required to provide publisher-facing dashboards exposing the same audited metrics in near real-time, showing citation frequency, placement and traffic outcomes for their content. These dashboards could serve as the operational interface for day-to-day decision-making, while independent audit reports could provide a legally verified benchmark, ensuring accuracy and comparability across the ecosystem.

In this way, creators and publishers would not be left guessing whether their contributions are valued. They would receive actionable insight for their business models and formal accountability. Both layers together would embed provenance integrity into the system: visibility for creators, traceability for regulators and transparency for the public. 

“Artificial Integrity thresholds could ensure that the value that AI companies extract from creators’ content comes with financial obligations to those sources.”

Enforcement could mix rewards and penalties. On the reward side, platforms that show where their information comes from and that help fund important public information resources could get benefits such as tax credits or lighter legal risk. On the penalty side, platforms that ignore these integrity rules could face growing fines, similar to the antitrust penalties we already see in the EU.

This is where the three dimensions come together: information provenance integrity in how sources are cited, economic integrity of information flows in how value is shared and the integrity of the shared information commons in how open resources are sustained.

Artificial Integrity for platforms that deliver AI-generated answers represents more than a set of technical fixes. By reframing AI-mediated information search not as a question of feature tweaks but as a matter of design, code and governance in AI products, it addresses a necessary rebalancing toward a fairer and more sustainable distribution of value on which the web depends, now and in the future.

The post The AI-Powered Web Is Eating Itself appeared first on NOEMA.


‘Cat’ Is a Purr-fect Celebration of Felines in Art Throughout the Centuries


In 1835, a tortoiseshell cat measuring more than three feet long was enough to warrant a small advertisement in a British newspaper announcing that, as “the greatest curiosity ever shown to the public,” it could be viewed at the Ship Tavern in London. Surely a pint of ale was the informal fee to view this extraordinary animal.

It was during the 18th and 19th centuries in Europe that cats became increasingly recognized as worthy pets, beyond their role as mousers. Breweries and distilleries often still “employ” a cat or two to keep the rodents out of the grain. From supernatural kaibyō in Japanese folklore to felines’ divine status in ancient Egypt, the animals have had an indelible influence on mythology, history, and our daily lives for a very long time.

A digital illustration by Xuan Loc Xuan of a white cat walking through nasturtiums
Xuan Loc Xuan, “Nasturtium Cat” (2023), digital painting, 9 7/8 × 11 3/8 inches. Image courtesy of the artist

Forthcoming from Phaidon, the book Cat celebrates, well, exactly what you’d expect. From contemporary sculpture and illustrations to early photography and internet memes, the volume runs the gamut of feline personalities and depictions in art throughout the millennia. Yet no matter how diverse the portrayals or how long ago they were created, the creatures’ expressiveness—even ridiculousness—is universally relatable.

Cat surveys an immense range of mediums and eras, from medieval illuminated manuscripts to modern street art. Colossal readers may be familiar with artists like Xuan Loc Xuan, Lee Sangsoo, and Utagawa Hiroshige, among many others, whose multimedia explorations of feline nature fill the playful tome.

Slated for release on February 11, Cat is available for pre-order in the Colossal Shop.

A cartoonish drawing of a blue-black cat by Bill Traylor
Bill Traylor, Untitled (Midnight Blue Cat) (c. 1939–42), poster paint on found cardboard, 11 × 8 inches. Image © Bill Traylor Family Inc. – WhosBillTraylor.com: Ricco/Maresca Gallery
An illustration by Hiroshige of a white, tailless cat with a ribbon around its neck, playing with another ribbon, set against a green background
Utagawa Hiroshige II, “A White Cat Playing with a String” (1863), woodcut, 8 3/8 × 10 1/2 inches. Image courtesy of the Minneapolis Institute of Art
A painting by Sally J. Han of a young woman sleeping in a colorful bed with a cat by her head
Sally J. Han, “Nap” (2022), acrylic paint on paper mounted on wood panel, 24 × 30 inches. © Sally J. Han. Photo by Jason Mandella
A 19th-century illustration of a tabby cat by Nathaniel Currier
Nathaniel Currier, “The Favorite Cat” (1838–48), hand-colored lithograph, 12 1/4 × 8 5/8 inches. Image courtesy of The Metropolitan Museum of Art
An oil painting by Jodie Niss of a cat slumped and sleeping comically in a corner by a mirror
Jodie Niss, Untitled (#2) (2022), oil on wood panel, 16 × 12 inches. Image courtesy of the artist
An array of 90 cat figurines, part of a multimedia artwork by Andy Holden
Andy Holden, “Cat-tharsis” (2022), 90 cat figurines and HD video with music by The Grubby Mitts, 17 minutes. Image courtesy of the artist and Charles Moffett, New York. Photo by Thomas Barratt

The article ‘Cat’ Is a Purr-fect Celebration of Felines in Art Throughout the Centuries appeared first on Colossal.
