The people who create social media platforms say they want a good portion of the content to land on the entertaining-to-informative axis. To me, the result reads more like mining our collective attention by turning the short-form video format into a slot machine that relentlessly targets our dopamine and fear receptors until we’re all a bunch of strung-out content junkies. But for the sake of argument, let’s assume these platforms are interested in more than just stacking money so high it disrupts air traffic.
We live in an age where AI-generated slop pours into your feed from every angle—news anchors, influencers, fitness coaches, body cam footage that looks and sounds real but isn’t. At this point we’ve all wrung our hands over AI-generated misinformation (and disinformation!), and rightly so, but what about plain old AI-generated information? I think it also deserves a fair bit of hand-wringing.
One of the goals of Riddance is to help people reliably converge on true beliefs without being coerced. And boy howdy, AI personas are not gonna help you get there. The capacity for one human to impart knowledge to another human is a very subtle one. It happens only infrequently on social media from what I can tell, but it does happen.
I don’t think that AI personas, even when they say words that are true, can transfer knowledge to you. And so insofar as a piece of content purports to be informative (a thing these companies say they care about), if it’s delivered by an AI, you don’t walk away with knowledge.
Two examples of getting knowledge by testimony
Imagine your friend calls to say they’ve run out of gas on the highway and need a pickup. You believe them. Suppose it’s true their car is out of gas, and you’re justified in believing them—you know this person, they’ve never lied to you about something like this before, and you can hear the frustration over the phone. Congratulations! You’ve just gained a piece of knowledge by testimony. Now get in your car and go help them out.
Some things to notice about this exchange while you drive over there. Your friend is your friend; you know who they are and have background reasons to trust them. You can ask them follow-up questions (“Which highway?” “Why are you so dumb?”). And most important of all, there are conversational stakes: if you drove all the way out there and it turned out they were pranking you for a TikTok video, there would be consequences—wrath and reputational harm are some of the things that come to mind.
Now consider a different example that’s lightly paraphrased from a popular one in the testimony literature. Suppose a high school biology teacher goes down a few too many YouTube rabbit-holes and stops believing in evolution by natural selection. Bummer. Unfortunately for him, his textbook disagrees, and he has to teach the section on evolution despite not believing in evolution. Suppose he covers the material accurately and thoroughly and assigns the right readings. Do the students gain knowledge?
Most people’s intuition is yes, the students do learn. But in these cases our (oddly?) stoic biology teacher is functioning more like a conduit for knowledge rather than a source of knowledge, whereas the friend who needed gas seemed to be a source. Despite these two cases differing substantially, they have a lot in common that makes them both good candidates for knowledge by testimony.
There are conversational stakes for lying. The friend risks the friendship and social status; the teacher risks his job and professional reputation. These stakes create accountability.
The speaker is identifiable. You know who your friend is. Students know who the teacher is.
The listener has background reasons to treat them as reliable. Past interactions, institutional roles, and social context all play a part in generating trust.
Follow-up questions are possible. You can ask your friend if the gas light was on. Students can raise their hand in class.
How many of these properties do you think AI personas on social media have? If you answered none, you could stop reading now! But please don’t. In order to understand why certain AIs cannot testify knowledge, it’s necessary to bring some machinery from epistemology and the testimonial knowledge literature to the fore.
Lightning round: Epistemology and testimonial knowledge
Philosophers disagree about almost everything, including what exactly constitutes knowledge. However, epistemologists, philosophers who study knowledge, manage some agreement on the subject before views diverge. Most agree that knowledge that p, where p is some proposition (e.g., “The US and Israel just started a war in Iran”), requires at least the following to all hold:
Belief: in order for someone to know that p, they must believe that p.
Truth: in order for someone to know that p, p must be true.
Justification: in order for someone to know that p, they must be justified in their belief that p.
This is often referred to as the justified true belief (JTB) account, and while it sounds good, it’s widely agreed to be jointly insufficient for knowledge. So long as one doesn’t require absolute certainty (infallibility) for a belief to count as justified, there is always the possibility that something you believe is true and you are justified in believing it, but the reason the thing is true is not related to your justification. In fact, in these cases, your (fallible) justification turns out to be unwarranted. Situations like these are known as Gettier problems, named after Edmund Gettier who first identified them as a general phenomenon in his delightfully short paper on the subject.
Epistemologists who study testimony, used here as a term of art that roughly means a speaker asserting something to a listener, can be grouped in many ways, but the most common division is on the reductionist/non-reductionist spectrum.
Reductionists don’t think that testimony can be a fundamental source of knowledge. The idea is that the speaker is a reliable indicator but basically just another component of a broader justification pipeline. Their testimony rides on top of or bundles the other mechanisms one has for generating justified beliefs.
Non-reductionists think the reductionist view is too demanding, and so they treat testimony as its own basic source of justification, akin to visual perception or taste. And while this status as a source of justification is defeasible, in the normal case, knowledge as the norm of assertion is sufficient for one person to testify knowledge to another.
Hybrid views try to combine the best of both approaches described above. Some argue that testimony can be a basic type of justification, but only if it’s situated within a broader normative practice with shared expectations, sensitivity to defeaters, and sensitivity to error correction. On these views, in which testimony is still special, it really matters that it happens in an environment where testifiers have social and conversational stakes that curl back on them.
The examples from earlier, your friend who needs gas and the pilled teacher, reflect some of the fault lines in the testimony literature. But more importantly, the four qualities they have in common snap into focus. Justification might be described as a type of relationship between your belief in something and the truth of that belief. The more we know about what matters in that relationship the better.
Why AI personas cannot testify knowledge
Even if LLMs can transmit knowledge to a user in the right circumstances, AI avatars or personas are in a completely different boat. Interactions with chatbots, while far from perfect, actually do have a lot of the properties described above. You can push back on something they say, make a decision not to use them for certain tasks (counting the number of letters in certain fruits), and follow up with questions. What LLMs lack are stakes and accountability, whereas AI personas are lacking in all four departments. These public-facing personas don’t have any of the features we identified as common to successful instances of testifying knowledge.
No conversational stakes for lying. Reputational harm is the primary mechanism by which public personas are incentivized to speak truthfully and responsibly. Not only do AI personas suffer no reputational harm, there’s an active incentive for them to be irresponsible and inflammatory insofar as that generates engagement.
The speaker is not identifiable. When an AI avatar can change its appearance with a prompt, there is no stable identity for the user to pick out. An AI avatar of a news anchor could be generated by two different models with no shared context between two videos, and the viewer might have no way of knowing.
There are no background reasons to trust AI personas. There is no background to trust other than facts about the account, but this is justification from an entirely different source and has nothing to do with the AI.
Follow-ups are impossible. Users can comment on a video and sometimes get responses from the content creators. Follow-ups from AI personas, if they happen, will almost certainly not come from the AI itself.
While public-facing content in general is less amenable to the various checks and balances that listeners impose on speakers when deciding whether to update their beliefs on what is said, public-facing AI personas supercharge all of these problems. Given their chameleon status, and the ease with which they migrate to new accounts, reputational checks do little to deter AI personas from lying. Given this abject failure to meet basic and broad standards for testimonial knowledge acquisition, it follows that even if AI personas or avatars on social media tell you something true and you believe them, you don’t get knowledge because your belief is not justified.1
Implications for AI on social media
Even before AI, social media was relatively hostile to knowledge transfer via testimony. Many of the design features central to any short-form video platform seem custom-built to flout the requirements for testimonial knowledge outlined above. Short-form videos are hard to search over, hard to verify, have low stakes for lying, and present little follow-up opportunity for the user. Adding AI to the mix is just a six-fingered slap in the face to anyone hoping to reliably update their worldview from interacting with these content sources.
Even more unfortunate, the measures the platforms do have in place are basically optimized to prevent real people from pretending to be something or someone they aren’t. Many AI personas exist for the sole purpose of pretending to be someone or something they aren’t. Furthermore, the calculus for pushing AI personas onto these platforms is so different that the moderation policies are much less effective against AI. If I’m making public-facing AI content for engagement (whether that’s an influencer, a reaction video, a fake arrest, or an AI anchor), I care much less whether an account gets actioned than I do if I’m a real person putting in effort to post by hand. I can price the moderation actions into my operation (assume 60% get axed; how much AI slop do I have to make?) and go from there.
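To make the “price it in” calculus concrete, here’s a back-of-the-envelope sketch. The 60% takedown rate comes from the parenthetical above; everything else (the function name, the target of 100 live videos) is an illustrative assumption, not real platform data:

```python
import math

def videos_needed(target_live: int, takedown_rate: float) -> int:
    """Hypothetical slop economics: how many videos must be generated
    so that, in expectation, `target_live` of them survive moderation."""
    survival_rate = 1.0 - takedown_rate
    return math.ceil(target_live / survival_rate)

# Assuming 60% of posts get axed, keeping 100 videos live in
# expectation means generating 250 of them.
print(videos_needed(100, 0.60))  # → 250
```

The point of the arithmetic: because generation is nearly free, a moderation policy that removes even a majority of synthetic content just becomes a known overhead in the operator’s spreadsheet rather than a deterrent.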
Until social media platforms completely demonetize synthetic AI content and increase capacity for labelling synthetic content as such, it is unlikely this problem will go away. Insofar as social media platforms are places people go for knowledge and not just entertainment, AI content on these platforms is parasitic on that purpose.
1. Interestingly, if the AI avatar is so good you don’t know that it’s AI, and if the account is well-managed to say things that are true, and you believe what it says, we get back into Gettier territory.





