X is piloting a program in which AI chatbots write Community Notes.

X's new pilot program for AI-drafted Community Notes sets up a high-stakes trade-off between speed and accuracy in tackling the relentless stream of information on social platforms. These AI tools promise fact-checking at unprecedented scale, but concerns linger about whether they can reliably discern fact from fiction, compounded by the risk of AI "hallucinations" and the need for stringent human oversight.

Nathan Mercer

July 1, 2025

As X embarks on a pilot program using AI chatbots to craft Community Notes, we find ourselves at a curious intersection of technology and fact-checking. According to TechCrunch, these AI-generated notes will undergo the same rigorous vetting process as those written by humans, an intriguing, yet slightly concerning, prospect.

The idea here, ostensibly, is efficiency. AI can potentially churn out fact-checks at a pace no human team could match. This could mean quicker turnaround times for validating the firehose of information (and misinformation) that floods social platforms daily. However, AI is notorious for what researchers delicately call "hallucinations" - spitting out data that, while sometimes believable, isn’t rooted in reality. This isn't just a small glitch. When the task at hand is distinguishing fact from fiction, grounding in reality isn't just important, it's the whole game.

There’s a deeper layer of complexity in where these AI tools come from. X allows integration with third-party LLMs, which widens the surface for errors. Imagine, if you will, an AI tuned to prioritize politeness over precision, as seen recently with OpenAI’s ChatGPT. Politeness in a fact-checker is akin to sugar in medicine - it makes the output easier to swallow, whether or not it’s correct.

The proposed solution to counterbalance these AI peculiarities is a robust human oversight component. AI-generated notes are to be scrutinized by human raters before publication, maintaining a critical human touch. But let's not kid ourselves: human raters aren’t exactly lining up around the block, enthusiastic to fact-check AI outputs. The risk that human oversight becomes a bottleneck or, worse, a rushed rubber stamp, is non-trivial.
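The review gate described above can be sketched in a few lines. To be clear, this is a minimal illustration of the general pattern, not X's actual system: the class names, the minimum-rater count, and the helpfulness threshold are all assumptions made up for this example.

```python
# Illustrative human-in-the-loop gate: AI-drafted notes publish only after
# enough human raters have judged them helpful. All names and thresholds
# here are assumptions for illustration, not X's real parameters.
from dataclasses import dataclass, field

@dataclass
class DraftNote:
    text: str
    author: str                                   # "ai" or "human"; both face the same gate
    ratings: list = field(default_factory=list)   # human votes: True = helpful

MIN_RATINGS = 5           # assumed minimum number of human raters
HELPFUL_THRESHOLD = 0.8   # assumed fraction of "helpful" votes required

def can_publish(note: DraftNote) -> bool:
    """A note publishes only after sufficient human review."""
    if len(note.ratings) < MIN_RATINGS:
        return False  # human oversight is, by design, the gating step
    helpful_share = sum(note.ratings) / len(note.ratings)
    return helpful_share >= HELPFUL_THRESHOLD
```

Notice that if raters trickle in slowly, drafts simply queue up unpublished - which is exactly the bottleneck risk: the gate's throughput is capped by human attention, no matter how fast the AI drafts.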

We must consider whether this AI initiative is genuinely about enhancing factual understanding, or if it’s simply a cost-cutting maneuver disguised in innovation’s clothing. As platforms like X push the envelope on AI involvement, the potential is undeniable but so are the pitfalls. It's a tightrope walk between technological advancement and the preservation of informational integrity.

We'll be watching closely at Radom to see how these initiatives develop, particularly where they intersect with broader trends in information accuracy across platforms, a topic we've touched on before in our discussions of crypto regulation and fintech innovation.
