Colombian Court's Ruling Dismissing an Appeal Over AI Writing Is Itself Flagged by the Same AI Detector

In a striking turn of events, Colombia's Supreme Court has been caught in its own web of AI detection, as its ruling dismissing an appeal for AI involvement was itself found to be predominantly AI-authored. This incident, revealing a 93% AI composition in the court's ruling, underscores the urgent need for a reevaluation of AI tools in legal judgments, where their reliability and transparency are now seriously in question.

Chris Wilson

March 5, 2026

The irony in Colombia's Supreme Court's recent decision is not just palpable; it's a plot twist worthy of Kafka. In rejecting a cassation appeal on the grounds of AI assistance, only to have its own ruling flagged by the same AI detection software, the court has unwittingly showcased the precariousness of relying on AI tools in legal judgments. This episode reveals significant flaws in the AI detection technology that's increasingly infiltrating the judicial system.

Colombia’s Supreme Court argued that the appeal was predominantly authored by AI, as evidenced by an analysis via the Winston AI tool, which purportedly found just 7% human contribution. However, when scrutinized using the same technology, the court's own ruling allegedly contained 93% AI-generated content. This information came to light through an investigation by attorney Emmanuel Alessio Velasquez, who used the court-cited Winston software to analyze the ruling. If these tools are as unreliable as they appear to be, one must question their use in legal settings where the stakes, access to justice, are immensely high.

Indeed, this isn't an isolated incident of AI detection tools delivering questionable results. A 2023 study published in Patterns found that over 61% of TOEFL essays by non-native English speakers were mistakenly flagged as AI-generated. Further, Turnitin admitted that its own detector produced high false positive rates when the AI content in a document was below 20%, highlighting the inherent flaws in current AI detection methodologies.

These tools typically analyze statistical patterns including sentence length and vocabulary predictability. However, they struggle with texts that naturally share these characteristics, such as legal documents or academic papers. Consequently, the very people who rely on precision and formality in their writing, including legal professionals, academics, and non-native speakers, are most vulnerable to being unfairly penalized by these technologies.
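To see why formal prose trips these detectors, consider a deliberately simplified sketch of the two signals described above: sentence-length uniformity and vocabulary predictability. This is not how Winston AI or any commercial detector actually works (their models are proprietary); the feature choices and cutoff values here are invented purely for illustration.

```python
# Toy sketch of statistical AI detection, assuming two hand-picked signals:
# sentence-length variation ("burstiness") and lexical diversity.
# All thresholds are illustrative assumptions, not real detector values.
import re
import statistics


def burstiness(text: str) -> float:
    """Std deviation of sentence lengths; low values read as 'machine-like'."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)


def type_token_ratio(text: str) -> float:
    """Vocabulary diversity: unique words divided by total words."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return len(set(words)) / len(words) if words else 0.0


def looks_ai_generated(text: str,
                       burst_cutoff: float = 4.0,
                       ttr_cutoff: float = 0.5) -> bool:
    # Uniform sentence lengths AND low lexical diversity -> flag as AI.
    # Formal legal prose often has exactly these traits, which is why
    # rule-like heuristics of this kind produce false positives on it.
    return burstiness(text) < burst_cutoff and type_token_ratio(text) < ttr_cutoff
```

A repetitive, formulaic passage (as legal boilerplate often is) trips both cutoffs even when written entirely by a human, which is the structural weakness the article describes.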

This debacle also underscores a broader issue: the uncritical adoption of AI tools without sufficient transparency. As Carlos Alejandro Torres Pinedo pointed out, no publicly accessible tool can accurately quantify AI involvement in text creation, and the lack of transparency about the algorithms’ functioning further complicates their reliability. If the code and mechanisms behind these AI detectors remain opaque, their role in judicial processes becomes even more contentious.

What’s more problematic is the potential conflict of interest. Tools like the Winston AI detector often suggest 'humanizing' the content through their paid services after flagging it as AI-generated, as noted by criminal defense lawyer Andres F. Arango G. This commercial angle could influence the objectivity of these tools, transforming them into revenue-generating engines rather than impartial analytical tools.

In light of these concerns, Colombia’s judiciary needs to reevaluate its reliance on AI detection software. Rather than a blanket integration of such technology, a more nuanced approach is needed. Incorporating AI tools for administrative tasks might streamline operations, but applying them to critical legal decisions, as this case demonstrates, can undermine trust in judicial outcomes.

For those at the intersection of technology and law, this scenario is a stark reminder of the necessity for rigorous scrutiny and regulatory oversight of AI tools in legal settings. As we continue to examine AI's role in the law, it is crucial that we do so with a commitment to upholding the integrity of judicial processes. Decisions like those seen in Colombia not only challenge the legitimacy of individual rulings but also risk eroding public confidence in the legal system at large.

If anything, this incident should act as a catalyst for all stakeholders in the judicial and technological fields to engage in a deeper dialogue about the role of AI in law, the reliability of AI tools, and the ethical implications of their use. The goal should not be to eliminate AI from the judicial process but to leverage it responsibly, enhancing rather than undermining the administration of justice.
