AI Disclosure Rules: A Call for Transparency
Examining the need for mandatory disclosure rules when AI systems interact with the public, and the implications for trust and accountability.
AI Disclosure Rules Aren't About Integrity, They're About Control

By: Rich Vengels
December 16, 2025

Courts across the country are beginning to require mandatory "AI disclosure" or "AI certification" statements in legal filings. These rules are often justified as necessary to preserve judicial integrity and prevent fabricated citations or facts from entering the record. That explanation is convenient. It is also incomplete.

What these requirements actually do is shift institutional risk away from the judiciary and licensed attorneys and onto filers who already sit at the bottom of the legal hierarchy, especially pro se litigants. In doing so, they reinforce a double standard that has existed for decades and is now being exposed by artificial intelligence.

This Problem Existed Long Before AI

Nothing about fabricated arguments, misleading citations, or careless filings is new. Attorneys have always been capable of making things up, overstating authority, or submitting sloppily researched briefs. Courts have always relied on professional norms, Rule 11, and selective enforcement to manage that risk.

What AI changed was not the existence of error; it changed the visibility of it. AI made it possible for errors to appear systematic rather than isolated, and for those errors to be discovered after the fact. That embarrassed the system. So the system responded:

• Not by improving verification.
• Not by modernizing citation checks.
• Not by holding everyone to the same standard.

But by demanding new attestations.

A Quiet but Important Shift

Traditionally, attorneys avoid sworn factual statements wherever possible. Certificates of service, for example, are deliberately not affidavits. They are not sworn under oath. They exist precisely to avoid perjury exposure while preserving procedural efficiency. That is not a flaw; it is a design choice.
Yet AI disclosure requirements flip this logic for certain filers. They compel individuals to affirmatively certify facts about how a document was prepared, under penalty of sanctions, contempt, or worse, as a condition of accessing the court. Functionally, this is not far removed from a forced affidavit. And that should concern anyone who cares about constitutional protections, particularly the Fifth Amendment's prohibition on compelled self-incrimination and the broader principle that access to the courts should not require self-exposure to future punishment.

The Double Standard No One Wants to Name

Here is the uncomfortable reality: attorneys are using AI extensively. Judges know this. Courts know this. Legal organizations know this. But attorneys are insulated by professional status, discretionary enforcement, and long-standing norms that treat their representations as presumptively trustworthy. Pro se litigants are not.

AI has done something unprecedented: it has allowed non-lawyers to produce filings that look, sound, and function like attorney work. That erodes the informal gatekeeping mechanisms courts have relied on for generations. For the first time, legal fluency is no longer exclusive to those who passed a bar exam or can afford hourly rates that rival mortgage payments. That is the real disruption.

Why "Judicial Integrity" Rings Hollow

If these rules were truly about integrity, courts would pursue solutions that actually improve accuracy:

• Automated citation verification tools
• Mandatory source attachments for novel authority
• Reciprocal disclosure of AI use by courts and clerks
• Uniform sworn standards applied equally to all filers
Instead, courts chose paperwork, selectively imposed. That choice reveals the real priority: liability transfer, not truth. By forcing filers to certify process details, courts insulate themselves from responsibility for what they accept, rely on, or fail to scrutinize. If something goes wrong later, the blame has already been assigned.

The Pro Se Problem No One Will Admit

Pro se litigants rely on AI more heavily because they must. They do not have law clerks, Westlaw subscriptions, or institutional patience. AI is not a shortcut for them; it is an access tool.

When courts impose additional disclosure burdens on those litigants while tolerating informal, unsworn representations from licensed attorneys, they are not regulating technology. They are policing class. That raises serious due process and equal protection concerns, whether courts acknowledge them or not.

The Long-Term Risk

The danger is not that courts will misuse AI rules intentionally. It is that these rules will quietly normalize a two-tier system:

• AI as an invisible efficiency tool for insiders
• AI as a regulated liability trap for outsiders

That is not justice. It is credentialism enforced by procedure.

AI did not break the legal system. It exposed how much of it rests on trust, status, and discretionary forgiveness rather than consistent standards. The question now is whether courts will respond by modernizing honestly, or by retreating behind new forms of attestation that preserve power while claiming neutrality. History suggests the answer.

"Artificial intelligence did not erode judicial integrity. It challenged the monopoly on legal fluency, and the system is reacting exactly as monopolies always do."