Abstract:
Large language models (LLMs), as a representative form of artificial intelligence (AI), operate on a foundational logic of probabilistic prediction, rendering them inherently susceptible to “hallucinations”, a phenomenon in which outputs deviate from factual reality. To address this challenge, judicial practice has adopted Retrieval-Augmented Generation (RAG), which seeks to enhance the normative soundness and accuracy of AI-generated outputs by retrieving authoritative legal precedents and statutes before generating text. However, the judicial application of RAG does not eradicate hallucinations; rather, it gives rise to a new set of problems, including associative dislocation, synthetic fabrication, and leading bias. Cloaked in the “authoritative façade” of retrieved materials, these problems infiltrate the judicial process, posing severe “secondary hallucination” risks to legal activities that prize evidentiary rigor and adjudicative authority. Consequently, there is an urgent need for a robust regulatory framework to mitigate the secondary hallucination risks of RAG in judicial settings. At the level of procedural regulation, a mandatory traceability and review system for generated content must be instituted; in terms of liability regulation, clear criteria for determining judicial fault in the context of hallucinations should be defined; regarding interaction regulation, a user-friendly human-machine collaborative correction mechanism ought to be created; and, as to standards-based regulation, rigorous market entry standards for LLMs in the legal domain should be established. Only through these pathways can the secondary hallucination risks of RAG be effectively governed, ensuring that machine rationality remains subordinate to legal rationality and achieving digital justice with technological humility.
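
To make the retrieve-then-generate pattern the abstract describes concrete, the following is a minimal, illustrative Python sketch. All names (LegalDocument, retrieve, build_prompt) and the keyword-overlap retriever are assumptions introduced for exposition, not any real judicial system's or library's API; a production RAG system would use a vector index over an authoritative corpus and pass the prompt to an LLM.

```python
# Minimal retrieve-then-generate (RAG) sketch; all names are illustrative
# assumptions, not a real judicial system's API.
from dataclasses import dataclass


@dataclass
class LegalDocument:
    citation: str  # e.g. statute number or case citation
    text: str


def retrieve(query: str, corpus: list[LegalDocument], top_k: int = 3) -> list[LegalDocument]:
    """Toy retriever: rank documents by naive keyword overlap with the query.
    A real system would use semantic (vector) search over an authoritative corpus."""
    query_terms = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(query_terms & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_prompt(query: str, sources: list[LegalDocument]) -> str:
    """Ground the answer in retrieved sources and keep citations traceable,
    so a reviewer can check each generated claim against its origin."""
    context = "\n".join(f"[{d.citation}] {d.text}" for d in sources)
    return (
        "Answer using ONLY the sources below and cite them by bracketed label.\n"
        f"Sources:\n{context}\n\nQuestion: {query}\n"
    )


# The prompt would then be passed to an LLM. Even with retrieval, the model
# may still mis-associate, blend, or over-read the sources ("secondary
# hallucination"), which is why the abstract calls for mandatory traceability
# and review of generated content.
```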