Mitigating Secondary Hallucinations in Judicial RAG Systems: Risks and Legal Governance

  • Abstract: Large language models (LLMs), a representative class of artificial intelligence (AI) systems, operate on a foundational logic of probabilistic prediction and are therefore inherently prone to "hallucinations": outputs that deviate from factual reality. To address this problem, judicial practice has adopted Retrieval-Augmented Generation (RAG) technology, which first retrieves authoritative case law and statutes and then uses them to ground text generation, aiming to improve the normativity and accuracy of AI-generated results. However, the judicial application of RAG does not eradicate hallucinations; rather, it gives rise to a new set of problems, including associative dislocation, synthetic fabrication, and leading bias. Cloaked in the authoritative appearance of "retrieved reference materials," these problems infiltrate the judicial process and pose serious "secondary hallucination" risks to judicial activities that demand evidentiary rigor and adjudicative authority. An effective regulatory framework for these risks is therefore urgently needed. At the level of procedural regulation, a mandatory traceability review of generated content should be instituted; in terms of liability regulation, clear criteria for determining judicial fault in the presence of hallucinations should be defined; regarding interaction regulation, a user-friendly human-machine collaborative correction mode should be created; and at the level of standards regulation, market-entry standards for LLMs applied in the legal domain should be established. Only through these pathways can the secondary hallucination risks of RAG in judicial settings be effectively governed, keeping machine rationality subordinate to legal rationality and achieving digital justice while maintaining technological restraint.
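The retrieve-then-generate pattern the abstract describes can be sketched as follows. Everything here is illustrative: the toy corpus, the keyword-overlap scoring, and the `generate` stub are assumptions standing in for a real embedding index and an LLM call.

```python
import re

# Toy corpus standing in for an authoritative legal database (hypothetical content).
CORPUS = {
    "case-001": "Contract disputes are governed by the provisions on breach of contract.",
    "case-002": "Tort liability requires fault, damage, and a causal link between them.",
    "case-003": "Illegally obtained evidence shall be excluded from criminal proceedings.",
}

def tokenize(text: str) -> set[str]:
    """Lowercase word set; punctuation and digits are dropped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap (stand-in for a vector index)."""
    q = tokenize(query)
    ranked = sorted(CORPUS, key=lambda doc_id: len(q & tokenize(CORPUS[doc_id])),
                    reverse=True)
    return ranked[:k]

def generate(query: str) -> str:
    """Stand-in for the LLM call: an answer explicitly grounded in retrieved passages."""
    doc_ids = retrieve(query)
    context = " ".join(CORPUS[d] for d in doc_ids)
    return f"[grounded in {', '.join(doc_ids)}] {context}"

print(generate("elements of tort liability"))
```

Note that the retrieval step constrains but does not guarantee the faithfulness of the generation step, which is precisely where the secondary hallucinations discussed above arise.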

     
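The mandatory traceability review proposed at the procedural level could, in its simplest automated form, flag generated sentences that cannot be matched back to any retrieved source. The sketch below is a crude word-overlap heuristic under assumed inputs; a real review system would need semantic matching and human confirmation.

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase word set; punctuation and digits are dropped."""
    return set(re.findall(r"[a-z]+", text.lower()))

def unsupported_sentences(answer: str, sources: list[str],
                          threshold: float = 0.6) -> list[str]:
    """Return sentences whose best word overlap with any source falls below threshold."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = tokenize(sentence)
        if not words:
            continue
        support = max(len(words & tokenize(s)) / len(words) for s in sources)
        if support < threshold:
            flagged.append(sentence)
    return flagged

# Hypothetical example: the second sentence cites a provision absent from the sources.
sources = ["Tort liability requires fault, damage, and a causal link between them."]
answer = ("Tort liability requires fault and damage. "
          "Article 99 of the Imaginary Code abolishes the fault requirement.")
print(unsupported_sentences(answer, sources))
```

Sentences flagged by such a check would be routed to the human-machine collaborative correction step rather than entering the judicial record directly.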

