Abstract:
Artificial intelligence forensic algorithms are technical systems that use machine learning and related methods to automatically analyze case-related data during judicial evidence gathering and to generate conclusions bearing on the facts of a case. As these algorithms become widespread in judicial practice, algorithmic conclusions are gradually entering the evidentiary system and influencing fact-finding. As methods of evidence collection evolve, existing criminal procedural review mechanisms, which operate only at the level of outcomes, can no longer ensure the reliability of evidence. The institutional focus must therefore shift upstream, establishing standardized rules for algorithm generation to remedy the shortcomings of traditional review pathways. At present, the judicial application of forensic algorithms suffers from an absence of standards, unclear review pathways, and ambiguous accountability, leaving algorithmic conclusions difficult to subject to effective procedural scrutiny and to cross-examination by the parties. It is therefore necessary to center the analysis on credibility in the judicial context, distinguishing technical credibility from legal credibility, and to propose a systematic institutional framework along both static and dynamic dimensions. Through the synergy of static standards and dynamic institutional mechanisms, the evaluation of algorithmic conclusions can shift from result-based traceability to process-based control.
The aim is to reshape procedural justice in AI-based evidence collection in criminal proceedings through the institutionalized expression of algorithmic credibility, ensuring that algorithmic conclusions operate within a framework that is traceable, verifiable, and open to challenge, thereby achieving a deep integration of technological efficacy and judicial credibility.