Supreme Court affirms that AI-generated responses cannot be used as evidence in court proceedings

In the evolving landscape of Ukraine’s judicial system, the use of artificial intelligence as a source of evidence has become a matter of increasing debate.
The latest decision by the Cassation Economic Court within the Supreme Court clearly established that responses generated by AI systems are not recognized as reliable or scientifically substantiated sources of information in economic or civil cases.
According to the ruling dated July 8, 2025, in case No. 925/496/24, the court emphasized that AI-generated responses—such as those from Grok (developed by xAI) and ChatGPT (developed by OpenAI)—cannot serve as trustworthy evidence.
Judicial practice has consistently held that technology should be used solely to support, not to replace, judicial decision-making.
The authority to issue rulings must remain exclusively with judges; delegating this function to AI is strictly prohibited.
In the case under review, the AI responses were invoked not to aid the administration of justice but to challenge the court’s earlier findings, which runs counter to fundamental legal principles.
The court therefore rejected the motion to admit the AI responses into evidence, reaffirming that such responses are not scientifically validated sources of information.
The decision underscores the importance of judicial independence and the principle that human judgment remains irreplaceable in administering justice.