IT – Can the Answer of AI Support an Inventive Step Attack?

14 May 2026

Niccolò Ferretti

Nunziante Magrone

In a recent decision, the Turin Court of Appeal addressed an emerging issue: can AI-generated outputs support an inventive step attack?

The appellant relied on answers from Google and ChatGPT to argue that the claimed solution reflected common general knowledge.

The Court’s response was clear:

1. Procedural inadmissibility: AI outputs, filed late, were treated as new documents and rejected;

2. No proven reliability: no evidence was provided regarding the scope and quality of the training data or the AI's ability to avoid "hallucinations".

AI-generated answers were therefore deemed unsuitable as evidence.

The key message:

AI tools may be useful for exploration, but they are not (yet) evidence, at least not without a clear demonstration of reliability and methodological transparency.

A copy of the judgment (in Italian) can be read here.

A copy of the judgment (in English) can be read here.