ID: AML.T0076
Maturity: Realized

Corrupt AI Model

Tactic: Defense Evasion

This technique has been observed in real-world attacks on AI systems.

An adversary may deliberately corrupt a malicious AI model file so that it cannot be fully deserialized, evading detection by model scanners that rely on successful deserialization. The corrupted model may still execute its malicious code before deserialization fails.