AML.T0112.001 Feasible

AI Artifacts

This technique is theoretically possible but has not been publicly demonstrated.

Adversaries may achieve full system compromise by introducing malicious AI artifacts, such as models or data, that contain embedded malware or other malicious commands. AI artifacts are often stored in model registries or data stores and may affect many systems that pull these resources.

Malicious content stored in AI artifacts may be executed when the artifact is loaded via an unsafe serialization format (e.g., Python pickle), or by other scripts or notebooks bundled with the artifact.
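To illustrate the serialization path, the sketch below shows how pickle's `__reduce__` protocol lets a crafted artifact make the loader invoke an arbitrary callable at deserialization time. The payload here is deliberately harmless (it calls `list("abc")`); in a real attack the same hook could invoke `os.system` or similar. The `MaliciousArtifact` class name is illustrative, not from any real incident.

```python
import pickle

class MaliciousArtifact:
    """Stand-in for a trojanized model file. __reduce__ tells the pickle
    loader to call an attacker-chosen callable with attacker-chosen
    arguments during load; here the payload is harmless."""
    def __reduce__(self):
        # (callable, args) executed by pickle.loads; a real payload could
        # be os.system("...") or any other importable callable.
        return (list, ("abc",))

blob = pickle.dumps(MaliciousArtifact())  # the artifact pushed to a registry
result = pickle.loads(blob)               # a victim pipeline loading it
# The load did not return a MaliciousArtifact at all: it ran list("abc").
print(result)
```

This is why loading pickled models from untrusted registries is equivalent to executing code from them, and why safer formats (which store only tensors, not callables) are commonly recommended instead.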