AML.T0024 (Realized)

Exfiltration via AI Inference API

Tactic: Exfiltration

This technique has been observed in real-world attacks on AI systems.

Adversaries may exfiltrate private information via [AI Model Inference API Access](/techniques/AML.T0040). AI models have been shown to leak private information about their training data (e.g., [Infer Training Data Membership](/techniques/AML.T0024.000), [Invert AI Model](/techniques/AML.T0024.001)). The model itself may also be extracted ([Extract AI Model](/techniques/AML.T0024.002)) for the purposes of [AI Intellectual Property Theft](/techniques/AML.T0048.004).
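The membership-inference variant can be illustrated with a minimal sketch. The idea is that overfit models tend to return higher confidence on inputs they were trained on, so an adversary with only inference API access can threshold the returned confidence to guess membership. Everything below is hypothetical: `query_model` stands in for a remote inference API, and `TRAINING_SET` simulates the victim's private training data, which a real attacker would not see.

```python
# Hedged sketch of a confidence-thresholding membership inference attack.
# query_model is a stand-in for a remote inference API, NOT a real service;
# TRAINING_SET simulates the victim's private data for demonstration only.

TRAINING_SET = {"record-a", "record-b"}

def query_model(x: str) -> float:
    """Stand-in inference API returning the model's top-class confidence.

    We fake the overfitting signal: training-set members get higher
    confidence than unseen inputs.
    """
    return 0.98 if x in TRAINING_SET else 0.61

def infer_membership(x: str, threshold: float = 0.9) -> bool:
    """Flag x as a likely training-set member when confidence exceeds threshold."""
    return query_model(x) > threshold

if __name__ == "__main__":
    print(infer_membership("record-a"))  # likely member
    print(infer_membership("record-z"))  # likely non-member
```

In practice the threshold is calibrated on shadow models or known non-members, and real attacks use richer signals (per-class loss, calibration gaps) than a single confidence score.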

Exfiltration of private training data raises privacy concerns, as that data may include personally identifiable information or other protected data.

Sub-techniques: 3