Full AI Model Access
Tactic: AI Model Access
This technique has been demonstrated in research or controlled environments.
Adversaries may gain full "white-box" access to an AI model, meaning they have complete knowledge of the model's architecture, its parameters, and its class ontology. With this level of access, they may exfiltrate the model and use it to [Craft Adversarial Data](/techniques/AML.T0043) and [Verify Attack](/techniques/AML.T0042) in an offline environment where their behavior is hard to detect.
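To illustrate why white-box access is so powerful, the sketch below shows a hypothetical adversary who knows every parameter of a toy logistic-regression model and uses that knowledge to take an FGSM-style gradient step that flips the prediction. The weights, the model, and the attack budget `eps` are all assumptions for illustration only; a real target would be a far larger stolen model, but the principle (exact gradients enable cheap, offline adversarial crafting) is the same.

```python
import math

# Hypothetical white-box model: the adversary knows every parameter exactly.
W = [0.5, -1.0, 2.0, -0.5]  # assumed weights, for illustration only
b = 0.1                     # assumed bias

def predict(x):
    """Logistic-regression score; > 0.5 means class 1."""
    z = sum(wi * xi for wi, xi in zip(W, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

def sign(v):
    return 1.0 if v > 0 else -1.0

# Clean input, classified as class 1.
x = [0.2 * sign(wi) for wi in W]

# White-box, FGSM-style step: with full knowledge of W, the gradient of the
# score with respect to x is known exactly, so the adversary perturbs each
# feature against sign(W) to push the score below the decision threshold.
eps = 0.5
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, W)]

print(predict(x) > 0.5)      # clean input: class 1
print(predict(x_adv) > 0.5)  # perturbed input: prediction flipped
```

Because the adversary runs this entirely against their own copy of the model, none of these crafting or verification queries ever reach the victim's infrastructure, which is what makes the offline phase so difficult to detect.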