Discover AI Model Outputs
Tactic: Discovery
This technique has been demonstrated in research or controlled environments.
Adversaries may discover model outputs, such as class scores, that are not required for the system to function and are not intended for use by the end user. Model outputs may be found in logs or included in API responses. These outputs may enable the adversary to identify weaknesses in the model and develop attacks.
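As a minimal sketch of the idea above: a client application may only consume a single prediction label, while the API response carries additional fields the adversary can harvest. The response fields (`label`, `class_scores`, `model_version`) and the `find_extra_outputs` helper below are hypothetical, for illustration only.

```python
import json

# Hypothetical JSON response from an image-classification API.
# The UI only displays "label"; the remaining fields are model
# outputs not intended for the end user.
response_body = json.dumps({
    "label": "cat",
    "class_scores": {"cat": 0.91, "dog": 0.07, "fox": 0.02},
    "model_version": "resnet50-v2.3",
})

# Fields the client application is known to use.
EXPECTED_FIELDS = {"label"}

def find_extra_outputs(body: str) -> dict:
    """Return response fields beyond what the end-user UI needs."""
    data = json.loads(body)
    return {k: v for k, v in data.items() if k not in EXPECTED_FIELDS}

extras = find_extra_outputs(response_body)
for field, value in extras.items():
    print(f"unintended output: {field} = {value}")
```

Here the per-class scores reveal the model's confidence distribution, which can inform attacks such as crafting adversarial examples, and the version string hints at the underlying architecture.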