insidejob
AML.T0056 Feasible

Extract LLM System Prompt

Tactic: Exfiltration

This technique is theoretically possible but has not been publicly demonstrated.

Adversaries may attempt to extract a large language model's (LLM) system prompt. This can be done via prompt injection to induce the model to reveal its own system prompt or may be extracted from a configuration file.

System prompts can be a portion of an AI provider's competitive advantage and are thus valuable intellectual property that may be targeted by adversaries.