AI Supply Chain Rug Pull
This technique has been observed in real-world attacks on AI systems.
Adversaries may publish legitimate AI components or software, gain user adoption, and then push an update containing a malicious variant, leading to [AI Supply Chain Compromise](/techniques/AML.T0010). Scrutiny of a supply chain dependency is typically heaviest when it is first considered for inclusion in an AI system and is rarely repeated for subsequent updates. By performing a rug pull after that initial review, adversaries may bypass these defenses and be more likely to achieve [Initial Access](/tactics/AML.TA0004).
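The bypass works because trust is established once and later updates are pulled without re-verification. A minimal defensive sketch (all names and values hypothetical, not part of any specific toolchain) is to pin a cryptographic digest of the artifact at vetting time, so a rug-pulled update no longer matches:

```python
import hashlib

# Hypothetical digest recorded when the dependency was first vetted.
# A rug-pulled update changes the artifact bytes, so its digest changes too.
PINNED_SHA256 = hashlib.sha256(b"model-v1.0 weights").hexdigest()

def verify_artifact(artifact_bytes: bytes, pinned_digest: str) -> bool:
    """Return True only if the artifact matches the digest pinned at vetting time."""
    return hashlib.sha256(artifact_bytes).hexdigest() == pinned_digest

# The originally vetted artifact passes verification.
assert verify_artifact(b"model-v1.0 weights", PINNED_SHA256)

# A malicious update pushed after adoption (the rug pull) fails it.
assert not verify_artifact(b"model-v1.1 weights with backdoor", PINNED_SHA256)
```

Fetching an artifact by mutable reference (a package name, model repo tag, or "latest" label) instead of by pinned digest is precisely what makes this technique viable.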
Adversaries may publish malicious AI components via [Publish Poisoned Models](/techniques/AML.T0058), [Publish Poisoned Datasets](/techniques/AML.T0019), or [Publish Poisoned AI Agent Tool](/techniques/AML.T0104).
Adversaries may use other techniques, such as [AI Supply Chain Reputation Inflation](/techniques/AML.T0111), to gain user trust and increase adoption before performing the rug pull.