insidejob
AML.T0011.002 Realized

Poisoned AI Agent Tool

This technique has been observed in real-world attacks on AI systems.

A victim may invoke a poisoned tool when interacting with their AI agent. A poisoned tool may execute an [LLM Prompt Injection](/techniques/AML.T0051) or perform [AI Agent Tool Invocation](/techniques/AML.T0053).

Poisoned AI agent tools may be introduced into the victim's environment via [AI Software](/techniques/AML.T0010.001), or the user may configure their agent to connect to remote tools.