MITRE ATLAS v5.5.0 — Sunglasses Technique Mapping

Our 35 threat categories mapped to MITRE ATLAS v5.5.0 techniques, the standard knowledge base of adversarial threats against AI systems. ATLAS as a whole spans 16 tactics, 101 techniques, and 66 sub-techniques; we map directly to its LLM- and agent-focused subset.

ATLAS techniques mapped: 22 · Tactics covered: 9 · Our categories: 28 · Detection patterns: 248
ATLAS vs. other frameworks: ATLAS is behavior- and adversary-focused, so each technique describes what an attacker does. Our mapping shows which ATLAS techniques a Sunglasses finding provides detection evidence for at the content/input layer.

Priority technique mappings

AML.T0051 · LLM Prompt Injection (incl. sub-techniques .000 Direct, .001 Indirect, .002 Triggered)
Tactic: Execution

Core prompt-injection detection surface

prompt_injection, indirect_prompt_injection, parasitic_injection, hidden_instruction
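As a purely illustrative sketch of the kind of content-layer check these categories represent (the patterns and function name below are hypothetical, not Sunglasses' actual rules), a minimal phrase-based heuristic might look like:

```python
import re

# Hypothetical phrase-level heuristics for prompt-injection evidence.
# Real engines combine many such patterns with context and scoring.
INJECTION_PATTERNS = [
    re.compile(r"\bignore (?:\w+\s+){0,3}instructions\b", re.I),
    re.compile(r"\byou are now (?:in\s+)?(?:developer|dan) mode\b", re.I),
    re.compile(r"\bsystem prompt\b", re.I),
]

def flag_prompt_injection(text: str) -> list[str]:
    """Return the patterns that matched, as detection evidence."""
    return [p.pattern for p in INJECTION_PATTERNS if p.search(text)]
```

A single phrase match is weak evidence on its own ("system prompt" appears in benign text too); production detection weighs multiple signals per finding.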
AML.T0068 · LLM Prompt Obfuscation
Tactic: Defense Evasion

Encoding and obfuscation evasion

encoded_payload, encoding_evasion, invisible_unicode, rtl_obfuscation, unicode_evasion, code_switching
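A minimal sketch of the checks behind invisible_unicode and rtl_obfuscation (the code-point list is a small illustrative subset, not the full detection set): flag characters that are invisible or that reorder displayed text.

```python
# Hypothetical subset of code points an invisible-Unicode scanner flags.
SUSPICIOUS_CODEPOINTS = {
    "\u200b": "zero-width space",
    "\u200c": "zero-width non-joiner",
    "\u200d": "zero-width joiner",
    "\u202e": "right-to-left override",
    "\u2066": "left-to-right isolate",
    "\ufeff": "zero-width no-break space (BOM)",
}

def scan_invisible_unicode(text: str) -> list[tuple[int, str]]:
    """Return (index, description) for each suspicious code point."""
    return [(i, SUSPICIOUS_CODEPOINTS[ch])
            for i, ch in enumerate(text)
            if ch in SUSPICIOUS_CODEPOINTS]
```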
AML.T0093 · Prompt Infiltration via Public-Facing Application
Tactic: Initial Access / Persistence

Parasitic and indirect injection via public surfaces

parasitic_injection, indirect_prompt_injection
AML.T0056 · Extract LLM System Prompt
Tactic: Exfiltration

System-prompt extraction probes

prompt_extraction, prompt_leak
AML.T0057 · LLM Data Leakage · AML.T0025 · Exfiltration via Cyber Means · AML.T0086 · Exfiltration via AI Agent Tool Invocation · AML.T0077 · LLM Response Rendering
Tactic: Exfiltration

Data-exfiltration detection

exfiltration, prompt_leak, dns_tunneling
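One illustrative heuristic behind the dns_tunneling category (a sketch under the assumption that tunneled data shows up as long, encoded-looking subdomain labels; thresholds here are invented):

```python
import re

# Hypothetical rule: a leading DNS label of 24+ hex characters often
# indicates data encoded into a hostname for exfiltration.
ENCODED_LABEL = re.compile(r"[a-f0-9]{24,63}", re.I)

def looks_like_dns_tunneling(hostname: str) -> bool:
    """Flag hostnames whose non-registrable labels resemble payloads."""
    labels = hostname.rstrip(".").split(".")
    # Skip the last two labels (registrable domain + TLD).
    return any(ENCODED_LABEL.fullmatch(label) for label in labels[:-2])
```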
AML.T0080 · AI Agent Context Poisoning (incl. .000 Memory, .001 Thread)
Tactic: Persistence

Context and memory poisoning

memory_poisoning, parasitic_injection
AML.T0104 · Publish Poisoned AI Agent Tool · AML.T0110 · AI Agent Tool Poisoning · AML.T0099 · AI Agent Tool Data Poisoning
Tactic: Resource Development / Persistence

Tool-metadata poisoning (MCP + tool layer)

tool_poisoning, mcp_threat
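To illustrate what tool-metadata poisoning detection can mean in practice (hypothetical patterns and function name, not the Sunglasses rule set): a tool's description should describe a capability, not issue instructions to the model.

```python
import re

# Hypothetical directive phrases that do not belong in tool metadata.
DIRECTIVE = re.compile(
    r"(ignore .{0,40}instructions"
    r"|do not (tell|inform) the user"
    r"|before (using|calling) any other tool)",
    re.I,
)

def scan_tool_description(description: str) -> bool:
    """True if the tool metadata contains model-directed instructions."""
    return bool(DIRECTIVE.search(description))
```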
AML.T0010 · AI Supply Chain Compromise · AML.T0109 · AI Supply Chain Rug Pull · AML.T0111 · AI Supply Chain Reputation Inflation
Tactic: Initial Access / Defense Evasion

Supply-chain runtime signals

supply_chain, mcp_threat, tool_poisoning
AML.T0053 · AI Agent Tool Invocation · AML.T0081 · Modify AI Agent Configuration · AML.T0103 · Deploy AI Agent
Tactic: Execution / Privilege Escalation

Agent workflow and configuration attacks

agent_security, agent_workflow, agent_workflow_security
AML.T0054 · LLM Jailbreak · AML.T0105 · Escape to Host
Tactic: Privilege Escalation / Defense Evasion

Jailbreak and sandbox-escape attempts

privilege_escalation, sandbox_escape, auth_bypass, authorization_bypass
AML.T0050 · Command and Scripting Interpreter
Tactic: Execution

Command injection and code execution payloads

command_injection
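A minimal sketch of a command_injection heuristic (illustrative only; the metacharacter and interpreter lists are assumptions, not the shipped rules): look for shell chaining operators followed by a known interpreter or downloader.

```python
import re

# Hypothetical pattern: command separators (";", "||", "&&", "$(")
# followed by a common interpreter or fetch tool.
SHELL_PATTERN = re.compile(
    r"(;|\|\||&&|\$\()\s*(sh|bash|curl|wget|python|nc)\b", re.I)

def flag_command_injection(value: str) -> bool:
    """True if the input embeds a shell-chained command invocation."""
    return bool(SHELL_PATTERN.search(value))
```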
AML.T0098 · AI Agent Tool Credential Harvesting · AML.T0082 · RAG Credential Harvesting · AML.T0055 · Unsecured Credentials · AML.T0083 · Credentials from AI Agent Configuration
Tactic: Credential Access

Credential and secret detection

secret_detection
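As a sketch of the secret_detection category, here are two well-known token shapes (real scanners ship hundreds of rules plus entropy checks; this two-rule set is illustrative):

```python
import re

# Two widely documented credential formats: AWS access key IDs and
# GitHub classic personal access tokens.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def find_secrets(text: str) -> list[str]:
    """Return the names of any secret patterns found in the text."""
    return [name for name, pat in SECRET_PATTERNS.items()
            if pat.search(text)]
```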
AML.T0100 · AI Agent Clickbait · AML.T0067 · LLM Trusted Output Components Manipulation · AML.T0073 · Impersonation · AML.T0074 · Masquerading
Tactic: Defense Evasion / Execution

UI-layer deception and trust-component manipulation

ui_injection, social_engineering_ui
AML.T0052 · Phishing (incl. .000 Spearphishing via Social Engineering LLM)
Tactic: Initial Access / Lateral Movement

Social engineering patterns

social_engineering
AML.T0108 · AI Agent · AML.T0096 · AI Service API · AML.T0072 · Reverse Shell
Tactic: Command and Control

C2 and back-channel indicators

c2_indicator

Full quick-reference table

ATLAS Technique | Tactic | Sunglasses Categories
AML.T0051 LLM Prompt Injection | Execution | prompt_injection, indirect_prompt_injection, parasitic_injection, hidden_instruction
AML.T0068 LLM Prompt Obfuscation | Defense Evasion | encoded_payload, encoding_evasion, invisible_unicode, rtl_obfuscation, unicode_evasion, code_switching
AML.T0093 Prompt Infiltration via Public-Facing Application | Initial Access | parasitic_injection, indirect_prompt_injection
AML.T0056 Extract LLM System Prompt | Exfiltration | prompt_extraction, prompt_leak
AML.T0057 LLM Data Leakage | Exfiltration | prompt_leak, exfiltration
AML.T0025 Exfiltration via Cyber Means | Exfiltration | exfiltration, dns_tunneling
AML.T0086 Exfiltration via AI Agent Tool Invocation | Exfiltration | exfiltration
AML.T0077 LLM Response Rendering | Exfiltration | exfiltration, ui_injection
AML.T0080 AI Agent Context Poisoning | Persistence | memory_poisoning, parasitic_injection
AML.T0104 Publish Poisoned AI Agent Tool | Resource Development | tool_poisoning, mcp_threat
AML.T0110 AI Agent Tool Poisoning | Persistence | tool_poisoning, mcp_threat
AML.T0099 AI Agent Tool Data Poisoning | Persistence | tool_poisoning
AML.T0010 AI Supply Chain Compromise | Initial Access | supply_chain, mcp_threat
AML.T0109 AI Supply Chain Rug Pull | Defense Evasion | supply_chain, tool_poisoning
AML.T0111 AI Supply Chain Reputation Inflation | Defense Evasion | supply_chain
AML.T0053 AI Agent Tool Invocation | Execution, Privilege Escalation | agent_workflow, agent_security
AML.T0081 Modify AI Agent Configuration | Persistence, Defense Evasion | agent_workflow_security
AML.T0054 LLM Jailbreak | Privilege Escalation | privilege_escalation, auth_bypass, authorization_bypass
AML.T0105 Escape to Host | Privilege Escalation | sandbox_escape, privilege_escalation
AML.T0050 Command and Scripting Interpreter | Execution | command_injection
AML.T0098 AI Agent Tool Credential Harvesting | Credential Access | secret_detection
AML.T0082 RAG Credential Harvesting | Credential Access | secret_detection
AML.T0055 Unsecured Credentials | Credential Access | secret_detection
AML.T0083 Credentials from AI Agent Configuration | Credential Access | secret_detection
AML.T0100 AI Agent Clickbait | Execution | ui_injection, social_engineering_ui
AML.T0067 LLM Trusted Output Components Manipulation | Defense Evasion | ui_injection
AML.T0073 Impersonation · AML.T0074 Masquerading | Defense Evasion | social_engineering_ui
AML.T0052 Phishing | Initial Access | social_engineering
AML.T0108 AI Agent (C2) | Command and Control | c2_indicator
AML.T0096 AI Service API (C2) | Command and Control | c2_indicator
AML.T0072 Reverse Shell | Command and Control | c2_indicator, command_injection

Non-ATLAS categories (traditional AppSec)

These Sunglasses categories catch attacks that aren't AI-specific and therefore have no ATLAS techniques: they're traditional AppSec payloads that agents may be manipulated into executing.

They're still first-class detection categories; they simply live on the ATT&CK / CWE side of the taxonomy.

Official source: atlas.mitre.org · Data: github.com/mitre-atlas/atlas-data (v5.5.0 as of Apr 2026).
