How our 35 threat categories and 248 detection patterns map to the frameworks procurement, compliance, and security teams already trust.
The OWASP Top 10 for LLM Applications — the industry standard for LLM application security. We cover 7 of 10 risks with named pattern categories; 3 risks sit outside our surface, and we link to tools that handle them.
MITRE ATLAS, the MITRE adversarial knowledge base for AI systems. Our categories map to techniques including AML.T0051 Prompt Injection, AML.T0104 Publish Poisoned AI Agent Tool, AML.T0080 Context Poisoning, and more.
OWASP's 2026 agentic threat list, verified directly against the official PDF. We cover 6 of 10 risks with named pattern categories; coverage for ASI07 Inter-Agent Communication is planned for v0.3.0.
Mapping to the NIST AI RMF GOVERN, MAP, MEASURE, and MANAGE functions. Planned for the next proof-package push.
Every scan can emit SARIF 2.1.0 — the same format GitHub Advanced Security, Snyk, and Semgrep use for code scanning. Pipe Sunglasses into your CI exactly like you would any other SAST tool.
```shell
# Get SARIF output from any scan
sunglasses scan --file agent_config.json --output sarif > findings.sarif

# Pipe into GitHub Advanced Security
gh code-scanning upload-sarif --file findings.sarif
```
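Because SARIF 2.1.0 nests findings under `runs[].results[]`, the same file is easy to gate on in a custom CI step. A minimal sketch (the `findings.sarif` name follows the command above; the `count_findings` helper is illustrative, not part of the Sunglasses CLI):

```python
import json

def count_findings(path):
    """Count results across all runs in a SARIF 2.1.0 file."""
    with open(path) as f:
        sarif = json.load(f)
    # Standard SARIF 2.1.0 layout: top-level "runs", each with a "results" list
    return sum(len(run.get("results", [])) for run in sarif.get("runs", []))

# Example CI gate: fail the build if the scan produced any findings.
#   raise SystemExit(1 if count_findings("findings.sarif") else 0)
```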
A pattern library only matters if it plugs into how security teams actually work. Framework mappings give buyers, auditors, and CI/CD pipelines a common vocabulary for what Sunglasses catches — without anyone having to read our internal taxonomy.
If you spot a mapping that's wrong, missing, or could be sharper, open an issue. We update these pages as the frameworks evolve.