What each tool is
Sunglasses
Sunglasses is a free, MIT-licensed Python library that scans every input an AI agent processes — text, documents, code, MCP tool descriptions, READMEs, retrieval results, and agent-to-agent messages — before the agent acts on it. It runs a 3-stage pipeline: normalize the input across 17 deterministic techniques (URL decode, Unicode normalization, homoglyph mapping, base64 decode, and 13 others), detect against 444 patterns across 54 attack categories, then decide block / review / allow. The decision happens in an average of 0.261ms with no network call.
Sunglasses is fully local. There is no hosted endpoint, no API key, and no telemetry by default. Every pattern, normalization technique, and detection keyword is publicly inspectable at github.com/sunglasses-dev/sunglasses. Install with pip install sunglasses.
Lakera Guard
Lakera Guard is a commercial AI security product built by Lakera AI. It is primarily delivered as a hosted API: your application sends text inputs to Lakera's endpoint, and Lakera returns a verdict indicating whether the input is safe. Lakera's detection approach and model architecture are not publicly disclosed. Lakera Guard targets enterprise teams that prefer a managed service with professional support and enterprise procurement pathways over self-hosting. For current pricing and enterprise tier details, see lakera.ai directly — pricing is not quoted here because Lakera has not published a standard rate card as of this writing.
Honest framing: This page is written by the Sunglasses team. We have tried to represent Lakera Guard accurately based on publicly available information. Where Lakera's specifics are not publicly disclosed, we say so explicitly rather than guess. Verify Lakera's current feature set directly at their site.
Side-by-side comparison
| Dimension | Sunglasses | Lakera Guard |
|---|---|---|
| License | MIT (free forever, commercial use OK) | Commercial / proprietary |
| Deployment model | Local Python library — runs in your process, no network call | Hosted API — inputs sent to Lakera's endpoint |
| Data exposure | None by default. Inputs never leave your infrastructure. | Inputs transmitted to Lakera's servers for analysis |
| Pricing | Free (MIT). No tier, no key, no subscription. | Commercial pricing — not publicly disclosed by Lakera |
| Install path | pip install sunglasses |
API integration — requires key provisioning from Lakera |
| Detection patterns | 444 patterns across 54 attack categories (v0.2.27, Apr 30 2026) | Not publicly disclosed by Lakera |
| Language coverage | 23 languages | Not publicly disclosed by Lakera |
| Detection transparency | Fully open source — every pattern inspectable on GitHub | Closed — detection models and rules not published |
| Scan speed | 0.261ms avg, ~3,830 scans/sec (single thread, local) | Network round-trip time + API processing — different methodology, no published direct comparison |
| Telemetry / logging | None by default. Operator controls all logging. | Data processed by Lakera — consult their privacy policy |
| Framework support | Python-native, LangChain, CrewAI, Claude Code MCP, any framework that accepts preprocessed input | API-based — integrates with any HTTP-capable stack |
| SARIF output | Yes (SARIF 2.1.0) | Not publicly documented as a SARIF output |
| Managed service / SLA | No — self-hosted, no SLA | Commercial support available — consult Lakera for SLA terms |
| CVP / third-party evaluation | Anthropic Cyber Verification Program approved (org ID d4b32d1d-…). 7 published reports at /cvp. | Not aware of a published equivalent third-party authorization program as of this writing |
Where the table reads "not publicly disclosed by Lakera" or similar, that reflects the information available at time of writing — not an inference that the feature does not exist. Lakera's product evolves; verify current capabilities at lakera.ai.
When to pick Sunglasses
- Data cannot leave your infrastructure. If your agent processes PII, confidential source code, regulated data, or proprietary IP, a hosted API that receives that content requires legal and compliance review. Sunglasses runs locally — inputs never leave your process.
- You need audit-grade pattern transparency. Sunglasses is fully open source. If your security team needs to know exactly what is being detected, why a pattern fired, and how to adjust thresholds — every line is readable and forkable on GitHub.
- You want zero subscription cost. MIT license means no tier lock, no metering, no renewal. This matters for projects, startups, or open-source agents where a per-call cost model is a structural problem.
- You are building local-first AI agents. Claude Code, Cursor, Windsurf, Cline, and similar local agent environments benefit from a co-located security layer. No round-trip to a remote endpoint means no latency penalty and no dependency on external uptime.
- You want a pre-ingestion boundary layer, not a runtime policy service. As noted in the Sunglasses FAQ, "Sunglasses is strongest as a local ingestion boundary layer." The filter runs before the model reads the input — not as a runtime override after the model has already processed it.
- You are integrating with Python-native stacks. LangChain, CrewAI, AutoGen — call
engine.scan(text)before anymodel.invoke()ortool.call(). It is one import and one function call. See the architecture page and security manual for integration patterns.
When Lakera Guard wins
- You need a fully managed service with enterprise SLAs. Self-hosting means you own the uptime, the updates, and the incident response. If your team needs a vendor-backed SLA for security tooling, Lakera Guard is the natural fit.
- Your procurement process requires a commercial vendor relationship. Enterprise procurement at large organizations often requires a vendor contract, a support tier, and a contact for escalation. MIT open source does not provide those out of the box.
- You prefer not to self-host security tooling. Running and maintaining any library requires engineering attention. Some teams — especially those focused entirely on product, not infrastructure — prefer to delegate security tooling operations to a vendor.
- Your data exposure requirements permit an API-based approach. If your inputs are not regulated or confidential, or if your legal team has cleared a third-party processing arrangement, the hosted model simplifies integration.
- You want Lakera's specific managed detection capabilities. Lakera has invested in their own detection research. If their managed model outperforms for your specific use case, that is a legitimate architectural reason to use it. We do not have a head-to-head published benchmark to cite — different evaluation methodologies mean direct comparison is not available.
The layered security note: These tools are not mutually exclusive. Some teams run a local pre-filter (like Sunglasses) plus a hosted policy layer for enterprise oversight. The security model that works is the one you actually deploy — not the one you picked on a feature matrix but never wired in. See the Open Source AI Agent Security Scanner page for broader context on how local and managed layers compose.
The Garak angle — research tool, not runtime filter
Garak frequently appears in the same search queries as Sunglasses and Lakera Guard. It is worth clarifying the distinction because the tools do fundamentally different things.
Garak is a red-teaming and benchmarking tool. You point Garak at an AI model, it fires adversarial inputs, and it measures how the model responds — specifically, whether it refuses or complies with attack prompts. Garak's purpose is evaluation and research: understanding how robust a model is against known attack families. It is a probing tool, not a runtime filter.
Sunglasses and Lakera Guard are runtime ingestion filters. They sit in the path between incoming content and the agent, and they make block / review / allow decisions at inference time. You deploy them in production, not just in evaluation.
As stated in the Sunglasses FAQ, "tools like Garak focus on probing" while Sunglasses focuses on "normalization-first pre-ingestion defense plus transparent pattern evolution." Garak and Sunglasses are complementary: use Garak to benchmark your model's baseline posture, use Sunglasses to filter inputs in production. They do not compete.
For MCP-specific attack patterns and how they relate to runtime filtering, the MCP Attack Atlas covers the threat surface in detail. For published evaluation results using a third-party authorized framework, see the CVP page and the reports index.