Sunglasses is a filter that sits in front of your agent. Always ON. If you're building a custom Python agent on top of an LLM SDK (Anthropic, OpenAI, Gemini), this is the SDK wiring path. One import, one scan call, a mandatory filter before every LLM invocation.

Who this page is for

Developers building custom Python agents directly on raw LLM SDKs (Anthropic, OpenAI, Google Gemini), without an agent framework. You control the code, and the filter is a two-line drop-in before your LLM call.

The command

from sunglasses.engine import SunglassesEngine

Filter every LLM call in your custom Python agent. Import the engine, call scan() on any untrusted input, and branch on the decision (block / warn / allow). Drop it in front of client.messages.create(), client.chat.completions.create(), or any other model invocation in your code.
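A minimal sketch of that wiring pattern. Since the full walkthrough is still being drafted, the exact shape of the scan() result is an assumption (an object with a .decision field holding "block", "warn", or "allow", as described above), and a stub stands in for the real SunglassesEngine so the branching logic runs standalone. In your agent you would import the real engine instead.

```python
# Hypothetical wiring sketch. In a real agent, replace StubEngine with:
#   from sunglasses.engine import SunglassesEngine
# The ScanResult shape and decision values are assumptions based on this page.
from dataclasses import dataclass

@dataclass
class ScanResult:
    decision: str          # assumed values: "block", "warn", "allow"
    reason: str = ""

class StubEngine:
    """Placeholder for SunglassesEngine; flags one obvious injection phrase."""
    def scan(self, text: str) -> ScanResult:
        if "ignore previous instructions" in text.lower():
            return ScanResult("block", "injection pattern detected")
        return ScanResult("allow")

def guarded_call(engine, untrusted_input: str, call_llm):
    """Scan untrusted input, branch on the decision, then invoke the model."""
    result = engine.scan(untrusted_input)
    if result.decision == "block":
        raise ValueError(f"Sunglasses blocked input: {result.reason}")
    if result.decision == "warn":
        print(f"Sunglasses warning: {result.reason}")  # proceed, but log it
    # call_llm wraps your SDK call, e.g. client.chat.completions.create(...)
    return call_llm(untrusted_input)

engine = StubEngine()
print(guarded_call(engine, "Summarize this report.", lambda t: "summary"))  # prints "summary"
```

The key design point is that the scan happens before every model invocation, not inside one handler: guarded_call is the single choke point, so no code path can reach the SDK with unscanned input.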

Benefit: You own your agent's security layer without sending anything to the cloud. Local, fast, composable with any SDK.

Full walkthrough coming next. This is an identity-first scaffold: the core command and wiring pattern are ready, and the step-by-step code walkthrough, integration examples, and troubleshooting section are being drafted now. Check back shortly.

Where this wiring fits

Sunglasses is one filter with four wiring options. This page covers the Custom Python Agent path; the other wiring paths are documented on their own pages.

Same filter underneath. Different wiring based on your stack.