Trust Center

AgentGuard is an awareness and prevention tool designed to help users pause and reflect before sharing sensitive information with AI services. Trust, privacy, and determinism are foundational to how AgentGuard is built.

Core Principles

Security

AgentGuard has undergone a structured internal security review covering data flow, storage, DOM safety, message passing, regex safety, and failure modes.

View the Security Overview →
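
One example of what regex safety means in practice: patterns should run in predictable time on any input. The patterns below are illustrative assumptions for this page, not AgentGuard's actual rules.

```typescript
// Illustration of regex safety (hypothetical patterns, not AgentGuard's rules).

// Risky: nested quantifiers can backtrack catastrophically on crafted input.
const risky = /^(\w+\s?)+$/;

// Safer: a single bounded character class with an explicit length cap.
const bounded = /^[\w\s]{1,256}$/;

// Deterministic check that caps input length before matching.
function looksLikeLabel(text: string): boolean {
  if (text.length > 256) return false;
  return bounded.test(text);
}

console.log(looksLikeLabel("Internal use only")); // true
```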

Privacy

AgentGuard is designed to minimize data exposure by default and does not collect or transmit personal data.

View the Privacy Policy →

Sharing restrictions vs. content sensitivity

Some documents include distribution language such as "Confidential" or "Internal use only." AgentGuard surfaces these as sharing restrictions, which describe intended distribution rather than the sensitivity of the content itself. When structural patterns indicate content sensitivity (such as medical, legal, or financial information), that context is surfaced separately. Review both kinds of context before sharing a document externally.
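
A minimal sketch of how that separation might look; the markers, patterns, and result shape below are assumptions for illustration, not AgentGuard's actual detection rules.

```typescript
// Illustrative only: distribution markers are reported as sharing restrictions,
// while structural patterns are reported as content sensitivity.

interface Finding {
  kind: "sharing-restriction" | "content-sensitivity";
  label: string;
}

// Hypothetical distribution markers.
const SHARING_MARKERS: { label: string; pattern: RegExp }[] = [
  { label: "confidential", pattern: /\bconfidential\b/i },
  { label: "internal-use-only", pattern: /\binternal use only\b/i },
];

// Hypothetical structural patterns suggesting sensitive content.
const SENSITIVITY_PATTERNS: { label: string; pattern: RegExp }[] = [
  { label: "financial", pattern: /\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b/ },
  { label: "medical", pattern: /\b(diagnosis|prescription|patient)\b/i },
];

function classify(text: string): Finding[] {
  const findings: Finding[] = [];
  for (const { label, pattern } of SHARING_MARKERS) {
    if (pattern.test(text)) findings.push({ kind: "sharing-restriction", label });
  }
  for (const { label, pattern } of SENSITIVITY_PATTERNS) {
    if (pattern.test(text)) findings.push({ kind: "content-sensitivity", label });
  }
  return findings;
}

// A document marked "Confidential" that also mentions a diagnosis yields one
// finding of each kind, and the two are surfaced separately.
console.log(classify("Confidential draft: patient diagnosis pending review."));
```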

Verifiability

Trust in AgentGuard comes from its architecture, not from claims or certifications. Because behavior is deterministic, you can verify it yourself: the same input always produces the same output.
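
Because there is no randomness or hidden state, that property can be checked directly: run the same check on the same input repeatedly and compare the results. The harness below is a sketch; `detect` stands in for any pure pattern check and is not an AgentGuard API.

```typescript
// Determinism harness (illustrative): the same input must always yield the
// same output from a pure detection function.

function assertDeterministic<T>(
  detect: (input: string) => T,
  input: string,
  runs = 5
): void {
  const first = JSON.stringify(detect(input));
  for (let i = 1; i < runs; i++) {
    if (JSON.stringify(detect(input)) !== first) {
      throw new Error(`Output differed on run ${i}`);
    }
  }
  console.log(`Identical output across ${runs} runs.`);
}

// Example with a trivial stand-in check.
assertDeterministic((text) => /\bconfidential\b/i.test(text), "Confidential draft");
```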

The codebase is structured for inspection. There are no hidden network calls, no server-side components, and no probabilistic variation. What you see is what runs.