

How it works

A technical overview of Manyr's agent firewall architecture, from action interception to audit trail generation.

Step 01

Action Normalization

Every agent framework and tool has its own way of expressing actions. Manyr normalizes these into a consistent "action intent" format that captures:


→ **What** the agent wants to do (operation type)

→ **Where** it wants to do it (target resource)

→ **How** it wants to do it (parameters and context)

→ **Why** it's doing it (optional: reasoning trace from agent)


This normalization happens at the SDK/integration layer, so your existing agent code doesn't need to change. We provide adapters for popular frameworks (LangChain, AutoGPT, CrewAI, and custom agents) and tool protocols (OpenAI function calling, Anthropic tool use, MCP).

// Example normalized action intent
{
  "action_type": "file.write",
  "resource": "/var/data/reports/q4.csv",
  "parameters": {
    "content": "...",
    "mode": "overwrite"
  },
  "context": {
    "agent_id": "analyst-bot-1",
    "session_id": "sess_abc123",
    "reasoning": "User requested quarterly export"
  }
}
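To make the adapter idea concrete, here is a minimal sketch of what normalization at the integration layer might look like for an OpenAI-style function call. The function name `normalize_openai_call` and the field mapping are illustrative assumptions, not the actual SDK API.

```python
# Hypothetical sketch: mapping an OpenAI-style tool call onto the
# normalized action-intent shape. Not the real Manyr SDK interface.
import json

def normalize_openai_call(tool_call: dict, agent_id: str, session_id: str) -> dict:
    """Convert a framework-specific tool call into an action intent."""
    args = json.loads(tool_call["function"]["arguments"])
    return {
        # What: operation type (here, taken from the function name)
        "action_type": tool_call["function"]["name"],
        # Where: target resource (assumed to arrive as a "path" argument)
        "resource": args.pop("path", None),
        # How: remaining parameters pass through unchanged
        "parameters": args,
        # Who/why: session context attached by the integration layer
        "context": {"agent_id": agent_id, "session_id": session_id},
    }

call = {
    "function": {
        "name": "file.write",
        "arguments": json.dumps({
            "path": "/var/data/reports/q4.csv",
            "content": "...",
            "mode": "overwrite",
        }),
    }
}
intent = normalize_openai_call(call, "analyst-bot-1", "sess_abc123")
```

An equivalent adapter would exist per framework, so every downstream layer only ever sees the one intent shape.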
Step 02

Policy Evaluation

Manyr uses a hybrid evaluation engine designed for both speed and accuracy:


Layer 1: Deterministic Rules (< 1ms)

Fast pattern matching against your configured policies. Most actions are resolved here.


Layer 2: Risk Scoring (< 10ms)

For ambiguous cases, we compute a risk score based on:

- Historical action patterns

- Resource sensitivity classification

- Agent trust level

- Contextual signals
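One simple way to combine signals like these is a weighted sum over normalized inputs. The weights and signal names below are assumptions for illustration, not Manyr's actual scoring model.

```python
# Illustrative risk score: a weighted sum of the signals listed above,
# each clamped to [0, 1]. Weights are hypothetical.
def risk_score(signals: dict) -> float:
    weights = {
        "historical_anomaly": 0.35,    # deviation from past action patterns
        "resource_sensitivity": 0.30,  # sensitivity class of the target resource
        "agent_distrust": 0.20,        # inverse of the agent's trust level
        "context_risk": 0.15,          # contextual signals (time, volume, ...)
    }
    # Each input is clamped to [0, 1], so the score also stays in [0, 1].
    return sum(
        w * min(max(signals.get(name, 0.0), 0.0), 1.0)
        for name, w in weights.items()
    )

score = risk_score({"historical_anomaly": 0.1, "resource_sensitivity": 0.8})
```

A bounded score like this makes it easy to set thresholds, e.g. escalate to Layer 3 only above some cutoff.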


Layer 3: LLM Judge (optional, ~200ms)

For truly ambiguous high-stakes decisions, an optional LLM judge can provide nuanced evaluation. This is configurable and off by default.


The engine returns one of four decisions: **Allow**, **Deny**, **Require Approval**, or **Constrain** (modify the action to reduce scope).

// Example policy rule
{
  "name": "block-production-writes",
  "condition": {
    "resource_pattern": "/var/prod/**",
    "action_types": ["file.write", "file.delete"]
  },
  "decision": "deny",
  "rationale": "Production writes require deployment pipeline"
}
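The Layer 1 pass over rules like the one above can be sketched as a first-match loop with glob-style resource patterns. Using Python's `fnmatch` for the pattern semantics is an assumption; the real engine's matching rules may differ.

```python
# Minimal sketch of deterministic (Layer 1) rule evaluation against
# the example rule above. Pattern semantics via fnmatch are assumed.
from fnmatch import fnmatch

RULES = [
    {
        "name": "block-production-writes",
        "condition": {
            "resource_pattern": "/var/prod/**",
            "action_types": ["file.write", "file.delete"],
        },
        "decision": "deny",
    },
]

def evaluate(intent: dict, rules=RULES, default: str = "allow"):
    """Return (decision, matched_rule) for a normalized action intent."""
    for rule in rules:
        cond = rule["condition"]
        if (intent["action_type"] in cond["action_types"]
                and fnmatch(intent["resource"], cond["resource_pattern"])):
            return rule["decision"], rule["name"]
    # No rule matched: fall through to the configured default
    return default, "default-allow"

decision, matched = evaluate({
    "action_type": "file.write",
    "resource": "/var/prod/db/users.csv",
})
```

Because this layer is a pure lookup over pre-compiled rules, it is where the sub-millisecond budget is spent; only unresolved intents proceed to risk scoring.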
Step 03

Approval Flows

When an action triggers "Require Approval," Manyr can route the request through configurable approval workflows:


→ **Immediate** — Blocks the agent until a human approves (Slack, Teams, email, dashboard)

→ **Async queue** — Batches non-urgent requests for periodic review

→ **Auto-escalate** — Routes to different approvers based on risk level or resource type

→ **Time-boxed** — Auto-denies if not approved within a deadline


Approvals are cryptographically signed and become part of the immutable audit trail. You can see exactly who approved what, when, and with what context.

// Example approval configuration
{
  "approval_channels": [
    { "type": "slack", "channel": "#agent-approvals" },
    { "type": "dashboard", "assignees": ["admin@co.com"] }
  ],
  "timeout_minutes": 30,
  "timeout_action": "deny",
  "require_reason": true
}
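The time-boxed behavior configured above (`timeout_minutes`, `timeout_action`) can be sketched as a pending request that resolves to the timeout action once its deadline passes. The class and field names here are illustrative, not the SDK's real API.

```python
# Hypothetical sketch of time-boxed approval: an explicit human decision
# wins; otherwise the configured timeout_action applies after the deadline.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class PendingApproval:
    action_id: str
    requested_at: datetime
    timeout_minutes: int = 30
    timeout_action: str = "deny"
    decision: Optional[str] = None  # set when a human responds

    def resolve(self, now: datetime) -> str:
        if self.decision is not None:
            return self.decision  # explicit approval/denial takes precedence
        deadline = self.requested_at + timedelta(minutes=self.timeout_minutes)
        return self.timeout_action if now >= deadline else "pending"

req = PendingApproval(
    "act_xyz789",
    requested_at=datetime(2026, 1, 29, 14, 0, tzinfo=timezone.utc),
)
status_early = req.resolve(datetime(2026, 1, 29, 14, 10, tzinfo=timezone.utc))
status_late = req.resolve(datetime(2026, 1, 29, 14, 31, tzinfo=timezone.utc))
```

Auto-deny on timeout is the fail-safe default in this sketch: an unreviewed action never proceeds by omission.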
Step 04

Audit Trails & Retention

Every decision—allowed, denied, or modified—is logged to a tamper-evident audit trail:


→ **Immutable** — Append-only log with cryptographic integrity checks

→ **Complete** — Full action context, policy matched, decision rationale

→ **Searchable** — Query by agent, resource, time range, decision type

→ **Exportable** — JSON, CSV, or direct integration with your SIEM


Retention periods are configurable to match your compliance requirements, and regional storage options support data residency needs.

// Example audit log entry
{
  "timestamp": "2026-01-29T14:32:00Z",
  "action_id": "act_xyz789",
  "agent": "analyst-bot-1",
  "action": {
    "type": "file.read",
    "resource": "/data/reports/q4.csv"
  },
  "decision": "allow",
  "matched_rule": "default-allow",
  "risk_score": 0.12,
  "latency_ms": 2
}
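The "append-only with cryptographic integrity checks" property can be illustrated with a hash chain: each entry's hash covers the previous entry's hash, so editing any past entry breaks verification of everything after it. This sketch shows the general technique only; Manyr's actual integrity scheme is not specified here.

```python
# Illustrative hash-chained audit log: tampering with any entry
# invalidates the chain. Not Manyr's actual implementation.
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_entry(log: list, entry: dict) -> None:
    prev_hash = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({**entry, "prev_hash": prev_hash, "hash": digest})

def verify(log: list) -> bool:
    prev_hash = GENESIS
    for entry in log:
        body = {k: v for k, v in entry.items() if k not in ("hash", "prev_hash")}
        payload = json.dumps(body, sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False  # chain broken: an entry was altered or reordered
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, {"action_id": "act_xyz789", "decision": "allow"})
append_entry(log, {"action_id": "act_xyz790", "decision": "deny"})
ok = verify(log)
log[0]["decision"] = "deny"  # simulate tampering with a past entry
tampered_ok = verify(log)
```

In practice the chain head would also be periodically anchored externally (e.g. signed or timestamped), so truncation of the log tail is detectable too.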
Privacy

Your data, your control

We see intents, not content

Manyr evaluates action metadata (what, where, who) without accessing payload content. Your sensitive data never leaves your infrastructure.

Customer-owned policies

You define and control all policy rules. No hidden logic, no black-box decisions. Full transparency into why actions are allowed or denied.

Export everything

Your audit logs belong to you. Export anytime in standard formats (JSON, CSV) or stream directly to your SIEM/data warehouse.

Data residency options

For compliance requirements, we offer regional deployment options ensuring data stays within specified geographic boundaries.

SOC 2 designed

Our infrastructure and processes are designed with SOC 2 compliance in mind from day one. Audit reports available for enterprise customers.

No training on your data

We do not use customer data to train models. Your audit logs and action patterns are never used for ML training.

FAQ

Common questions

Why not build this in-house?

You could, but Manyr provides cross-agent, vendor-neutral governance that works with any agent framework. We offer a consistent control plane + audit trail that would take significant engineering effort to replicate. Plus, our hybrid evaluation engine is optimized for near-imperceptible latency.

What's the latency impact?

For most actions (90%+), evaluation completes in under 5ms using our deterministic rule engine. The LLM judge, when enabled for complex cases, adds ~200ms but is entirely optional and configurable.

Is this a feature or a company?

We believe agent governance is a platform-level problem that requires dedicated infrastructure. Our focus is cross-platform neutrality, local execution boundary enforcement, and compliance-grade audit trails—problems that need specialized, sustained attention.

What agent frameworks do you support?

We provide SDKs and adapters for LangChain, AutoGPT, CrewAI, and custom agents. We also support OpenAI function calling, Anthropic tool use, and the Model Context Protocol (MCP).

How do I get started?

We're currently working with design partners on pilot programs. Reach out through our contact form to discuss your use case and timeline.