The Authority Plane for AI Agents

Altrace governs what your agents are authorized to do — and enforces it. At the network layer in Kubernetes. At the credential layer everywhere else. Every decision recorded, every delegation contract enforced, every violation surfaced immediately.

Two Enforcement Layers, One Control Plane

Kubernetes                          All Environments
  Agent Process                       Agent (any env)
    ↓ all traffic forced                ↓ uses virtual key
  Altrace Sidecar                     Credential Layer
    ↓ governed                          ↓ governed
  LLM Provider                        LLM Provider

Same control plane:
  Kill Switch · Delegation Contracts · Audit Trail · Budget Controls

In Kubernetes, agents have no route around governance — traffic is forced through the sidecar at the network level. In all environments, credential-based enforcement governs LLM access. Both paths feed the same control plane: same kill switch, same audit trail, same delegation contracts.
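Conceptually, both enforcement paths reduce to the same decision call. A minimal sketch of that idea in Python — all names, fields, and reason codes here are illustrative assumptions, not Altrace's actual API:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    allow: bool
    reason: str  # machine-readable reason code

def govern(agent_id: str, model: str, est_cost: float, policy: dict) -> Decision:
    """Single control-plane check shared by both enforcement layers:
    the sidecar path (Kubernetes) and the credential path (everywhere
    else) call the same logic, so kill switch, contracts, and audit
    behave identically in either deployment."""
    if policy.get("killed"):
        return Decision(False, "kill_switch_active")
    if model not in policy["models"]:
        return Decision(False, "model_not_in_delegation_allowlist")
    if policy["spent"] + est_cost > policy["budget"]:
        return Decision(False, "budget_hard_limit")
    return Decision(True, "allowed")

# Hypothetical policy state for one worker agent.
policy = {"killed": False, "models": {"gpt-4o"}, "budget": 50.0, "spent": 49.99}
print(govern("worker-3", "claude-opus", 0.02, policy).reason)
# model_not_in_delegation_allowlist
```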

Bounded-latency kill switch

In Kubernetes deployments, the kill switch blocks new LLM requests within 1 millisecond and terminates active tunnels within 1.2 seconds. Kill state persists through restarts and power loss. Maximum cost overrun is bounded at $100 regardless of agent behavior.

The kill switch operates at three granularities: global (all agents), team (all agents in a team), and individual agent. Graduated enforcement escalates automatically through five levels: warning, throttle, quarantine, block, and kill.

In Kubernetes, enforcement happens at the network level and agents cannot bypass it. In Docker and advisory mode, enforcement is best-effort via proxy routing.
altrace — enforcement.log
14:23:00 WARN worker-3 budget=80% level=WARNING
14:23:12 WARN worker-3 budget=100% level=SOFT_LIMIT
14:23:18 BLOCK worker-3 budget=120% level=HARD_LIMIT
14:23:18 KILL scope=agent persist=true
14:23:18 INFO new_requests=blocked tunnels=draining
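The graduated escalation shown above can be sketched as a threshold ladder over budget utilization. The thresholds below are illustrative assumptions, not Altrace's published values:

```python
def enforcement_level(budget_used: float) -> str:
    """Map budget utilization (1.0 = 100% of budget) to one of the
    five graduated enforcement levels. Thresholds are illustrative."""
    if budget_used >= 1.2:
        return "KILL"        # persistent kill state
    if budget_used >= 1.1:
        return "BLOCK"       # new requests rejected
    if budget_used >= 1.0:
        return "QUARANTINE"  # isolated pending review
    if budget_used >= 0.9:
        return "THROTTLE"    # rate-limited
    if budget_used >= 0.8:
        return "WARNING"     # logged, not restricted
    return "OK"

for used in (0.80, 1.00, 1.20):
    print(f"budget={used:.0%} -> {enforcement_level(used)}")
```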

Authority that can only attenuate, never escalate

When an orchestrator delegates to a worker agent, Altrace enforces that the worker's authority is a strict subset of what it was granted. Budget limits, model access, tool permissions, data classification, time windows, rate limits, and geographic constraints — all governed by cryptographically signed delegation contracts.

This isn't policy-enforced. It's mathematically enforced. Authority attenuation makes escalation structurally impossible, not just prohibited.

Content-blind: Altrace never reads the content of prompts or responses. Only boolean governance labels flow through the system — your data stays with your agents.
Operator
  models: gpt-4o, claude-sonnet · budget: $500/day · tools: all · PII: allowed
    ↓ delegates (authority attenuated)
Orchestrator
  models: gpt-4o, claude-sonnet · budget: $200/day · tools: search, db-read · PII: allowed
    ↓ delegates (authority attenuated)
Worker Agent
  models: gpt-4o, claude-sonnet · budget: $50/day · tools: search · PII: allowed
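Attenuation-only delegation reduces to a subset check at contract-signing time: a child grant is valid only if every dimension is equal to or tighter than its parent's. A minimal sketch of that check — field names and the Grant type are illustrative, not Altrace's schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Grant:
    models: frozenset
    tools: frozenset
    daily_budget: float
    pii_allowed: bool

def attenuates(parent: Grant, child: Grant) -> bool:
    """True iff the child's authority is a subset of the parent's
    on every dimension — escalation is structurally impossible."""
    return (child.models <= parent.models
            and child.tools <= parent.tools
            and child.daily_budget <= parent.daily_budget
            and (parent.pii_allowed or not child.pii_allowed))

orchestrator = Grant(frozenset({"gpt-4o", "claude-sonnet"}),
                     frozenset({"search", "db-read"}), 200.0, True)
worker_ok = Grant(frozenset({"gpt-4o"}), frozenset({"search"}), 50.0, True)
worker_bad = Grant(frozenset({"claude-opus"}), frozenset({"search"}), 50.0, True)

print(attenuates(orchestrator, worker_ok))   # True
print(attenuates(orchestrator, worker_bad))  # False: model escalation
```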

Every decision. Immutable. Attributable.

Altrace records every enforcement decision in a tamper-evident audit log. Request ID, agent identity, the specific stage in the 14-stage decision chain that produced the result, delegation contract reference, and machine-readable reason code — all signed and stored.

When an auditor asks what happened, you don't search through prompt logs. You have causal records: which agent, what it tried to do, which governance rule was evaluated, and the outcome — with full attribution chain.

EU AI Act Art. 52 requires transparency in automated decisions. Altrace produces machine-readable decision reasons for every governance action.
altrace — audit-trail.log
14:22:58 ALLOW research-1
  model=gpt-4o cost=$0.032
  chain=14/14 passed
  delegation=contract-7f3a
14:23:00 BLOCK worker-3
  model=claude-opus
  stage=model_filter (3/14)
  reason=model_not_in_delegation_allowlist
  delegation=contract-2b91
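Tamper evidence in an append-only log is typically achieved by hash-chaining: each record commits to its predecessor's hash, so any retroactive edit breaks every later link. A sketch of the general technique, not Altrace's actual record format:

```python
import hashlib
import json

def append_record(log: list, record: dict) -> None:
    """Append a record chained to the previous record's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({**record, "prev": prev, "hash": digest})

def verify(log: list) -> bool:
    """Recompute the chain; any edited or reordered record fails."""
    prev = "0" * 64
    for rec in log:
        body = json.dumps({k: v for k, v in rec.items()
                           if k not in ("prev", "hash")}, sort_keys=True)
        if rec["prev"] != prev or \
           rec["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

log = []
append_record(log, {"agent": "research-1", "action": "ALLOW", "chain": "14/14"})
append_record(log, {"agent": "worker-3", "action": "BLOCK",
                    "reason": "model_not_in_delegation_allowlist"})
print(verify(log))          # True
log[0]["action"] = "BLOCK"  # tamper with history
print(verify(log))          # False
```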

Works with your stack

Anthropic
OpenAI
Kubernetes
Docker

Deploy as a sidecar in Kubernetes. Connect via credentials anywhere. No SDK changes. No agent code modifications required.

See it in action

Request early access and we'll walk you through a live demo.