FAQ
Common questions.
How is this different from observability tools I already pay for?
Datadog and Honeycomb watch your traffic and tell you when
latency or error rates spike. They can't replay a captured
request through your agent under chaos to test whether a fix
actually fixes anything. Tool Pouch captures the same telemetry
(and can forward it to those tools) but adds the replay primitive
on top.
Does the wrap slow my requests down?
The proxy hot path is a single non-blocking queue put, with
sub-millisecond p99 overhead in our perf suite. All serialization,
redaction, truncation, and destination IO happen on a dedicated
writer thread, entirely off the request path.
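The split described above can be sketched with the standard library. This is an illustrative sketch of the pattern, not Tool Pouch internals: the request path does one non-blocking put, and a daemon writer thread does the slow work.

```python
import queue
import threading

capture_queue = queue.Queue()
written = []  # stand-in for serialization + destination IO

def on_request(event):
    """Hot path: enqueue and return immediately; never block the request."""
    try:
        capture_queue.put_nowait(event)
    except queue.Full:
        pass  # drop rather than add latency

def writer_loop():
    """Off the request path: drain the queue and ship each event."""
    while True:
        event = capture_queue.get()
        if event is None:  # sentinel for shutdown
            break
        written.append(event)

thread = threading.Thread(target=writer_loop, daemon=True)
thread.start()
on_request({"tool": "search", "args": {"q": "hello"}})
capture_queue.put(None)
thread.join()
```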
What happens if a destination crashes?
The writer logs to stderr and continues. A misbehaving sink
cannot stop captures from flowing or affect your application.
Fail-open is the contract.
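A minimal sketch of that contract (hypothetical sink names, not the real dispatch code): a sink that raises is logged to stderr and skipped, and the remaining sinks still receive the event.

```python
import sys

delivered = []

def flaky_sink(event):
    raise RuntimeError("destination down")

def good_sink(event):
    delivered.append(event)

def dispatch(event, sinks):
    """Fail-open: one bad sink never stops captures from flowing."""
    for sink in sinks:
        try:
            sink(event)
        except Exception as exc:
            print(f"sink {sink.__name__} failed: {exc}", file=sys.stderr)

dispatch({"id": 1}, [flaky_sink, good_sink])
```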
How does PII redaction work?
tool_pouch.redact.builtin() ships with regex for
emails, phones, SSNs, credit cards, IPs, OpenAI/Anthropic
keys, AWS/GitHub tokens, and bearer tokens. Pass
extra_patterns=[r"acct_\d{6}"] to extend, or pass
any callable as redact= to fully customize.
Redaction runs at capture time by default, so PII never enters
the queue.
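The approach can be sketched with the standard library's re module. The patterns below are simplified stand-ins (the builtin set covers many more cases); the acct_ pattern mirrors the extra_patterns example above.

```python
import re

PATTERNS = [
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # simplified email matcher
    re.compile(r"acct_\d{6}"),               # custom account IDs
]

def redact(text: str) -> str:
    """Replace every match with a fixed placeholder before enqueueing."""
    for pattern in PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text

print(redact("contact alice@example.com about acct_123456"))
# -> contact [REDACTED] about [REDACTED]
```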
Can I disable wrap in CI or tests?
Set TOOL_POUCH_DISABLE_WRAP=1 and every
wrap_openai / wrap_anthropic call
becomes a no-op passthrough.
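The kill switch amounts to an environment check at wrap time. A sketch of the idea (make_wrapped is a hypothetical stand-in for the real capture wrapper):

```python
import os

def make_wrapped(client):
    return ("wrapped", client)  # placeholder for the capturing proxy

def wrap(client):
    """No-op passthrough when the disable flag is set."""
    if os.environ.get("TOOL_POUCH_DISABLE_WRAP") == "1":
        return client  # CI and tests get the raw client back
    return make_wrapped(client)

os.environ["TOOL_POUCH_DISABLE_WRAP"] = "1"
client = object()
assert wrap(client) is client  # same object, nothing intercepted
```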
Which Python versions are supported?
Python 3.10, 3.11, 3.12, and 3.13. CI matrix runs against the
pinned and latest OpenAI & Anthropic SDKs.
What about MCP, LangGraph, or my custom orchestrator?
For pre-deploy testing, tool-pouch run takes any
agent that exposes agent_fn + real_tool_fn
+ tools + test_inputs. For
production capture, point your underlying OpenAI/Anthropic
client through wrap_openai / wrap_anthropic.
The orchestration layer doesn't matter.
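One possible shape for those four names, with hypothetical signatures and stub bodies (the exact call signatures tool-pouch run expects are an assumption here):

```python
def real_tool_fn(name, arg):
    """Dispatch a tool call to its real implementation."""
    return {"lookup": lambda a: f"result({a})"}[name](arg)

def agent_fn(user_input, tool_fn):
    """Your agent loop: call tool_fn as needed, return a final answer."""
    return f"answer using {tool_fn('lookup', user_input)}"

tools = [{"name": "lookup", "description": "fetch a record"}]
test_inputs = ["What is order 42?"]

print(agent_fn(test_inputs[0], real_tool_fn))
```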