Tool Resolution

ReplayLab can now inspect local Python source for likely implementations of provider-visible model tools. This is a framework-neutral evidence layer, not a tool execution interposer.

What ReplayLab Resolves

When a captured OpenAI Responses request declares model tools, ReplayLab records the provider-facing tool name and safe top-level parameter names. If the local app root can be recovered from run metadata, the replay safety preflight parses Python files under that root with Python's ast module and looks for likely matching callables.

The resolver considers:

  • normalized callable names
  • source path and module names
  • callable signature parameters
  • limited docstring overlap with the provider tool description

ReplayLab does not import or execute application code during resolution.
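As a rough illustration, resolution of this kind can be done with a purely static scan. The sketch below is an assumption about the approach, not ReplayLab's actual code: it parses each Python file with ast, scores functions by normalized-name match and signature overlap, and never imports or executes anything.

```python
import ast
from pathlib import Path

def find_candidates(app_root, tool_name, tool_params):
    """Statically scan Python files for callables resembling a provider tool.

    Purely ast-based: application code is parsed, never imported or executed.
    Illustrative sketch only; scoring weights are arbitrary.
    """
    wanted = tool_name.lower().replace("-", "_")
    candidates = []
    for path in Path(app_root).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8"))
        except (SyntaxError, UnicodeDecodeError):
            continue  # skip unparseable files rather than failing the preflight
        for node in ast.walk(tree):
            if not isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                continue
            score = 0
            if node.name.lower() == wanted:
                score += 2  # normalized callable-name match
            params = {a.arg for a in node.args.args if a.arg not in ("self", "cls")}
            score += len(params & set(tool_params))  # signature-parameter overlap
            if score:
                candidates.append((str(path), node.name, score))
    return sorted(candidates, key=lambda c: -c[2])
```

Source-path, module-name, and docstring-overlap signals would extend the same scoring loop without changing the no-execution property.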

What A Candidate Means

A tool implementation candidate is an advisory code-location match. It can help you find where a provider-protocol tool request likely crossed into application code.

It does not mean ReplayLab captured the callable, controlled its side effects, saw its return value, or enforced a policy around downstream HTTP, database, file, queue, or subprocess work.
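To make the boundary concrete, a candidate record could carry evidence fields like these. This is a hypothetical shape, not ReplayLab's schema; the point is that every field is code-location evidence, with nothing about arguments, return values, or side effects.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToolImplementationCandidate:
    """Advisory code-location match for a provider-visible model tool.

    Hypothetical schema: carries evidence only, never captured arguments,
    return values, or side-effect guarantees.
    """
    tool_name: str          # provider-facing tool name
    module: str             # dotted module path under the app root
    qualname: str           # qualified name of the candidate callable
    matched_signals: tuple  # e.g. ("name", "signature", "docstring")
    confidence: float       # heuristic score, not a correctness proof
```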

Current Safety Boundary

When captured HTTP effects include sanitized stack origins, ReplayLab can build an advisory tool effect map from the provider-visible model tool to a candidate callable and an observed HTTP effect. The map uses source and qualified-name evidence only. It does not inspect raw provider schemas, headers, payloads, source text, locals, arguments, return values, or absolute paths.
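Joining the two evidence sources might look like the sketch below, which is an assumption about the mechanism rather than ReplayLab's implementation: candidates and sanitized HTTP effects are matched on qualified names alone, so no raw payloads, headers, locals, or absolute paths are ever consulted.

```python
def build_effect_map(candidates, http_effects):
    """Join candidate callables with sanitized HTTP effects by qualified name.

    Illustrative sketch: both inputs are assumed to be sanitized, source-level
    records, so the join uses only qualified-name evidence.
    """
    effect_map = {}
    for cand in candidates:
        # An effect matches when the candidate's qualname appears in the
        # effect's sanitized stack origins.
        hits = [e for e in http_effects
                if cand["qualname"] in e.get("stack_qualnames", ())]
        if hits:
            effect_map[cand["tool_name"]] = {
                "candidate": cand["qualname"],
                "observed_effects": [e["endpoint"] for e in hits],
            }
    return effect_map
```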

Safe workflow regression remains unavailable when model tools are declared but execution tools and I/O are not captured and enforceable. Generated pytest files remain provider replay guards. Future policy enforcement must prove actual execution behavior before ReplayLab can promote a workflow to safe workflow regression.
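For orientation, a provider replay guard of this kind typically asserts only on the provider-facing surface of the replayed request. The test below is a hypothetical illustration, not ReplayLab output; the fixture name and tool data are invented, and no application tool code is executed.

```python
# Hypothetical provider replay guard: it checks that the replayed request
# still declares the same tool surface, and nothing about execution behavior.

def load_captured_request():
    # Stand-in for a fixture that loads the sanitized captured request.
    return {"tools": [{"name": "get_weather", "params": ["city", "unit"]}]}

def test_provider_tool_surface_unchanged():
    request = load_captured_request()
    declared = {t["name"]: sorted(t["params"]) for t in request["tools"]}
    assert declared == {"get_weather": ["city", "unit"]}
```

A guard like this fails when the tool surface drifts, but by design it proves nothing about downstream HTTP, database, file, queue, or subprocess work.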