PydanticAI Compatibility
This tutorial validates ReplayLab with a PydanticAI Agent using the OpenAI Responses model path.
It is a compatibility scenario, not a native PydanticAI adapter.
The goal is to prove that the normal ReplayLab integration model still works when an agent framework owns the model call:
- initialize ReplayLab once near startup;
- keep the PydanticAI model and agent code normal;
- wrap the agent invocation in one handle.capture(...) scope;
- replay, compare, export the local viewer, generate pytest, and run the generated test without a live provider.
Why This Matters
Many agent applications do not call OpenAI directly from business code. They call a framework, and the framework calls the provider. ReplayLab should still capture the provider boundary because instrumentation happens at the provider client layer, not at a framework-specific tracing layer.
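To make that layering concrete, here is a minimal, self-contained sketch of why patching at the provider client layer captures calls no matter which framework makes them. The classes and the `captured` list are stand-ins invented for illustration, not ReplayLab or OpenAI SDK code:

```python
import functools

captured = []

class FakeOpenAIClient:
    """Stand-in for the provider SDK client (not the real openai package)."""
    def create_response(self, prompt: str) -> str:
        return f"echo:{prompt}"

def instrument(client_cls):
    """Wrap the provider method so every call is recorded, whoever calls it."""
    original = client_cls.create_response

    @functools.wraps(original)
    def wrapper(self, prompt):
        result = original(self, prompt)
        captured.append({"provider": "openai", "request": prompt, "response": result})
        return result

    client_cls.create_response = wrapper
    return client_cls

instrument(FakeOpenAIClient)

class Framework:
    """Stand-in for PydanticAI: the framework owns the provider call."""
    def __init__(self, client):
        self.client = client

    def run(self, prompt):
        return self.client.create_response(prompt)

# Business code only talks to the framework, yet the call is still captured
# because the instrumentation sits on the client class itself.
agent = Framework(FakeOpenAIClient())
agent.run("classify ticket 123")
assert captured[0]["request"] == "classify ticket 123"
```

Because the wrapper lives on the client class, a framework-specific tracing layer is unnecessary: any code path that reaches the provider method is recorded.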
This scenario follows the documented PydanticAI OpenAI integration shape: pydantic-ai-slim[openai],
OpenAIProvider(openai_client=...), and OpenAIResponsesModel.
Run The Scenario
Run:

```
python scripts/run_scenario.py run pydantic-ai-local --keep-workspace
```
The run should end with:

```
ReplayLab scenario passed.
Scenario: pydantic-ai-local
Tier: loopback
Boundaries: 1
Providers: openai
```
ReplayLab creates a clean temporary virtual environment and installs the current checkout plus pydantic-ai-slim[openai], openai, and pytest. It starts a fake OpenAI Responses endpoint for capture only, then stops that endpoint before replay and before the generated pytest run.
App Shape
The generated scenario app uses startup instrumentation and a normal PydanticAI agent call:
```python
import openai

import replaylab
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIResponsesModel
from pydantic_ai.providers.openai import OpenAIProvider
from replaylab import CapturePayloadPolicy

handle = replaylab.init(
    project_name="pydantic-ai-local",
    auto_patch_integrations="auto",
    capture_payload_policy=CapturePayloadPolicy.FULL,
)

client = openai.AsyncOpenAI(base_url="http://127.0.0.1:...", api_key="scenario-key")
model = OpenAIResponsesModel(
    "gpt-5-mini",
    provider=OpenAIProvider(openai_client=client),
)
agent = Agent(model, system_prompt="Return a terse triage label.")

with handle.capture("pydantic_ai_agent"):
    result = agent.run_sync("Classify ticket 123 as low, medium, or high priority.")
```
The important part is that ReplayLab is initialized before the framework builds and uses the OpenAI client. Provider code can stay inside PydanticAI.
What ReplayLab Captures
The scenario expects one full-payload OpenAI boundary:

```
provider=openai
resource=openai.responses
payload refs=request,response
integrations=openai,auto_patch,same_process
```
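Expressed as code, the expectation amounts to a check like the following. The dictionary shape and field names are illustrative assumptions mirroring the values listed above, not ReplayLab's actual export schema:

```python
# Hypothetical boundary record; field names follow the scenario expectations
# above, but ReplayLab's real export format may differ.
boundary = {
    "provider": "openai",
    "resource": "openai.responses",
    "payload_refs": ["request", "response"],
    "integrations": ["openai", "auto_patch", "same_process"],
}

def check_boundary(record: dict) -> bool:
    """Return True when the record matches the scenario's expectations."""
    return (
        record["provider"] == "openai"
        and record["resource"] == "openai.responses"
        and set(record["payload_refs"]) == {"request", "response"}
        and {"openai", "auto_patch"} <= set(record["integrations"])
    )

assert check_boundary(boundary)
```

A check of this shape is what the generated pytest effectively asserts: exactly one boundary, with full request and response payload references, attributed to the openai provider.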
The React viewer export should show the OpenAI boundary and the pydantic-ai scenario metadata.
It must not include API keys, raw payload bodies, or secret-looking strings.
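A redaction check along these lines can be sketched as a simple scan of the exported JSON text. The patterns below are illustrative assumptions, not ReplayLab's actual redaction rules:

```python
import json
import re

# Illustrative secret-shaped patterns; real redaction logic would be broader.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{16,}"),        # OpenAI-style API keys
    re.compile(r"(?i)api[_-]?key\"?\s*[:=]"),  # literal key fields
]

def find_secret_like(export_text: str) -> list[str]:
    """Return any substrings of the export that look like secrets."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(export_text))
    return hits

# A clean export surfaces no hits; a leaked key would.
export = json.dumps({"provider": "openai", "resource": "openai.responses"})
assert find_secret_like(export) == []
assert find_secret_like('{"api_key": "sk-abcdefghijklmnop1234"}')
```

Running a scan like this over the viewer export is a cheap way to confirm that payload redaction held before sharing the export.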
What Is Not Yet Supported
- PydanticAI streaming paths.
- Chat Completions model paths.
- PydanticAI-native semantic trace graphs.
- Automatic instrumentation planning or code patching.
Those remain future work. The current guarantee is provider-level capture/replay for the supported OpenAI Responses boundary inside PydanticAI.