LangChain ChatOpenAI Compatibility

This tutorial validates ReplayLab with a LangChain ChatOpenAI workflow routed through the OpenAI Responses API path. It is a compatibility scenario, not a native LangChain adapter.

The goal is to prove that provider calls made through LangChain can still be captured, replayed, compared, viewed, and converted into pytest provider replay guards through the normal ReplayLab workflow.

Why This Matters

Many teams use LangChain abstractions and never call OpenAI directly from business code. ReplayLab should still capture the provider boundary because instrumentation happens at the provider client layer, not at a framework-specific trace layer.

This scenario validates the ChatOpenAI(..., use_responses_api=True) path specifically.
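The idea of provider-layer instrumentation can be sketched without ReplayLab at all. The snippet below is an illustrative stand-in, not ReplayLab's implementation: it shows why a patch applied at the provider client layer still sees calls made through a framework abstraction sitting on top.

```python
# Illustrative sketch only (not ReplayLab code): patching the provider
# client class means any framework built on top is captured for free.

captured = []

class ProviderClient:
    """Stand-in for a provider SDK client (e.g. an OpenAI client)."""
    def create(self, **kwargs):
        return {"echo": kwargs}

def patch_provider(client_cls):
    """Wrap the client method so every call is recorded at the boundary."""
    original = client_cls.create
    def wrapper(self, **kwargs):
        response = original(self, **kwargs)
        captured.append({"request": kwargs, "response": response})
        return response
    client_cls.create = wrapper

class FrameworkChatModel:
    """Stand-in for a framework abstraction such as LangChain's ChatOpenAI."""
    def __init__(self):
        self._client = ProviderClient()
    def invoke(self, prompt):
        return self._client.create(input=prompt)

patch_provider(ProviderClient)            # instrument the provider layer once
FrameworkChatModel().invoke("classify ticket 123")
print(len(captured))                      # the framework-routed call was captured
```

The framework never knows it was instrumented; the capture happens where the request actually leaves the process.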

Run The Scenario

Run:

python scripts/run_scenario.py run langchain-openai-local --keep-workspace

Expected ending:

ReplayLab scenario passed.
Scenario: langchain-openai-local
Tier: loopback
Boundaries: 1
Providers: openai

ReplayLab creates a clean temporary virtual environment and installs the current checkout plus langchain-openai, openai, and pytest. It starts a loopback OpenAI Responses endpoint for the capture phase, then stops the endpoint before running replay and the generated pytest guard.
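A loopback endpoint of this kind can be approximated in a few lines. This is a hedged sketch in the spirit of the scenario's capture server; the endpoint path and response shape are assumptions, not ReplayLab's actual server.

```python
# Minimal loopback "Responses" endpoint sketch. The /v1/responses path and
# the canned JSON body are illustrative assumptions.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class LoopbackResponses(BaseHTTPRequestHandler):
    def do_POST(self):
        body = json.dumps({"id": "resp_local", "output_text": "low"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), LoopbackResponses)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_port}/v1/responses"
req = urllib.request.Request(url, data=b"{}", method="POST")
with urllib.request.urlopen(req) as resp:
    payload = json.load(resp)
server.shutdown()
print(payload["output_text"])
```

Because the endpoint is deterministic and local, the capture phase never touches a real provider or needs a real API key.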

App Shape

The generated scenario app uses startup instrumentation and a normal LangChain call:

import replaylab
from langchain_openai import ChatOpenAI
from replaylab import CapturePayloadPolicy

handle = replaylab.init(
    project_name="langchain-openai-local",
    auto_patch_integrations=("openai",),
    capture_payload_policy=CapturePayloadPolicy.FULL,
)

chat = ChatOpenAI(
    model="gpt-5-mini",
    base_url="http://127.0.0.1:.../v1",
    api_key="scenario-key",
    use_responses_api=True,
)

with handle.capture("langchain_chat_openai"):
    response = chat.invoke("Classify ticket 123 as low, medium, or high priority.")

The important part is that ReplayLab is initialized before LangChain constructs and uses the OpenAI client path. This scenario intentionally patches only openai so LangChain's internal httpx client type checks are left untouched.
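Why the ordering matters can be shown with a small sketch (again, not ReplayLab code): if a framework resolves a direct reference to the provider method before instrumentation runs, calls through that reference bypass the patch entirely.

```python
# Illustrative sketch of init ordering. A bound-method reference taken
# before patching bypasses the wrapper; one taken afterwards does not.
calls = []

class Client:
    def create(self, prompt):
        return prompt.upper()

def instrument():
    original = Client.create
    def wrapper(self, prompt):
        calls.append(prompt)
        return original(self, prompt)
    Client.create = wrapper

# Wrong order: the "framework" grabs a bound method before instrumentation.
early_ref = Client().create       # resolved against the unpatched class
instrument()
early_ref("missed call")          # bypasses the wrapper entirely

late = Client()
late.create("captured call")      # resolved through the patched class
print(calls)
```

Initializing ReplayLab before constructing ChatOpenAI keeps the scenario on the second path.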

What ReplayLab Captures

The scenario expects one full-payload OpenAI boundary:

provider=openai
resource=openai.responses
payload refs=request,response
integrations=openai,auto_patch,same_process

The scenario also validates local-app discoverability for this framework path: captured run visible, replay action available, generated guard attached to the captured run, and upload preview reachable.
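A generated provider replay guard might look roughly like the following. This is a hypothetical sketch: the recorded payload, the replay helper, and the test name are all illustrative assumptions, and ReplayLab's actual generated pytest code may differ.

```python
# Hypothetical shape of a provider replay guard (illustrative only).
recorded = {
    "provider": "openai",
    "resource": "openai.responses",
    "request": {"model": "gpt-5-mini",
                "input": "Classify ticket 123 as low, medium, or high priority."},
    "response": {"output_text": "low"},
}

def replay(request):
    """Deterministic replay stand-in: answer from the recording, not the network."""
    assert request == recorded["request"], "request drifted from the capture"
    return recorded["response"]

def test_langchain_chat_openai_boundary():
    response = replay(recorded["request"])
    assert response == recorded["response"]

test_langchain_chat_openai_boundary()
print("guard passed")
```

The key property is that the guard fails if the application's request drifts from the captured one, without ever reaching a live provider.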

What Is Not Yet Supported

  • LangChain paths that require OpenAI Chat Completions semantics.
  • Streaming LangChain model calls.
  • LangChain Anthropic/Claude adapter compatibility.
  • Framework-native semantic traces beyond provider boundaries.

Those remain future work. The current guarantee is provider-level capture/replay for the supported LangChain ChatOpenAI Responses path.