prefactor_langchain.middleware module

LangChain middleware for automatic tracing via prefactor-core.
class prefactor_langchain.middleware.PrefactorMiddleware(client: PrefactorCoreClient | None = None, agent_id: str = 'langchain-agent', agent_name: str | None = None, instance: AgentInstanceHandle | None = None, tool_schemas: Mapping[str, LangChainToolSchemaConfig | Mapping[str, Any]] | None = None)

Bases: AgentMiddleware
LangChain middleware for automatic tracing.
This middleware integrates with LangChain’s middleware system to automatically create and emit spans for agent execution, LLM calls, and tool executions.
Three usage patterns are supported:
- Pre-configured Client (recommended): Pass a pre-configured client for full control over settings. The caller is responsible for the client lifecycle.
- Pre-configured Instance: Pass an existing AgentInstanceHandle to share a single instance between the LangChain middleware and other parts of your program. Use this when you need to create spans outside of the LangChain agent (e.g. for custom pre/post-processing steps). The caller owns the instance lifecycle and must call instance.finish() themselves.
- Factory Pattern: Use from_config() for quick setup. The middleware owns both the client and the agent instance lifecycle.
Example - Pre-configured Client:

```python
from prefactor_core import PrefactorCoreClient, PrefactorCoreConfig
from prefactor_http.config import HttpClientConfig

# Configure and initialize the client yourself
http_config = HttpClientConfig(api_url="...", api_token="...")
config = PrefactorCoreConfig(http_config=http_config)
client = PrefactorCoreClient(config)
await client.initialize()

# Create middleware with the pre-configured client
middleware = PrefactorMiddleware(
    client=client, agent_id="my-agent", agent_name="My Agent",
)

# The caller must close both the middleware and the client
await middleware.close()  # Only closes the agent instance
await client.close()      # The caller closes their own client
```
Example - Pre-configured Instance (spans outside the agent):

```python
from prefactor_core import PrefactorCoreClient, PrefactorCoreConfig
from prefactor_http.config import HttpClientConfig

http_config = HttpClientConfig(api_url="...", api_token="...")
config = PrefactorCoreConfig(http_config=http_config)
client = PrefactorCoreClient(config)
await client.initialize()

instance = await client.create_agent_instance(agent_id="my-agent")
await instance.start()

# Share the same instance with the middleware AND your own code
middleware = PrefactorMiddleware(instance=instance)

# Instrument your own code with the same instance
async with instance.span("custom:preprocessing") as ctx:
    ctx.set_result({"step": "preprocess", "status": "ok"})

# Run your LangChain agent (the middleware traces it automatically)
result = agent.invoke({"messages": [...]})

# The caller is responsible for cleanup
await instance.finish()
await client.close()
```
Example - Factory Pattern:

```python
middleware = PrefactorMiddleware.from_config(
    api_url="https://api.prefactor.ai",
    api_token="my-token",
    agent_id="my-agent",
    agent_name="My Agent",
)

# The middleware manages both the client and the agent instance
await middleware.close()  # Closes both
```
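The examples above construct the middleware but do not show how it is attached to an agent. A minimal sketch, assuming a LangChain `create_agent` API that accepts a `middleware` list (the model string and tool list are placeholders, not part of this module):

```python
# Sketch: attaching PrefactorMiddleware to a LangChain agent.
# `middleware` is a PrefactorMiddleware built as in the examples above.
from langchain.agents import create_agent

agent = create_agent(
    model="openai:gpt-4o-mini",  # placeholder model id
    tools=my_tools,              # placeholder tool list
    middleware=[middleware],     # spans are emitted for each run
)
result = await agent.ainvoke({"messages": [{"role": "user", "content": "hi"}]})
```

Prefer agent.ainvoke() where possible: on the async path, parent spans are wired automatically via SpanContextStack.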
async aafter_agent(state: Any, runtime: Any) → dict[str, Any] | None

Async hook called after agent completes execution.
Finishes the langchain:agent span opened by abefore_agent by
exiting its async context manager.
- Parameters:
- state – The agent state.
- runtime – The runtime context.
- Returns: Optional state updates.
async abefore_agent(state: Any, runtime: Any) → dict[str, Any] | None

Async hook called before agent starts execution.
Creates a langchain:agent span using the async context manager so
that SpanContextStack is updated automatically. Any outer workflow
span already on the stack (e.g. workflow:agent_step) is picked up
as the parent without any manual set_parent_span() call.
The span context is kept open and stored in _agent_span_context
until aafter_agent exits it.
- Parameters:
- state – The agent state.
- runtime – The runtime context.
- Returns: Optional state updates.
after_agent(state: Any, runtime: Any) → dict[str, Any] | None

Hook called after agent completes execution.
Finishes the agent span created in before_agent.
- Parameters:
- state – The agent state.
- runtime – The runtime context.
- Returns: Optional state updates.
async awrap_model_call(request: Any, handler: Callable[[Any], Any]) → Any

Wrap async model calls to trace LLM execution.
- Parameters:
- request – The model request.
- handler – The function that executes the model call.
- Returns: The model response.
async awrap_tool_call(request: Any, handler: Callable[[Any], Any]) → Any

Wrap async tool calls to trace tool execution.
- Parameters:
- request – The tool request.
- handler – The function that executes the tool call.
- Returns: The tool response.
before_agent(state: Any, runtime: Any) → dict[str, Any] | None

Hook called before agent starts execution.
Creates a root span for the entire agent execution.
- Parameters:
- state – The agent state.
- runtime – The runtime context.
- Returns: Optional state updates.
async close() → None

Close the middleware and clean up resources.

Awaits all in-flight span-emit tasks first, then closes the agent instance (if the middleware created it) and finally the client.
async ensure_initialized() → AgentInstanceHandle

Initialize the middleware and return the agent instance handle.
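A short sketch, assuming a middleware built via from_config() as above: ensure_initialized() exposes the middleware-owned AgentInstanceHandle, so custom spans can share the same instance the middleware traces with (mirroring the instance.span() usage from the earlier example).

```python
# Sketch: reuse the middleware-owned instance for a custom span.
instance = await middleware.ensure_initialized()
async with instance.span("custom:postprocessing") as ctx:
    ctx.set_result({"status": "ok"})
```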
classmethod from_config(api_url: str, api_token: str, agent_id: str = 'langchain-agent', agent_name: str | None = None, schema_registry: SchemaRegistry | None = None, include_langchain_schemas: bool = True, tool_schemas: Mapping[str, LangChainToolSchemaConfig | Mapping[str, Any]] | None = None) → PrefactorMiddleware

Factory method to create middleware from configuration.
This creates a client and middleware with the specified settings. The middleware owns the client and will auto-initialize it on first use.
- Parameters:
- api_url – The Prefactor API URL.
- api_token – The API token for authentication.
- agent_id – Optional agent identifier for categorization.
- agent_name – Optional human-readable agent name.
- schema_registry – Optional SchemaRegistry for registering span schemas.
- include_langchain_schemas – If True and schema_registry is provided, automatically register LangChain-specific schemas.
- tool_schemas – Optional per-tool schema configuration for tool-specific span types.
- Returns: A configured PrefactorMiddleware instance with lazy initialization.
Example:

```python
middleware = PrefactorMiddleware.from_config(
    api_url="https://api.prefactor.ai",
    api_token="my-token",
    agent_id="my-agent",
    agent_name="My Agent",
)

# The middleware auto-initializes on first use

# Cleanup when done
await middleware.close()  # Closes both the agent instance and the client
```
set_parent_span(span_id: str | None) → None

Set the parent span ID for the next agent invocation (sync path only).
Only needed when using agent.invoke() via run_in_executor.
In that case, before_agent runs in a worker thread where
contextvars are not inherited, so the parent span ID must be
passed explicitly before entering the executor.
When using agent.ainvoke() (the recommended async path), parent
wiring is automatic via SpanContextStack — do not call this method.
- Parameters: span_id – The span ID to use as the parent, or None to clear it.
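The underlying reason is that contextvars set on the event-loop thread are not visible inside a default run_in_executor worker thread. A self-contained demonstration using only the standard library (no Prefactor APIs, the variable name is illustrative):

```python
import asyncio
import contextvars

# Stand-in for the span context the middleware tracks per context.
parent_span = contextvars.ContextVar("parent_span", default=None)

def sync_work():
    # Runs in a worker thread: the value set on the event-loop thread
    # is not inherited, so the ContextVar falls back to its default.
    return parent_span.get()

async def main():
    parent_span.set("span-123")
    loop = asyncio.get_running_loop()
    seen_in_thread = await loop.run_in_executor(None, sync_work)
    return parent_span.get(), seen_in_thread

print(asyncio.run(main()))  # ('span-123', None)
```

This is why the sync agent.invoke() path needs an explicit set_parent_span() before entering the executor, while agent.ainvoke() runs in contexts that inherit the span stack automatically.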
wrap_model_call(request: Any, handler: Callable[[Any], Any]) → Any

Wrap synchronous model calls to trace LLM execution.
- Parameters:
- request – The model request.
- handler – The function that executes the model call.
- Returns: The model response.
wrap_tool_call(request: Any, handler: Callable[[Any], Any]) → Any

Wrap synchronous tool calls to trace tool execution.
- Parameters:
- request – The tool request.
- handler – The function that executes the tool call.
- Returns: The tool response.