prefactor_langchain package
Prefactor LangChain - LangChain integration for Prefactor observability.
class prefactor_langchain.AgentSpan(name: str = 'unnamed', start_time: float = <factory>, end_time: float | None = None, status: Literal['pending', 'running', 'completed', 'failed'] = 'pending', inputs: dict[str, ~typing.Any] = <factory>, outputs: dict[str, ~typing.Any] | None = None, metadata: dict[str, ~typing.Any] = <factory>, tags: list[str] = <factory>, error: ErrorInfo | None = None, type: str = 'langchain:agent', agent_name: str | None = None, agent_config: dict[str, ~typing.Any] = <factory>, initial_messages: list[dict[str, ~typing.Any]] = <factory>, final_messages: list[dict[str, ~typing.Any]] = <factory>, iteration_count: int = 0)
Bases: LangChainSpan
Span representing an agent execution.
Captures the lifecycle of an agent run, including the agent’s name, configuration, and the messages/state that drove the execution.
agent_config : dict[str, Any]
agent_name : str | None = None
final_messages : list[dict[str, Any]]
initial_messages : list[dict[str, Any]]
iteration_count : int = 0
to_dict() → dict[str, Any]
Convert to dictionary including agent-specific fields.
type : str = 'langchain:agent'
class prefactor_langchain.ErrorInfo(error_type: str, message: str, stacktrace: str | None = None)
Bases: object
Error information for failed spans.
error_type : str
message : str
stacktrace : str | None = None
to_dict() → dict[str, Any]
Convert to dictionary for serialization.
class prefactor_langchain.LLMSpan(name: str = 'unnamed', start_time: float = <factory>, end_time: float | None = None, status: Literal['pending', 'running', 'completed', 'failed'] = 'pending', inputs: dict[str, ~typing.Any] = <factory>, outputs: dict[str, ~typing.Any] | None = None, metadata: dict[str, ~typing.Any] = <factory>, tags: list[str] = <factory>, error: ErrorInfo | None = None, type: str = 'langchain:llm', model_name: str | None = None, provider: str | None = None, token_usage: TokenUsage | None = None, temperature: float | None = None, max_tokens: int | None = None, top_p: float | None = None, stop_sequences: list[str] = <factory>, messages: list[dict[str, ~typing.Any]] = <factory>, response_content: str | None = None)
Bases: LangChainSpan
Span representing an LLM call.
Captures model-specific metadata including the model name, provider, token usage, and generation parameters.
max_tokens : int | None = None
messages : list[dict[str, Any]]
model_name : str | None = None
provider : str | None = None
response_content : str | None = None
stop_sequences : list[str]
temperature : float | None = None
to_dict() → dict[str, Any]
Convert to dictionary including LLM-specific fields.
token_usage : TokenUsage | None = None
top_p : float | None = None
type : str = 'langchain:llm'
class prefactor_langchain.LangChainSpan(name: str = 'unnamed', start_time: float = <factory>, end_time: float | None = None, status: Literal['pending', 'running', 'completed', 'failed'] = 'pending', inputs: dict[str, ~typing.Any] = <factory>, outputs: dict[str, ~typing.Any] | None = None, metadata: dict[str, ~typing.Any] = <factory>, tags: list[str] = <factory>, error: ErrorInfo | None = None, type: str = 'langchain:agent')
Bases: object
Base class for all LangChain spans.
All LangChain spans share common fields for timing, status, inputs/outputs, and error information. Trace correlation (span_id, parent_span_id, trace_id) is handled by the backend.
Note: The 'type' field is defined in subclasses to avoid dataclass field shadowing issues.
complete(outputs: dict[str, Any] | None = None) → None
Mark the span as completed with outputs.
end_time : float | None = None
error : ErrorInfo | None = None
fail(error: Exception) → None
Mark the span as failed with error information.
inputs : dict[str, Any]
metadata : dict[str, Any]
name : str = 'unnamed'
outputs : dict[str, Any] | None = None
start_time : float
status : Literal['pending', 'running', 'completed', 'failed'] = 'pending'
tags : list[str]
to_dict() → dict[str, Any]
Convert span to dictionary for serialization.
Returns a JSON-serializable dictionary representation of the span.
type : str = 'langchain:agent'
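The lifecycle described above (complete() and fail() moving a span from 'pending' to a terminal status while stamping end_time) can be illustrated with a minimal stand-in. SpanSketch is a hypothetical dataclass that only mirrors the documented field and method names; it is not the library's implementation:

```python
import time
from dataclasses import dataclass, field
from typing import Any, Optional

@dataclass
class SpanSketch:
    """Hypothetical stand-in mirroring the documented LangChainSpan lifecycle."""
    name: str = "unnamed"
    start_time: float = field(default_factory=time.time)
    end_time: Optional[float] = None
    status: str = "pending"
    outputs: Optional[dict[str, Any]] = None
    error: Optional[dict[str, str]] = None

    def complete(self, outputs: Optional[dict[str, Any]] = None) -> None:
        # Mark the span as completed with outputs, per the documented contract
        self.status = "completed"
        self.end_time = time.time()
        self.outputs = outputs

    def fail(self, error: Exception) -> None:
        # Mark the span as failed and capture basic error details
        self.status = "failed"
        self.end_time = time.time()
        self.error = {"error_type": type(error).__name__, "message": str(error)}

span = SpanSketch(name="demo")
span.complete({"answer": 42})
```

A real LangChainSpan additionally carries inputs, metadata, tags, and the type discriminator shown above.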
class prefactor_langchain.LangChainToolSchemaConfig(span_type: str, input_schema: dict[str, Any])
Bases: object
Configuration for a tool-specific LangChain span schema.
span_type
Tool-specific span type suffix or full span type. Values are normalized to langchain:tool:<suffix>.
- Type: str
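The normalization rule above (bare suffixes become langchain:tool:<suffix>, full span types pass through) can be sketched as follows; the function name is hypothetical, not part of the public API:

```python
def normalize_tool_span_type(value: str) -> str:
    # Sketch of the documented normalization: accept either a bare suffix
    # ("search") or an already-full span type ("langchain:tool:search").
    prefix = "langchain:tool:"
    return value if value.startswith(prefix) else prefix + value
```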
input_schema
JSON schema for the tool arguments stored in inputs for tool-specific spans.
- Type: dict[str, Any]
input_schema : dict[str, Any]
span_type : str
class prefactor_langchain.PrefactorMiddleware(client: PrefactorCoreClient | None = None, agent_id: str = 'langchain-agent', agent_name: str | None = None, instance: AgentInstanceHandle | None = None, tool_schemas: Mapping[str, LangChainToolSchemaConfig | Mapping[str, Any]] | None = None)
Bases: AgentMiddleware
LangChain middleware for automatic tracing.
This middleware integrates with LangChain’s middleware system to automatically create and emit spans for agent execution, LLM calls, and tool executions.
Three usage patterns are supported:
- Pre-configured Client (recommended): Pass a pre-configured client for full control over settings. The user is responsible for the client lifecycle.
- Pre-configured Instance: Pass an existing AgentInstanceHandle to share a single instance between the LangChain middleware and other parts of your program. Use this when you need to create spans outside of the LangChain agent (e.g. for custom pre/post-processing steps). The caller owns the instance lifecycle and must call instance.finish() themselves.
- Factory Pattern: Use from_config() for quick setup. The middleware owns both client and agent instance lifecycle.
Example - Pre-configured Client:

    from prefactor_core import PrefactorCoreClient, PrefactorCoreConfig
    from prefactor_http.config import HttpClientConfig

    # Configure and initialize the client yourself
    http_config = HttpClientConfig(api_url="...", api_token="...")
    config = PrefactorCoreConfig(http_config=http_config)
    client = PrefactorCoreClient(config)
    await client.initialize()

    # Create middleware with the pre-configured client
    middleware = PrefactorMiddleware(
        client=client, agent_id="my-agent", agent_name="My Agent",
    )

    # User must close both middleware and client
    await middleware.close()  # Only closes the agent instance
    await client.close()  # User closes their own client
Example - Pre-configured Instance (spans outside the agent):

    from prefactor_core import PrefactorCoreClient, PrefactorCoreConfig
    from prefactor_http.config import HttpClientConfig

    http_config = HttpClientConfig(api_url="...", api_token="...")
    config = PrefactorCoreConfig(http_config=http_config)
    client = PrefactorCoreClient(config)
    await client.initialize()

    instance = await client.create_agent_instance(agent_id="my-agent")
    await instance.start()

    # Share the same instance with the middleware AND your own code
    middleware = PrefactorMiddleware(instance=instance)

    # Instrument your own code with the same instance
    async with instance.span("custom:preprocessing") as ctx:
        ctx.set_result({"step": "preprocess", "status": "ok"})

    # Run your LangChain agent (the middleware traces it automatically)
    result = agent.invoke({"messages": [...]})

    # Caller is responsible for cleanup
    await instance.finish()
    await client.close()
Example - Factory Pattern:

    middleware = PrefactorMiddleware.from_config(
        api_url="https://api.prefactor.ai",
        api_token="my-token",
        agent_id="my-agent",
        agent_name="My Agent",
    )

    # Middleware manages both client and agent instance
    await middleware.close()  # Closes both
async aafter_agent(state: Any, runtime: Any) → dict[str, Any] | None
Async hook called after the agent completes execution.
Finishes the langchain:agent span opened by abefore_agent by exiting its async context manager.
- Parameters:
- state – The agent state.
- runtime – The runtime context.
- Returns: Optional state updates.
async abefore_agent(state: Any, runtime: Any) → dict[str, Any] | None
Async hook called before the agent starts execution.
Creates a langchain:agent span using the async context manager so that SpanContextStack is updated automatically. Any outer workflow span already on the stack (e.g. workflow:agent_step) is picked up as the parent without any manual set_parent_span() call.
The span context is kept open and stored in _agent_span_context until aafter_agent exits it.
- Parameters:
- state – The agent state.
- runtime – The runtime context.
- Returns: Optional state updates.
after_agent(state: Any, runtime: Any) → dict[str, Any] | None
Hook called after the agent completes execution.
Finishes the agent span created in before_agent.
- Parameters:
- state – The agent state.
- runtime – The runtime context.
- Returns: Optional state updates.
async awrap_model_call(request: Any, handler: Callable[[Any], Any]) → Any
Wrap async model calls to trace LLM execution.
- Parameters:
- request – The model request.
- handler – The function that executes the model call.
- Returns: The model response.
async awrap_tool_call(request: Any, handler: Callable[[Any], Any]) → Any
Wrap async tool calls to trace tool execution.
- Parameters:
- request – The tool request.
- handler – The function that executes the tool call.
- Returns: The tool response.
before_agent(state: Any, runtime: Any) → dict[str, Any] | None
Hook called before the agent starts execution.
Creates a root span for the entire agent execution.
- Parameters:
- state – The agent state.
- runtime – The runtime context.
- Returns: Optional state updates.
async close() → None
Section titled “async close() → None”Close the middleware and cleanup resources.
Awaits all in-flight span-emit tasks first, then closes the agent instance (if we created it) and finally the client.
async ensure_initialized() → AgentInstanceHandle
Section titled “async ensure_initialized() → AgentInstanceHandle”Initialize the middleware and return the agent instance handle.
classmethod from_config(api_url: str, api_token: str, agent_id: str = 'langchain-agent', agent_name: str | None = None, schema_registry: SchemaRegistry | None = None, include_langchain_schemas: bool = True, tool_schemas: Mapping[str, LangChainToolSchemaConfig | Mapping[str, Any]] | None = None) → PrefactorMiddleware
Factory method to create middleware from configuration.
This creates a client and middleware with the specified settings. The middleware owns the client and will auto-initialize it on first use.
- Parameters:
- api_url – The Prefactor API URL.
- api_token – The API token for authentication.
- agent_id – Optional agent identifier for categorization.
- agent_name – Optional human-readable agent name.
- schema_registry – Optional SchemaRegistry for registering span schemas.
- include_langchain_schemas – If True and schema_registry is provided, automatically register LangChain-specific schemas.
- tool_schemas – Optional per-tool schema configuration for tool-specific span types.
- Returns: A configured PrefactorMiddleware instance with lazy initialization.
Example
    middleware = PrefactorMiddleware.from_config(
        api_url="https://api.prefactor.ai",
        api_token="my-token",
        agent_id="my-agent",
        agent_name="My Agent",
    )

    # Middleware auto-initializes on first use

    # Cleanup when done:
    await middleware.close()  # Closes both agent instance and client
set_parent_span(span_id: str | None) → None
Set the parent span ID for the next agent invocation (sync path only).
Only needed when using agent.invoke() via run_in_executor. In that case, before_agent runs in a worker thread where contextvars are not inherited, so the parent span ID must be passed explicitly before entering the executor.
When using agent.ainvoke() (the recommended async path), parent wiring is automatic via SpanContextStack; do not call this method.
- Parameters: span_id – The span ID to use as the parent, or None to clear it.
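The sync-path caveat can be seen with plain asyncio, independent of Prefactor: a worker thread started via run_in_executor does not inherit contextvars set in the event loop's context. The parent_span variable below is a stand-in for the middleware's internal context, used only to demonstrate why the parent span ID must be passed explicitly on the sync path:

```python
import asyncio
import contextvars

# Stand-in for a parent-span contextvar consulted inside the agent hooks
parent_span = contextvars.ContextVar("parent_span", default=None)

def read_parent():
    # Runs in a worker thread; run_in_executor does not copy the caller's context
    return parent_span.get()

async def main():
    parent_span.set("span-123")
    loop = asyncio.get_running_loop()
    in_worker = await loop.run_in_executor(None, read_parent)
    in_loop = parent_span.get()
    return in_worker, in_loop

in_worker, in_loop = asyncio.run(main())
# in_loop is "span-123", but in_worker is None: the worker thread saw the default
```

This is exactly the gap set_parent_span() bridges for agent.invoke(); ainvoke() stays within the event loop's context, so no bridging is needed.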
wrap_model_call(request: Any, handler: Callable[[Any], Any]) → Any
Wrap synchronous model calls to trace LLM execution.
- Parameters:
- request – The model request.
- handler – The function that executes the model call.
- Returns: The model response.
wrap_tool_call(request: Any, handler: Callable[[Any], Any]) → Any
Wrap synchronous tool calls to trace tool execution.
- Parameters:
- request – The tool request.
- handler – The function that executes the tool call.
- Returns: The tool response.
class prefactor_langchain.TokenUsage(prompt_tokens: int, completion_tokens: int, total_tokens: int)
Bases: object
Token usage information for LLM calls.
completion_tokens : int
prompt_tokens : int
to_dict() → dict[str, Any]
Convert to dictionary for serialization.
total_tokens : int
class prefactor_langchain.ToolSpan(name: str = 'unnamed', start_time: float = <factory>, end_time: float | None = None, status: Literal['pending', 'running', 'completed', 'failed'] = 'pending', inputs: dict[str, ~typing.Any] = <factory>, outputs: dict[str, ~typing.Any] | None = None, metadata: dict[str, ~typing.Any] = <factory>, tags: list[str] = <factory>, error: ErrorInfo | None = None, type: str = 'langchain:tool', tool_name: str | None = None, tool_schema: dict[str, ~typing.Any] | None = None, arguments: dict[str, ~typing.Any] = <factory>, execution_time_ms: int | None = None, tool_type: str | None = None, retriever_metadata: dict[str, ~typing.Any] | None = None)
Bases: LangChainSpan
Span representing a tool execution.
Captures tool-specific metadata including the tool name, schema, arguments, and execution time. Can represent any tool call including retrievers (with appropriate metadata).
arguments : dict[str, Any]
execution_time_ms : int | None = None
retriever_metadata : dict[str, Any] | None = None
to_dict() → dict[str, Any]
Convert to dictionary including tool-specific fields.
tool_name : str | None = None
tool_schema : dict[str, Any] | None = None
tool_type : str | None = None
type : str = 'langchain:tool'
prefactor_langchain.compile_langchain_agent_schema(agent_schema: Mapping[str, Any] | None = None, *, tool_schemas: Mapping[str, LangChainToolSchemaConfig | Mapping[str, Any]] | None = None) → tuple[dict[str, Any], dict[str, str]]
Compile a LangChain agent schema with optional tool-specific span types.
- Parameters:
- agent_schema – Optional base agent schema to merge with the built-in LangChain span schemas.
- tool_schemas – Optional Python-first per-tool schema configuration.
- Returns: A tuple of (compiled_agent_schema, tool_span_types), where tool_span_types maps tool names to normalized span types.
prefactor_langchain.extract_error_info(error: Exception) → ErrorInfo
Section titled “prefactor_langchain.extract_error_info(error: Exception) → ErrorInfo”Extract error information from an exception.
- Parameters: error – The exception to extract information from.
- Returns: ErrorInfo containing error details.
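The kind of extraction described can be approximated with the standard traceback module. This is a standalone sketch, not the library's implementation; only the field names follow the ErrorInfo dataclass documented above:

```python
import traceback
from typing import Any

def extract_error_info_sketch(error: Exception) -> dict[str, Any]:
    # Capture the exception's type name, message, and formatted stacktrace
    # (stacktrace is None when no traceback is attached to the exception).
    tb = error.__traceback__
    stack = "".join(traceback.format_exception(type(error), error, tb)) if tb else None
    return {
        "error_type": type(error).__name__,
        "message": str(error),
        "stacktrace": stack,
    }

try:
    raise ValueError("bad input")
except ValueError as exc:
    info = extract_error_info_sketch(exc)
```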
prefactor_langchain.extract_token_usage(response: Any) → TokenUsage | None
Section titled “prefactor_langchain.extract_token_usage(response: Any) → TokenUsage | None”Extract token usage from a ModelResponse.
Checks each message in response.result for usage_metadata
(the standard LangChain field populated by all providers), accumulating
totals across messages.
- Parameters: response – A ModelResponse object from LangChain.
- Returns: TokenUsage if available, None otherwise.
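The accumulation described above can be sketched as follows. This is a standalone approximation, not the library's code: the real function takes a ModelResponse, while this sketch takes a bare message list; the input_tokens/output_tokens/total_tokens keys follow LangChain's usage_metadata convention:

```python
from typing import Any, Optional

def accumulate_token_usage(messages: list[Any]) -> Optional[dict[str, int]]:
    # Sum usage_metadata across messages; return None when no message carries it,
    # mirroring the documented "TokenUsage if available, None otherwise" contract.
    prompt = completion = total = 0
    found = False
    for message in messages:
        usage = getattr(message, "usage_metadata", None)
        if isinstance(message, dict):
            usage = message.get("usage_metadata")
        if usage:
            found = True
            prompt += usage.get("input_tokens", 0)
            completion += usage.get("output_tokens", 0)
            total += usage.get("total_tokens", 0)
    if not found:
        return None
    return {"prompt_tokens": prompt, "completion_tokens": completion, "total_tokens": total}
```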
prefactor_langchain.register_langchain_schemas(registry: Any, *, agent_schema: Mapping[str, Any] | None = None, tool_schemas: Mapping[str, LangChainToolSchemaConfig | Mapping[str, Any]] | None = None) → dict[str, str]
Register all LangChain span schemas with a schema registry.
Registers the built-in schemas for LangChain-specific span types
(agent, llm, tool) using the full span_type_schemas form, which
includes params schemas, result schemas, titles, and descriptions. When
tool schemas are configured, this also registers per-tool span types.
- Parameters:
- registry – The SchemaRegistry to register schemas with.
- agent_schema – Optional base agent schema that may include embedded toolSchemas or tool_schemas config.
- tool_schemas – Optional Python-first per-tool schema configuration.
- Returns: A dict mapping tool names to their normalized span types. Returns an empty dict when no tool-specific schemas are registered.
Example
    from prefactor_core import SchemaRegistry
    from prefactor_langchain.schemas import register_langchain_schemas

    registry = SchemaRegistry()
    register_langchain_schemas(registry)

    # Now the registry has langchain:agent, langchain:llm, langchain:tool
    assert registry.has_schema("langchain:llm")
Submodules
- prefactor_langchain.metadata_extractor module
- prefactor_langchain.middleware module
  - PrefactorMiddleware
    - PrefactorMiddleware.aafter_agent()
    - PrefactorMiddleware.abefore_agent()
    - PrefactorMiddleware.after_agent()
    - PrefactorMiddleware.awrap_model_call()
    - PrefactorMiddleware.awrap_tool_call()
    - PrefactorMiddleware.before_agent()
    - PrefactorMiddleware.close()
    - PrefactorMiddleware.ensure_initialized()
    - PrefactorMiddleware.from_config()
    - PrefactorMiddleware.set_parent_span()
    - PrefactorMiddleware.wrap_model_call()
    - PrefactorMiddleware.wrap_tool_call()
- prefactor_langchain.schemas module
- prefactor_langchain.spans module