
prefactor-langchain

LangChain integration for Prefactor observability. This package provides automatic tracing for LangChain agents using LangChain-specific span types.

pip install prefactor-langchain

from prefactor_langchain import LangChainToolSchemaConfig, PrefactorMiddleware

middleware = PrefactorMiddleware.from_config(
    api_url="https://api.prefactor.ai",
    api_token="your-api-token",
    agent_id="my-agent",
    agent_name="My Agent",  # optional
    tool_schemas={
        "send_email": LangChainToolSchemaConfig(
            span_type="send-email",
            input_schema={
                "type": "object",
                "properties": {
                    "to": {"type": "string", "format": "email"},
                    "subject": {"type": "string"},
                },
                "required": ["to", "subject"],
            },
        )
    },
)

# Use with LangChain's create_agent()
# Your agent will automatically create spans for:
# - Agent execution (langchain:agent)
# - LLM calls (langchain:llm)
# - Tool executions (langchain:tool)
# - Tool-specific executions (for example langchain:tool:send-email)
result = agent.invoke({"messages": [...]})

# Middleware owns both client and instance; close when done
await middleware.close()

Pass a client you created yourself when you need full control over its configuration or when you want to share a client across multiple middlewares.

from prefactor_core import PrefactorCoreClient, PrefactorCoreConfig
from prefactor_http.config import HttpClientConfig
from prefactor_langchain import PrefactorMiddleware

http_config = HttpClientConfig(api_url="https://api.prefactor.ai", api_token="your-api-token")
config = PrefactorCoreConfig(http_config=http_config)
client = PrefactorCoreClient(config)
await client.initialize()

middleware = PrefactorMiddleware(
    client=client,
    agent_id="my-agent",
    agent_name="My Agent",
)

result = agent.invoke({"messages": [...]})

# You own the client; close both separately
await middleware.close()  # closes the agent instance only
await client.close()

Use a shared SchemaRegistry when you want custom workflow span types and LangChain tool schemas to be published together.

from prefactor_core import SchemaRegistry
from prefactor_langchain import (
    LangChainToolSchemaConfig,
    PrefactorMiddleware,
    register_langchain_schemas,
)

registry = SchemaRegistry()
registry.register_type(
    name="workflow:run",
    params_schema={"type": "object"},
    result_schema={"type": "object"},
)

register_langchain_schemas(
    registry,
    tool_schemas={
        "send_email": LangChainToolSchemaConfig(
            span_type="send-email",
            input_schema={
                "type": "object",
                "properties": {"to": {"type": "string"}},
                "required": ["to"],
            },
        )
    },
)

middleware = PrefactorMiddleware.from_config(
    api_url="https://api.prefactor.ai",
    api_token="your-api-token",
    agent_id="my-agent",
    schema_registry=registry,
)

Pre-configured instance (spans outside the agent)


Pass an AgentInstanceHandle you created yourself when you also need to instrument code that lives outside the LangChain agent — for example, pre-processing steps, post-processing, or any custom business logic that should appear as siblings of the agent spans in the same trace.

from prefactor_core import PrefactorCoreClient, PrefactorCoreConfig
from prefactor_http.config import HttpClientConfig
from prefactor_langchain import PrefactorMiddleware

http_config = HttpClientConfig(api_url="https://api.prefactor.ai", api_token="your-api-token")
config = PrefactorCoreConfig(http_config=http_config)
client = PrefactorCoreClient(config)
await client.initialize()

instance = await client.create_agent_instance(agent_id="my-agent")
await instance.start()

# Share the instance with the middleware
middleware = PrefactorMiddleware(instance=instance)

# Instrument your own code using the same instance
async with instance.span("custom:preprocessing") as ctx:
    ctx.set_payload({"step": "preprocess", "status": "ok"})

# Run your agent; the middleware traces it automatically under the same instance
result = agent.invoke({"messages": [...]})

async with instance.span("custom:postprocessing") as ctx:
    ctx.set_payload({"step": "postprocess", "result": str(result)})

# You own the instance and client; clean them up yourself
await instance.finish()
await client.close()

If you pass tool_schemas=... with a pre-created instance, the middleware uses those mappings to emit the right tool span types at runtime. The instance’s already-registered schema version is not mutated, so you must register matching tool schemas before creating the instance if you want those per-tool span types to appear in the backend schema version.

This package creates LangChain-specific spans with the langchain:* namespace:

  • langchain:agent - Agent executions and chain runs
  • langchain:llm - LLM calls with model metadata (name, provider, token usage)
  • langchain:tool - Tool executions including retrievers

Each span payload includes:

  • Timing information (start_time, end_time)
  • Inputs and outputs
  • Error information with stack traces
  • LangChain-specific metadata
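
As a rough sketch, an LLM span payload might look like the dict below. Only start_time, end_time, and the prompt/completion/total token counts are named in this document; the remaining key names are assumptions for illustration and may differ from the actual wire format:

```python
# Illustrative payload shape for a langchain:llm span. Key names other than
# start_time/end_time and the token-usage fields are assumed, not documented.
example_payload = {
    "start_time": "2024-01-01T12:00:00Z",
    "end_time": "2024-01-01T12:00:02Z",
    "inputs": {"messages": [{"role": "user", "content": "Hi"}]},
    "outputs": {"content": "Hello!"},
    "error": None,  # would carry error type, message, and stack trace on failure
    "metadata": {
        "model_name": "example-model",   # assumed field name
        "provider": "example-provider",  # assumed field name
        "token_usage": {"prompt": 12, "completion": 5, "total": 17},
    },
}
```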

Trace correlation (span_id, parent_span_id, trace_id) is handled automatically by the prefactor-core client.

  • Automatic LLM call tracing - Captures model name, provider, token usage, temperature
  • Tool execution tracing - Records tool name, arguments, execution time
  • Agent/chain tracing - Tracks agent lifecycle and message history
  • Token usage capture - Automatically extracts prompt/completion/total tokens
  • Error tracking - Captures error type, message, and stack traces
  • Automatic parent-child relationships - Uses SpanContextStack for hierarchy
  • Bring your own instance - Share a single AgentInstanceHandle between the middleware and your own instrumentation

This package follows the LangChain Adapter Redesign principles:

  1. Package Isolation: LangChain-specific span types and schemas live in this package
  2. Opaque Payloads: Span data is sent as payload to prefactor-core
  3. Type Namespacing: Uses langchain:agent, langchain:llm, langchain:tool prefixes
  4. Uses prefactor-core: All span/instance management via the prefactor-core client

The middleware:

  1. Accepts a PrefactorCoreClient or a pre-created AgentInstanceHandle, or creates its own client via from_config()
  2. Registers or borrows an agent instance
  3. Creates spans with LangChain-specific payloads
  4. Leverages SpanContextStack for automatic parent detection
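
The close() ownership rules implied by the examples above (from_config() owns client and instance; a caller-provided client leaves only the instance to the middleware; a caller-provided instance leaves nothing) can be sketched with a toy model. The class and attribute names here are illustrative, not the package's actual implementation:

```python
# Toy model of the cleanup-ownership rules described in this document:
# the middleware closes only the resources it created itself.
class ToyMiddleware:
    def __init__(self, client=None, instance=None):
        # from_config() path: neither was supplied, so both are owned
        self.owns_client = client is None and instance is None
        # an externally supplied instance is never owned
        self.owns_instance = instance is None
        self.client = client
        self.instance = instance

    def close(self):
        closed = []
        if self.owns_instance:
            closed.append("instance")
        if self.owns_client:
            closed.append("client")
        return closed

# from_config(): middleware owns both and closes both
assert ToyMiddleware().close() == ["instance", "client"]
# caller-provided client: middleware closes the instance only
assert ToyMiddleware(client="c").close() == ["instance"]
# caller-provided instance: middleware closes nothing
assert ToyMiddleware(instance="i").close() == []
```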

Run tests:

pytest tests/

MIT