prefactor-langchain
LangChain integration for Prefactor observability. This package provides automatic tracing for LangChain agents using LangChain-specific span types.
Installation

```sh
pip install prefactor-langchain
```

Factory pattern (quickest setup)
```python
from prefactor_langchain import LangChainToolSchemaConfig, PrefactorMiddleware

middleware = PrefactorMiddleware.from_config(
    api_url="https://api.prefactor.ai",
    api_token="your-api-token",
    agent_id="my-agent",
    agent_name="My Agent",  # optional
    tool_schemas={
        "send_email": LangChainToolSchemaConfig(
            span_type="send-email",
            input_schema={
                "type": "object",
                "properties": {
                    "to": {"type": "string", "format": "email"},
                    "subject": {"type": "string"},
                },
                "required": ["to", "subject"],
            },
        )
    },
)

# Use with LangChain's create_agent()
# Your agent will automatically create spans for:
# - Agent execution (langchain:agent)
# - LLM calls (langchain:llm)
# - Tool executions (langchain:tool)
# - Tool-specific executions (for example langchain:tool:send-email)
result = agent.invoke({"messages": [...]})

# Middleware owns both client and instance; close when done
await middleware.close()
```

Pre-configured client
Pass a client you created yourself when you need full control over its configuration or when you want to share a client across multiple middlewares.
```python
from prefactor_core import PrefactorCoreClient, PrefactorCoreConfig
from prefactor_http.config import HttpClientConfig
from prefactor_langchain import PrefactorMiddleware

http_config = HttpClientConfig(api_url="https://api.prefactor.ai", api_token="your-api-token")
config = PrefactorCoreConfig(http_config=http_config)
client = PrefactorCoreClient(config)
await client.initialize()

middleware = PrefactorMiddleware(
    client=client,
    agent_id="my-agent",
    agent_name="My Agent",
)

result = agent.invoke({"messages": [...]})

# You own the client; close both separately
await middleware.close()  # closes the agent instance only
await client.close()
```

SchemaRegistry composition
Use a shared SchemaRegistry when you want custom workflow span types and
LangChain tool schemas to be published together.
```python
from prefactor_core import SchemaRegistry
from prefactor_langchain import (
    LangChainToolSchemaConfig,
    PrefactorMiddleware,
    register_langchain_schemas,
)

registry = SchemaRegistry()
registry.register_type(
    name="workflow:run",
    params_schema={"type": "object"},
    result_schema={"type": "object"},
)
register_langchain_schemas(
    registry,
    tool_schemas={
        "send_email": LangChainToolSchemaConfig(
            span_type="send-email",
            input_schema={
                "type": "object",
                "properties": {"to": {"type": "string"}},
                "required": ["to"],
            },
        )
    },
)

middleware = PrefactorMiddleware.from_config(
    api_url="https://api.prefactor.ai",
    api_token="your-api-token",
    agent_id="my-agent",
    schema_registry=registry,
)
```

Pre-configured instance (spans outside the agent)
Pass an AgentInstanceHandle you created yourself when you also need to
instrument code that lives outside the LangChain agent — for example,
pre-processing steps, post-processing, or any custom business logic that
should appear as siblings of the agent spans in the same trace.
```python
from prefactor_core import PrefactorCoreClient, PrefactorCoreConfig
from prefactor_http.config import HttpClientConfig
from prefactor_langchain import PrefactorMiddleware

http_config = HttpClientConfig(api_url="https://api.prefactor.ai", api_token="your-api-token")
config = PrefactorCoreConfig(http_config=http_config)
client = PrefactorCoreClient(config)
await client.initialize()

instance = await client.create_agent_instance(agent_id="my-agent")
await instance.start()

# Share the instance with the middleware
middleware = PrefactorMiddleware(instance=instance)

# Instrument your own code using the same instance
async with instance.span("custom:preprocessing") as ctx:
    ctx.set_payload({"step": "preprocess", "status": "ok"})

# Run your agent — the middleware traces it automatically under the same instance
result = agent.invoke({"messages": [...]})

async with instance.span("custom:postprocessing") as ctx:
    ctx.set_payload({"step": "postprocess", "result": str(result)})

# You own the instance and client; clean them up yourself
await instance.finish()
await client.close()
```

If you pass tool_schemas=... with a pre-created instance, the middleware
uses those mappings to emit the right tool span types at runtime. The instance’s
already-registered schema version is not mutated, so you must register matching
tool schemas before creating the instance if you want those per-tool span types
to appear in the backend schema version.
Span Types
This package creates LangChain-specific spans with the langchain:* namespace:

- langchain:agent - Agent executions and chain runs
- langchain:llm - LLM calls with model metadata (name, provider, token usage)
- langchain:tool - Tool executions, including retrievers
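The per-tool naming shown earlier (langchain:tool:send-email) follows from the tool_schemas mapping: a tool with a registered schema config gets a specific sub-type, and everything else falls back to the generic langchain:tool span. This behavior can be sketched in plain Python — `ToolConfig` and `tool_span_type` are illustrative stand-ins, not the package's actual internals:

```python
# Illustrative sketch of the per-tool span naming convention.
# `ToolConfig` stands in for LangChainToolSchemaConfig.
from dataclasses import dataclass


@dataclass
class ToolConfig:
    span_type: str


def tool_span_type(tool_name: str, tool_schemas: dict[str, ToolConfig]) -> str:
    """Return the span type emitted for a tool execution."""
    cfg = tool_schemas.get(tool_name)
    # Configured tools get a specific sub-type; others use the generic span.
    return f"langchain:tool:{cfg.span_type}" if cfg else "langchain:tool"


schemas = {"send_email": ToolConfig(span_type="send-email")}
print(tool_span_type("send_email", schemas))  # langchain:tool:send-email
print(tool_span_type("web_search", schemas))  # langchain:tool
```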
Each span payload includes:
- Timing information (start_time, end_time)
- Inputs and outputs
- Error information with stack traces
- LangChain-specific metadata
Trace correlation (span_id, parent_span_id, trace_id) is handled automatically by the prefactor-core client.
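Putting the pieces above together, a span payload might look roughly like the following. This is a hypothetical shape assembled from the field list above — the field names and exact wire format are assumptions, not the documented schema:

```python
# Hypothetical langchain:llm span payload, based on the fields listed above.
# The actual wire format produced by the middleware may differ.
payload = {
    "start_time": "2024-01-01T00:00:00Z",
    "end_time": "2024-01-01T00:00:02Z",
    "inputs": {"messages": [{"role": "user", "content": "Hi"}]},
    "outputs": {"content": "Hello!"},
    "error": None,  # would carry error type, message, and stack trace on failure
    "metadata": {"model": "gpt-4o", "provider": "openai", "total_tokens": 42},
}

# Correlation fields (span_id, parent_span_id, trace_id) are handled by
# prefactor-core, so they are not part of the payload itself.
assert "span_id" not in payload
```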
Features
- Automatic LLM call tracing - Captures model name, provider, token usage, temperature
- Tool execution tracing - Records tool name, arguments, execution time
- Agent/chain tracing - Tracks agent lifecycle and message history
- Token usage capture - Automatically extracts prompt/completion/total tokens
- Error tracking - Captures error type, message, and stack traces
- Automatic parent-child relationships - Uses SpanContextStack for hierarchy
- Bring your own instance - Share a single AgentInstanceHandle between the middleware and your own instrumentation
Architecture
This package follows the LangChain Adapter Redesign principles:
- Package Isolation: LangChain-specific span types and schemas live in this package
- Opaque Payloads: Span data is sent as payload to prefactor-core
- Type Namespacing: Uses langchain:agent, langchain:llm, and langchain:tool prefixes
- Uses prefactor-core: All span/instance management via the prefactor-core client
The middleware:
- Accepts a PrefactorCoreClient, or a pre-created AgentInstanceHandle, or creates its own client via from_config()
- Registers or borrows an agent instance
- Creates spans with LangChain-specific payloads
- Leverages SpanContextStack for automatic parent detection
Development
Run tests:

```sh
pytest tests/
```

License

MIT