Upgrade paths
Pick your SDK to follow the relevant migration steps.
Python SDK v2 → v3
The Python SDK v3 introduces significant improvements and changes compared to the legacy v2 SDK. It is not fully backward compatible. This guide will help you migrate based on your current integration.
You can find a snapshot of the v2 SDK documentation here.
Core changes compared to the v2 SDK:
- OpenTelemetry Foundation: v3 is built on OpenTelemetry standards
- Trace Input/Output: now derived from the root observation by default
- Trace Attributes (`user_id`, `session_id`, etc.): can be set via enclosing spans OR directly on integration calls (OpenAI call, LangChain invocation) using metadata fields
- Context Management: automatic OTEL context propagation
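For example, automatic context propagation means nested `@observe` calls attach to the same trace without manual plumbing. A minimal sketch (function names are illustrative):

```python
from langfuse import observe, get_client

@observe()
def fetch_context(query: str) -> str:
    # Runs inside the enclosing trace via OTEL context propagation
    return f"context for {query}"

@observe()
def answer(query: str) -> str:
    context = fetch_context(query)  # becomes a child observation automatically
    get_client().update_current_trace(user_id="user_123")
    return f"answer based on {context}"
```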
Migration Path by Integration Type
@observe Decorator Users
v2 Pattern:
```python
from langfuse.decorators import langfuse_context, observe

@observe()
def my_function():
    # This was the trace
    langfuse_context.update_current_trace(user_id="user_123")
    return "result"
```

v3 Migration:
```python
from langfuse import observe, get_client  # new import

@observe()
def my_function():
    # This is now the root span, not the trace
    langfuse = get_client()

    # Update trace explicitly
    langfuse.update_current_trace(user_id="user_123")
    return "result"
```

OpenAI Integration
v2 Pattern:
```python
from langfuse.openai import openai

response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    # Trace attributes directly on the call
    user_id="user_123",
    session_id="session_456",
    tags=["chat"],
    metadata={"source": "app"},
)
```

v3 Migration:
If you do not set additional trace attributes, no changes are needed.
If you set additional trace attributes, you have two options:
Option 1: Use metadata fields (simplest migration):
```python
from langfuse.openai import openai

response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    metadata={
        "langfuse_user_id": "user_123",
        "langfuse_session_id": "session_456",
        "langfuse_tags": ["chat"],
        "source": "app",  # Regular metadata still works
    },
)
```

Option 2: Use an enclosing span (for more control):
```python
from langfuse import get_client, propagate_attributes
from langfuse.openai import openai

langfuse = get_client()

with langfuse.start_as_current_observation(as_type="span", name="chat-request") as span:
    with propagate_attributes(
        user_id="user_123",
        session_id="session_456",
        tags=["chat"],
    ):
        response = openai.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": "Hello"}],
            metadata={"source": "app"},
        )

    # Set trace input and output explicitly
    span.update_trace(
        input={"query": "Hello"},
        output={"response": response.choices[0].message.content},
    )
```

LangChain Integration
v2 Pattern:
```python
from langfuse.callback import CallbackHandler

handler = CallbackHandler(
    user_id="user_123",
    session_id="session_456",
    tags=["langchain"],
)

response = chain.invoke({"input": "Hello"}, config={"callbacks": [handler]})
```

v3 Migration:
You have two options for setting trace attributes:
Option 1: Use metadata fields in chain invocation (simplest migration):
```python
from langfuse.langchain import CallbackHandler

handler = CallbackHandler()

response = chain.invoke(
    {"input": "Hello"},
    config={
        "callbacks": [handler],
        "metadata": {
            "langfuse_user_id": "user_123",
            "langfuse_session_id": "session_456",
            "langfuse_tags": ["langchain"],
        },
    },
)
```

Option 2: Use an enclosing span (for more control):
```python
from langfuse import get_client, propagate_attributes
from langfuse.langchain import CallbackHandler

langfuse = get_client()

with langfuse.start_as_current_observation(as_type="span", name="langchain-request") as span:
    with propagate_attributes(
        user_id="user_123",
        session_id="session_456",
        tags=["langchain"],
    ):
        handler = CallbackHandler()
        response = chain.invoke({"input": "Hello"}, config={"callbacks": [handler]})

    # Set trace input and output explicitly
    span.update_trace(
        input={"query": "Hello"},
        output={"response": response},
    )
```

LlamaIndex Integration Users
v2 Pattern:
```python
from llama_index.core import Settings
from llama_index.core.callbacks import CallbackManager
from langfuse.llama_index import LlamaIndexCallbackHandler

handler = LlamaIndexCallbackHandler()
Settings.callback_manager = CallbackManager([handler])

response = index.as_query_engine().query("Hello")
```

v3 Migration:
```python
from langfuse import get_client, propagate_attributes
from openinference.instrumentation.llama_index import LlamaIndexInstrumentor

# Use third-party OTEL instrumentation
LlamaIndexInstrumentor().instrument()

langfuse = get_client()

with langfuse.start_as_current_observation(as_type="span", name="llamaindex-query") as span:
    with propagate_attributes(
        user_id="user_123",
    ):
        response = index.as_query_engine().query("Hello")

    span.update_trace(
        input={"query": "Hello"},
        output={"response": str(response)},
    )
```

Low-Level SDK Users
v2 Pattern:
```python
from langfuse import Langfuse

langfuse = Langfuse()

trace = langfuse.trace(
    name="my-trace",
    user_id="user_123",
    input={"query": "Hello"},
)

generation = trace.generation(
    name="llm-call",
    model="gpt-4o",
)
generation.end(output="Response")
```

v3 Migration:
In v3, manually started spans and generations must be ended by calling `.end()` on the returned object; context managers end them automatically on exit.
```python
from langfuse import get_client, propagate_attributes

langfuse = get_client()

# Use context managers instead of manual objects
with langfuse.start_as_current_observation(
    as_type="span",
    name="my-trace",
    input={"query": "Hello"},  # Becomes trace input automatically
) as root_span:
    # Propagate trace attributes to all child observations
    with propagate_attributes(
        user_id="user_123",
    ):
        with langfuse.start_as_current_observation(
            as_type="generation",
            name="llm-call",
            model="gpt-4o",
        ) as generation:
            generation.update(output="Response")

    # If needed, override trace input/output
    root_span.update_trace(
        input={"query": "Hello"},
        output={"response": "Response"},
    )
```

Key Migration Checklist
- Update Imports:
  - Use `from langfuse import get_client` to access the global client instance configured via environment variables
  - Use `from langfuse import Langfuse` to create a new client instance configured via constructor parameters
  - Use `from langfuse import observe` to import the observe decorator
  - Update integration imports, e.g. `from langfuse.langchain import CallbackHandler`
- Trace Attributes Pattern:
  - Option 1: Use metadata fields (`langfuse_user_id`, `langfuse_session_id`, `langfuse_tags`) directly in integration calls
  - Option 2: Move `user_id`, `session_id`, and `tags` to `propagate_attributes()`
- Trace Input/Output:
  - Critical for LLM-as-a-judge: explicitly set trace input/output
  - Don't rely on automatic derivation from the root observation if you need specific values
- Context Managers:
  - Replace manual `langfuse.trace()` and `trace.span()` calls with context managers if you want to use them
  - Use `with langfuse.start_as_current_observation()` instead
- LlamaIndex Migration:
  - Replace the Langfuse callback with third-party OTEL instrumentation
  - Install: `pip install openinference-instrumentation-llama-index`
- ID Management:
  - No custom observation IDs: v3 uses the W3C Trace Context standard, so you cannot set custom observation IDs
  - Trace ID format: must be a 32-character lowercase hexadecimal string (16 bytes)
  - External ID correlation: use `Langfuse.create_trace_id(seed=external_id)` to generate deterministic trace IDs from external systems

```python
from langfuse import Langfuse, observe

# v3: Generate deterministic trace ID from external system
external_request_id = "req_12345"
trace_id = Langfuse.create_trace_id(seed=external_request_id)

@observe(langfuse_trace_id=trace_id)
def my_function():
    # This trace will have the deterministic ID
    pass
```
- Initialization:
  - Replace constructor parameters: `enabled` → `tracing_enabled`, `threads` → `media_upload_thread_count`
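For example, a v2 constructor call updates as follows (the values shown are illustrative):

```python
from langfuse import Langfuse

# v2: Langfuse(enabled=False, threads=4)
langfuse = Langfuse(
    tracing_enabled=False,         # was: enabled
    media_upload_thread_count=4,   # was: threads
)
```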
- Datasets:
  - The `link` method on dataset item objects has been replaced by a context manager, accessed via the `run` method on dataset items. This higher-level abstraction manages trace creation and links the dataset item to the resulting trace (see the sketch after this list).
  - See the datasets documentation for more details.
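A minimal sketch of the new pattern, assuming a dataset named "my-dataset" exists and `my_app` is a hypothetical application function:

```python
from langfuse import get_client

langfuse = get_client()
dataset = langfuse.get_dataset("my-dataset")

for item in dataset.items:
    # run() creates a trace and links the dataset item to it
    with item.run(run_name="migration-test") as root_span:
        output = my_app(item.input)  # hypothetical application logic
        root_span.update_trace(input=item.input, output=output)
```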
Detailed Change Summary
- Core Change: OpenTelemetry Foundation
  - Built on OpenTelemetry standards for better ecosystem compatibility
- Trace Input/Output Behavior
  - v2: Integrations could set trace input/output directly
  - v3: Trace input/output is derived from the root observation by default
  - Migration: Explicitly set them via `span.update_trace(input=..., output=...)`
- Trace Attributes Location
  - v2: Could be set directly on integration calls
  - v3: Must be set on enclosing spans or passed via `langfuse_*` metadata fields
  - Migration: Wrap integration calls with `langfuse.start_as_current_observation()`
- Creating Observations
  - v2: `langfuse.trace()`, `langfuse.span()`, `langfuse.generation()`
  - v3: `langfuse.start_as_current_observation()`
  - Migration: Use context managers; ensure `.end()` is called or use `with` statements
- IDs and Context
  - v3: W3C Trace Context format, automatic context propagation
  - Migration: Use `langfuse.get_current_trace_id()` instead of `get_trace_id()`
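For example, inside an active observation context (a minimal sketch):

```python
from langfuse import get_client, observe

@observe()
def my_function():
    # v3: read the active trace ID from the OTEL context
    trace_id = get_client().get_current_trace_id()
    return trace_id
```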
- Event Size Limitations
  - v2: Events were limited to 1 MB in size
  - v3: No size limits are enforced on the SDK side for events
Future support for v2
We will continue to support the v2 SDK for the foreseeable future with critical bug fixes and security patches. We will not be adding any new features to the v2 SDK. You can find a snapshot of the v2 SDK documentation here.
JS/TS SDK v3 → v4
Please follow each section below to upgrade your application from v3 to v4.
If you encounter any questions or issues while upgrading, please raise an issue on GitHub.
Initialization
The Langfuse base URL environment variable is now `LANGFUSE_BASE_URL` and no longer `LANGFUSE_BASEURL`. For backward compatibility, the latter still works in v4 but will not in future versions.
Tracing
The v4 SDK tracing is a major rewrite based on OpenTelemetry and introduces several breaking changes.
- OTEL-based Architecture: The SDK is now built on top of OpenTelemetry. An OpenTelemetry setup is now required; it is done by registering the `LangfuseSpanProcessor` with an OpenTelemetry `NodeSDK` (see the sketch below).
- New Tracing Functions: The `langfuse.trace()`, `langfuse.span()`, and `langfuse.generation()` methods have been replaced by `startObservation`, `startActiveObservation`, etc., from the `@langfuse/tracing` package.
- Separation of Concerns:
  - The `@langfuse/tracing` and `@langfuse/otel` packages are for tracing.
  - The `@langfuse/client` package and the `LangfuseClient` class are now only for non-tracing features like scoring, prompt management, and datasets.
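A minimal setup sketch, assuming the processor reads credentials from the standard `LANGFUSE_*` environment variables (`spanProcessors` is the standard `NodeSDK` option):

```ts
import { NodeSDK } from "@opentelemetry/sdk-node";
import { LangfuseSpanProcessor } from "@langfuse/otel";

// Register the Langfuse span processor with the OpenTelemetry NodeSDK
const sdk = new NodeSDK({
  spanProcessors: [new LangfuseSpanProcessor()],
});

sdk.start();
```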
See the SDK v4 docs for details on each.
Prompt Management
- Import: The import of the Langfuse client is now:

  ```ts
  import { LangfuseClient } from "@langfuse/client";
  ```

- Usage: The usage of the Langfuse client is now:

  ```ts
  const langfuse = new LangfuseClient();

  const prompt = await langfuse.prompt.get("my-prompt");
  const compiledPrompt = prompt.compile({ topic: "developers" });

  const response = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: compiledPrompt }],
  });
  ```

- `version` is now an optional property of the options object of `langfuse.prompt.get()` instead of a positional argument:

  ```ts
  const prompt = await langfuse.prompt.get("my-prompt", { version: "1.0" });
  ```
OpenAI integration
- Import: The import of the OpenAI integration is now:

  ```ts
  import { observeOpenAI } from "@langfuse/openai";
  ```

- You can now set the `environment` and `release` via the `LANGFUSE_TRACING_ENVIRONMENT` and `LANGFUSE_TRACING_RELEASE` environment variables.
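A minimal usage sketch (the client-wrapping pattern carried over from v3):

```ts
import OpenAI from "openai";
import { observeOpenAI } from "@langfuse/openai";

// Wrap the OpenAI client; completions are then traced automatically
const openai = observeOpenAI(new OpenAI());

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello" }],
});
```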
Vercel AI SDK
Works very similarly to v3, but replaces the `LangfuseExporter` from `langfuse-vercel` with the regular `LangfuseSpanProcessor` from `@langfuse/otel`.
Please see the full example on usage with the AI SDK for more details.
Please note that tool definitions provided to the LLM are now mapped to `metadata.tools` and no longer to `input.tools`. This is relevant in case you are running evaluations on your generations.
Langchain integration
- Import: The import of the Langchain integration is now:

  ```ts
  import { CallbackHandler } from "@langfuse/langchain";
  ```

- You can now set the `environment` and `release` via the `LANGFUSE_TRACING_ENVIRONMENT` and `LANGFUSE_TRACING_RELEASE` environment variables.
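Usage stays callback-based; a minimal sketch with a hypothetical `chain`:

```ts
import { CallbackHandler } from "@langfuse/langchain";

const handler = new CallbackHandler();

// Pass the handler via the standard LangChain callbacks config
const response = await chain.invoke(
  { input: "Hello" },
  { callbacks: [handler] }
);
```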
`langfuseClient.getTraceUrl`
- The method is now asynchronous and returns a promise:

  ```ts
  const traceUrl = await langfuseClient.getTraceUrl(traceId);
  ```
Scoring
- Import: The import of the Langfuse client is now:

  ```ts
  import { LangfuseClient } from "@langfuse/client";
  ```

- Usage: The usage of the Langfuse client is now:

  ```ts
  const langfuse = new LangfuseClient();

  await langfuse.score.create({
    traceId: "trace_id_here",
    name: "accuracy",
    value: 0.9,
  });
  ```
See custom scores documentation for new scoring methods.
Datasets
See datasets documentation for new dataset methods.