Langfuse

Langfuse is the open source LLM Engineering Platform. This is our Community Discord.

Turn the Langfuse callback on and off in LangChain

How can I turn the Langfuse callback and tracing on and off based on a boolean flag?
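One common pattern is to build the callback list conditionally. A minimal sketch, assuming the Python SDK's LangChain `CallbackHandler` and a hypothetical `ENABLE_LANGFUSE` flag wired to your own config:

```python
import os

from langfuse.callback import CallbackHandler

# Hypothetical flag; read it from your own config or feature-flag system.
ENABLE_LANGFUSE = os.getenv("ENABLE_LANGFUSE", "true").lower() == "true"

# An empty callback list simply disables tracing.
callbacks = [CallbackHandler()] if ENABLE_LANGFUSE else []

# Pass the list on each invocation, e.g.:
# chain.invoke({"topic": "..."}, config={"callbacks": callbacks})
```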

How to use Langfuse with LangGraph Studio

We are using LangGraph Studio with LangGraph JS. Currently, in order to use Langfuse with LangGraph Studio, we have to add a dummy callback wherever possible. Is there a way to provide it once and capture the whole trace?

Not able to calculate the custom model cost

Can someone please help me on this thread: https://github.com/orgs/langfuse/discussions/3313? I am not able to calculate the custom model cost.
Solution:
See my response on GitHub, hope this helps

What is the best way of separating dev, staging, and prod environments?

We want our devs to be able to use Langfuse (preferably locally), and to have traces from the staging env as well as the prod env. What is the best way to separate, and preferably isolate, traces from each env so that we can only view traces from one specific env at a time? Also, since we have integrated with PostHog, we don't want dev or staging data in PostHog.
Solution:
I’d recommend separating the envs into separate projects then
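In practice that means one set of API keys per environment. A minimal sketch, assuming one Langfuse project per env and per-env environment variables:

```python
import os

from langfuse import Langfuse

# Each environment (dev, staging, prod) uses the keys of its own
# Langfuse project, so traces stay isolated per project.
langfuse = Langfuse(
    public_key=os.environ["LANGFUSE_PUBLIC_KEY"],
    secret_key=os.environ["LANGFUSE_SECRET_KEY"],
    host=os.environ.get("LANGFUSE_HOST", "https://cloud.langfuse.com"),
)
```

Since the PostHog integration is configured per Langfuse project, enabling it only on the prod project keeps dev and staging data out of PostHog.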

LangchainCallbackHandler custom input/output?

Is there a way to set/override input/output when using the LangchainCallbackHandler similar to how the @observe() decorator allows updating observation context with custom inputs/outputs?
Solution:
Overriding single observation/span/generation outputs is tricky. You could fetch the trace via the fetch_* methods and then update it. If you want to change the trace output, you could wrap this with the decorator and handle the trace yourself. Details: https://langfuse.com/docs/integrations/langchain/tracing#interoperability What are you trying to achieve here? Maybe I can help in a better way with more context...
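A rough sketch of the wrapping approach from the linked docs, assuming the Python decorator SDK (`my_chain` is a placeholder for any LangChain runnable):

```python
from langfuse.decorators import langfuse_context, observe

@observe()
def run_chain(user_input: str):
    # Reuse the handler scoped to the current trace.
    handler = langfuse_context.get_current_langchain_handler()
    result = my_chain.invoke(  # my_chain: placeholder runnable
        {"input": user_input}, config={"callbacks": [handler]}
    )
    # Override the trace-level input/output explicitly.
    langfuse_context.update_current_trace(input=user_input, output=str(result))
    return result
```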

Can't update a trace between Lambdas using a custom trace ID

Hello! I want to update a trace with information from 2 different Lambdas but can't seem to get it right. Here is the context: I am working on a chatbot using AWS Lambdas. I create a trace with a custom trace_id in lambda_function_1 and use it to register some calls to the OpenAI API. This works fine. Then I generate a response and send the response, together with the custom trace id, to lambda_function_2, which handles the logic to send the response back to the user. After the response is sent, I get a response_id. I want to add the response_id as metadata to the trace created in lambda_function_1. I am doing this by updating the trace with the custom trace id I got from lambda_function_1: ...
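For reference, a minimal sketch of the second Lambda, assuming the low-level Python SDK: calling `langfuse.trace()` with an existing `id` upserts that trace, and `flush()` matters in short-lived Lambda runtimes (handler and helper names are placeholders):

```python
from langfuse import Langfuse

langfuse = Langfuse()

def lambda_handler(event, context):
    trace_id = event["trace_id"]       # custom id created in lambda_function_1
    response_id = send_to_user(event)  # hypothetical delivery helper

    # Reusing the same id updates the existing trace's metadata.
    langfuse.trace(id=trace_id, metadata={"response_id": response_id})

    # Lambda can freeze the process after returning; flush first.
    langfuse.flush()
```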

langfuse-k8s example

Hello! I'm looking at https://github.com/langfuse/langfuse-k8s?tab=readme-ov-file and the example Helm chart provided in the repo. From the README it looks like we can deploy a PG instance by setting `postgresql.deploy` to true, and I was wondering how this maps back to the example provided. Thank you! 😄...
Solution:
Got it working! I didn’t realize that Helm installed the resources directly into GCP. I was under the impression that there would be an example K8s manifest for the PG deployment as well. I ended up looking at the Helm-deployed services and used those .yaml values to update the deployment that I created from the /examples folder...

The name of the model used

I'm using two OpenAI models for a multi-agent system, a fine-tuned one and gpt-4o-mini. Why do I see my fine-tuned model named gpt-4o-mini-2024-07-18 and not by its full name? gpt-4o-mini-2024-07-18 is its base model, but I can't tell whether my fine-tuned model or its base model is being called...

Vercel AI SDK with Svelte

Hi, I am using the Vercel AI SDK with Svelte. To configure it, I have used the NodeSDK and created an instrumentation.ts file: `export const sdk = new NodeSDK({ traceExporter: new LangfuseExporter({`...

How to save only metadata

Hi, is it possible to store only metadata but not the content of the messages in the LangChain integration? Context: we still care about users' production metadata but cannot inspect the content of actual messages.
Solution:
Sure; for future reference: https://github.com/orgs/langfuse/discussions/3181. Marking this Q as solved, thank you for your response...
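As a rough illustration of one way to achieve this, assuming the Python decorator SDK's masking hook (see the linked discussion for the authoritative answer); the blanket redaction rule here is a placeholder:

```python
from langfuse.decorators import langfuse_context

def mask_fn(data):
    # Replace every traced input/output payload; metadata is left untouched.
    return "REDACTED"

# Register the mask once at startup; it applies to all traced data.
langfuse_context.configure(mask=mask_fn)
```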

Generation traces containing tool calls don't get carried over to "Test in playground"

We have some traces for calls to OpenAI involving tool use. We'd like to be able to iterate on the prompts via "Test in playground", but the whole tool-calls portion of our input payload doesn't get carried over to the playground. Any suggestions for how we can make this use case work?...

Generation renaming

How can I name each generation when using LangGraph agents?
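One approach worth trying: LangChain's `run_name` config, which the Langfuse callback picks up as the observation name. A minimal sketch, assuming a LangChain/LangGraph runnable (`llm` is a placeholder):

```python
from langfuse.callback import CallbackHandler

handler = CallbackHandler()

# run_name shows up as the generation's name in the trace.
named_llm = llm.with_config({"run_name": "planner-generation"})
result = named_llm.invoke("Plan the next step.", config={"callbacks": [handler]})
```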

Setting trace ID and parent observation ID with Python decorator SDK

I love the simplicity of the Python decorator SDK and would like to incorporate distributed tracing with it. In our setup, we have LLM apps that call Tools, with both types of microservices linked to the same Langfuse project. LLM apps pass the current Langfuse trace ID and observation ID to Tools when the LLM app decides to call tools. Currently we are able to set a custom trace ID by passing the langfuse_observation_id kwarg to the first method that is wrapped by @observe. However, we can't do the same for the parent observation ID: although it is possible to set it through the low-level SDK, it seems that if we choose to use decorators we just can't achieve the same. ...
Solution:
Thanks for raising this, can you please open an issue for this? We are happy to have a look into this to extend the functionality here: https://langfuse.com/issues
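For context, a sketch of the part that already works, the custom-trace-id handoff described in the question, assuming the decorator SDK (`incoming_trace_id` is a placeholder):

```python
from langfuse.decorators import observe

@observe()
def tool_entrypoint(payload: dict):
    # Runs under the trace id supplied by the caller.
    ...

# The LLM app passes its current trace id so both services share one trace.
tool_entrypoint({"query": "..."}, langfuse_observation_id=incoming_trace_id)
```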

Help to Retrieve Specific Prompt

Hello. I'm new to development and Langfuse. I'm trying to access a specific prompt from my AppSmith account using the REST API. When I use GET on https://cloud.langfuse.com/api/public/v2/prompts, I can retrieve a list of prompts successfully. But when I try to retrieve a specific prompt like this-...
Solution:
Hi @zacky8653, do you have a prompt version with the production label applied to it for this prompt name?
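For reference, a minimal sketch of fetching a single prompt by name via the public API, assuming HTTP Basic auth with the project's public and secret keys (prompt name and keys are placeholders):

```python
import requests

resp = requests.get(
    "https://cloud.langfuse.com/api/public/v2/prompts/my-prompt-name",
    auth=("pk-lf-...", "sk-lf-..."),  # public key as username, secret key as password
)
resp.raise_for_status()
print(resp.json())  # resolves the version labeled "production" by default
```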

Tags and metadata not visible in traces in LangChain (CrewAI)

Hi everyone, I am using Langfuse for monitoring my CrewAI agent calls. I am able to generate the traces using decorators, passing the tags with langfuse_context and passing that in callbacks. Below is my code example:
```python
from langfuse.decorators import langfuse_context, observe

@observe()
...
```

Get run results

Hi! Is it possible to get a specific run's results? When I use the langfuse_client.get_dataset_run endpoint it returns the correct run, but it lacks info such as avg latency or cost. Is it possible to get this from the API?
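If the endpoint itself doesn't return aggregates, one workaround is to compute them from the linked traces. A sketch, assuming the `get_dataset_run` method named above plus `fetch_trace` (dataset and run names are placeholders):

```python
from langfuse import Langfuse

langfuse = Langfuse()

run = langfuse.get_dataset_run(
    dataset_name="my-dataset", dataset_run_name="my-run"
)

# Each run item links to the trace produced for that dataset item.
latencies = [
    langfuse.fetch_trace(item.trace_id).data.latency
    for item in run.dataset_run_items
]
print(f"avg latency: {sum(latencies) / len(latencies):.2f}s")
```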

Downloading results

Is it possible to download all experiment results along with the associated input dataset items?

Langfuse capabilities

How many traces can Langfuse handle while still working properly?
Solution:
Langfuse works with hundreds of millions of traces; tens of millions are fast across the UI. We are currently making many changes in v3 to increase scalability a lot (especially fast analytical reads for the dashboards).

Langfuse vs Helicone

The Helicone integration seems to be a lot simpler and more lightweight, but I'm wondering if there are any downsides I'm not aware of?
Solution:
Proxy implementations make it very simple to trace LLM calls, whereas Langfuse traces asynchronously at the application level. Upside of a proxy: change the base path and you're done. Upside of application-level:...