Langfuse

Langfuse is the open source LLM Engineering Platform. This is our Community Discord.


support

self-host-support

get-support

feedback

feature-suggestion

self-host-discussion

announcements-releases

Zet - Move the Add to dataset button to the top...

Move the "Add to dataset" button to the top of the modal in case of long input, so we don't have to scroll to the bottom.

alexrosen - This seems silly, but it kind of bu...

This seems silly, but it kind of bugs me to be on the "Hobby" plan. It doesn't bug me in a way that makes me want to upgrade. It's more of a "They're telling me I'm not serious" way. The nature of my work may keep me on the free plan forever, but having "Hobby" persistently show in the UI next to my org name is off-putting. If/when I upgrade it will not be because I went from being a hobbyist to a pro. It will be because the solution has proven its value and my project requires services that are not available on the free plan. My suggestion is that when you're bored with the difficult work you need to do to get to 3.0, take a minute and replace "Hobby" with "Starter" or "Free" (e.g., AWS "Free Tier", Posthog "Totally free" plan). It's a low priority and not a big deal, but I thought you'd want to know, and I want you to be as successful as possible....

alexrosen - In Prompt Management, Text prompt i...

In Prompt Management, Text prompt is the primary/default option for new prompts. That made me wonder if I should be using text prompt instead of chat prompt. After consulting the OpenAI docs, I confirmed that chat prompt is the way to go (at least for them). My suggestion is to make Chat the default prompt type when adding new prompts, especially if that is what is most used now....

alexrosen - The example on Prompt Management->G...

The example on Prompt Management->Get Started shows getting a compiled chat prompt, but it doesn't show how that gets used with the client to get a completion. Is there a messages property on the compiled prompt? My suggestion is to add an example on Get Started that shows a chat prompt being compiled and then used to get a completion....
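A sketch of what this message asks for: a compiled chat prompt feeding a completion call. The assumption here (not confirmed by the message) is that compiling a chat prompt yields a list of `{"role", "content"}` message dicts; the client call is left as a comment since it needs credentials:

```python
# Stand-in for the output of prompt.compile(...) on a chat prompt
# (assumed shape: a list of role/content dicts):
compiled_messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Summarize: Langfuse is an LLM engineering platform."},
]

# With an OpenAI-style client, the compiled list would presumably be
# passed as-is (not runnable here without credentials):
# completion = client.chat.completions.create(
#     model="gpt-4o", messages=compiled_messages
# )

for message in compiled_messages:
    print(message["role"])
```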

alexrosen - After I got tracing working, I got ...

After I got tracing working, I got interested in doing more with langfuse, particularly using your Prompt Management capability for a new prompt I need to add to my application. I am going to create a simple function that combines get_prompt and compile. I was a bit frustrated that I could not find a doc describing the compile method. I searched the docs and the reference and did not find anything. I wanted a doc or example to see how to use compile with more than one variable being substituted. I want my function to take a dictionary of variables to be used in substitutions in compile, but wasn't sure if I could just add a dictionary to compile. I think I can, but a reference would make it so I'm confident vs. doing trial and error. My suggestion is to add an example of compile with more than one variable being substituted and to either add compile to the reference or make it show up in search....
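The multi-variable case the message asks about can be illustrated with a local stand-in: a function that substitutes `{{name}}` placeholders from a dict, mimicking what `prompt.compile(**variables)` is expected to do. This is a sketch of the behavior, not the Langfuse implementation:

```python
import re

def compile_template(template: str, variables: dict) -> str:
    """Substitute {{name}} placeholders from a dict of variables.
    Unknown placeholders are left untouched."""
    def replace(match: re.Match) -> str:
        name = match.group(1)
        return str(variables[name]) if name in variables else match.group(0)
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", replace, template)

template = "Translate {{text}} into {{language}}."
print(compile_template(template, {"text": "hello", "language": "French"}))
# → Translate hello into French.
```

With the real client, the equivalent would presumably be `langfuse.get_prompt(name).compile(**variables)`, i.e. the dict unpacked into keyword arguments.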

alexrosen - I got setup a week or two ago for t...

I got setup a week or two ago for tracing, but it took me some extra time because I use Azure OpenAI. A search for Azure gets you to a page that is about langchain on Azure. I needed to get to this page to see the alternate imports: https://langfuse.com/docs/integrations/openai/python/get-started. I then needed to decipher what you meant by alternate imports. Maybe I'm dumb. My suggestion is to get those alternate imports, and an explanation, onto the Tracing quickstart page. That would have saved me 30(?) minutes of hunting around and troubleshooting errors. In the end, the value I got from tracing was enough to leave me happy....
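The "alternate imports" this message refers to swap the standard OpenAI client for Langfuse's drop-in wrapper. A minimal sketch of the Azure setup, assuming an `AzureOpenAI` export from `langfuse.openai` (shown as a comment so the sketch stays self-contained); all key values are placeholders:

```python
import os

# Langfuse credentials should be in the environment before the wrapper
# is imported (placeholder values shown):
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-placeholder"
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-placeholder"

# Assumed drop-in import for Azure (requires the langfuse package,
# so it is left as a comment here):
# from langfuse.openai import AzureOpenAI
# client = AzureOpenAI(
#     azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
#     api_version="2024-02-01",
#     api_key=os.environ["AZURE_OPENAI_API_KEY"],
# )
```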

ASP - do we have support of langfuse with auto...

Do we have support for Langfuse with AutoGen?

ASP - can someone help me with an example how t...

Can someone help me with an example of how to capture the cost in the Langfuse UI for every conversation ID?

PP - Hi Langfuse team, I've been enjoying Langf...

Hi Langfuse team, I've been enjoying Langfuse so far—it's been a great tool! I wanted to suggest a potential improvement for the trace feature. It would be incredibly useful to have the ability to filter traces by input or output. For example, if we receive feedback from users about an LLM app, such as instances of hallucination, we could easily search for the specific input that triggered the issue, helping us identify and address the problem more efficiently.

Rounding on pricing

Custom model pricing precision seems insufficient: for example, Mistral on Amazon Bedrock costs $0.00045/1K tokens, but it gets rounded to $0.0004/1K tokens in Langfuse.
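The loss described above is reproducible with fixed four-decimal precision; whatever rounding mode Langfuse uses internally is an assumption here, but four decimal places per 1K tokens cannot hold $0.00045 (half-even rounding shown as one example):

```python
from decimal import Decimal, ROUND_HALF_EVEN

price = Decimal("0.00045")  # Mistral on Amazon Bedrock, per 1K tokens
rounded = price.quantize(Decimal("0.0001"), rounding=ROUND_HALF_EVEN)
print(rounded)  # 0.0004 -- the final 5 is dropped
```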

Hi there, I think there is a tiny

Hi there, I think there is a tiny problem with the types defined in the CallbackHandler constructor (see attached screenshot). If the root parameter is specified, it must be different from undefined. However, the current type definitions allow the following:
```
new CallbackHandler({ root: undefined, enabled: true });
```
...

Scope of api keys

Hey Team, I'm reaching out to gain a better understanding of the rationale behind implementing API keys at the project level rather than at the application level. Could you share some insights into the decision-making process for this approach?...

missing token counts

Hey @Marc - I have integrated Langfuse with my LLM run callback. I can see the traces getting logged on the dashboard, but not the token data. Is there any extra step I need to take to get token details?

Hey Team, I have a suggestion for the

Hey Team, I have a suggestion for the new prompt management system: In my use case, I need LangChain prompt templates, so it would be awesome if there was an option to directly return the template in LangChain format. Of course one can do this manually, but it is extra work that a lot of people are probably doing :)
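The "extra work" mentioned here is mainly translating Langfuse's `{{var}}` placeholders into LangChain's `{var}` syntax before building a template. A minimal sketch of that manual step (a plain string rewrite, not an official converter):

```python
import re

def to_langchain_template(langfuse_template: str) -> str:
    """Rewrite Langfuse-style {{var}} placeholders as LangChain-style
    {var} so the string can feed e.g. PromptTemplate.from_template."""
    return re.sub(r"\{\{\s*(\w+)\s*\}\}", r"{\1}", langfuse_template)

print(to_langchain_template("Summarize {{text}} in {{language}}."))
# → Summarize {text} in {language}.
```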

debug callback handler

Hi @Marc, I'm using the Langfuse callback handler in my project in an LLM chain. I notice data from different LLM calls being logged under the same trace.

Openai integration envs

Hey Langfuse team, I just want to ask if you could improve the implementation of the Python OpenAI import logic. Currently the environment variables must be defined before the import:
```
os.environ["LANGFUSE_PUBLIC_KEY"] = ""
os.environ["LANGFUSE_SECRET_KEY"] = ""
from langfuse.openai import openai
```
...

Langchain Integration [Huggingface]

The first 10 minutes test works, which is nice. I have a small sample using LangChain to connect to OpenAI and Hugging Face. It seems the Hugging Face version does not work completely: I get an error and the generation is not logged like it is with OpenAI. This is the error:
```
ERROR:root:'model_name'
```
...