Langfuse

Langfuse is the open source LLM Engineering Platform. This is our Community Discord.

support

self-host-support

get-support

feedback

feature-suggestion

self-host-discussion

announcements-releases

LangChain+Ollama+Langfuse Not Recording Token Usage

Hello, I am using ChatOllama from LangChain to send LLM requests to a local Ollama server, and I am integrating Langfuse with LangChain to trace the requests. The generation requests are traced successfully, including the model's input and output, but the token usage is always 0. I attached a screenshot of one trace showing zero token usage.

I checked the output of LangChain's invoke method and the usage data is in the response; it is accessible via response.usage_metadata["input_tokens"] and response.usage_metadata["output_tokens"]. I also tried langfuse_context.update_current_observation(usage={"input": response.usage_metadata["input_tokens"], "unit": "TOKENS"}), but it still shows zero...
Solution:
Thanks for reporting this. Can you open an issue on GitHub for this? https://langfuse.com/issues
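As a workaround until the callback handler picks up Ollama's token counts, the usage can be attached by hand from a decorated function. A minimal sketch, assuming the Langfuse v2 decorator API and a placeholder model name (llama3):

```python
from langchain_community.chat_models import ChatOllama
from langfuse.decorators import observe, langfuse_context

llm = ChatOllama(model="llama3")  # placeholder local model name

@observe(as_type="generation")
def generate(prompt: str) -> str:
    response = llm.invoke(prompt)
    # The callback handler records 0 tokens here, so pass LangChain's
    # usage_metadata to the current observation manually.
    langfuse_context.update_current_observation(
        model="llama3",
        usage={
            "input": response.usage_metadata["input_tokens"],
            "output": response.usage_metadata["output_tokens"],
            "unit": "TOKENS",
        },
    )
    return response.content

print(generate("Why is the sky blue?"))
```

Note that the snippet in the question only passed "input", so the output side would still read zero; both counts need to be in the usage dict.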

null value traces

Hello, I have a problem with null traces (input, output). I tried trace.end, but it says there is no end method on trace. I changed it to trace.update, but now I do not have trace names on the dashboard...
Solution:
Traces do not have an end method: https://langfuse.com/docs/sdk/python/low-level-sdk
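For reference, a minimal sketch with the v2 low-level SDK, assuming credentials come from the LANGFUSE_* environment variables. Setting the name and input at creation time keeps the trace named on the dashboard, and the output is attached later with update(); only spans and generations have an end() method:

```python
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, LANGFUSE_HOST

# Name and input are set at creation, so the trace appears with a
# name on the dashboard instead of showing null values.
trace = langfuse.trace(name="my-pipeline", input={"question": "What is Langfuse?"})

# ... run the pipeline ...

# Traces have update(), not end(); attach the output here.
trace.update(output={"answer": "An open source LLM engineering platform."})

langfuse.flush()  # send buffered events before the process exits
```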

Self-hosted models via API

Hello, could you please tell me how to evaluate models that are self-hosted behind an API?
Solution:
You can use any model with Langfuse. You can log the usage either via the low-level SDKs or, e.g., via the Python decorator. Find an example here: https://langfuse.com/docs/sdk/python/decorators#log-any-llm-call
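Besides the decorator example in the linked docs, the low-level SDK route looks roughly like this. A minimal sketch, where call_my_model() is a hypothetical stand-in for the self-hosted API and the token counts are placeholders:

```python
from langfuse import Langfuse

langfuse = Langfuse()


def call_my_model(prompt: str) -> str:
    # Hypothetical placeholder for an HTTP call to the self-hosted API.
    return "The sky scatters blue light more strongly than red."


trace = langfuse.trace(name="self-hosted-eval")
generation = trace.generation(
    name="custom-api-call",
    model="my-self-hosted-model",  # any free-form model name is accepted
    input="Why is the sky blue?",
)

completion = call_my_model("Why is the sky blue?")

# Unlike traces, generations do have end(); report output and usage explicitly.
generation.end(
    output=completion,
    usage={"input": 12, "output": 34, "unit": "TOKENS"},  # placeholder counts
)
langfuse.flush()
```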

Introducing the new forum channel for support via Discord

Hi everyone, we replaced the previous #get-support channel with this forum channel to: 1. Better monitor which questions are still open...