Langfuse

Langfuse is the open source LLM Engineering Platform. This is our Community Discord.

support

self-host-support

get-support

feedback

feature-suggestion

self-host-discussion

announcements-releases

Prompt management + Langchain

Hi there, we're trying out the prompt management feature; is there a way to add the prompt that was used to the trace/session without manually calling the generate method, but by using the handler instead?
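
A rough sketch of one way this can work (assuming a recent SDK; the prompt name "story-prompt" and its {{topic}} variable are made up here): fetching the managed prompt and passing it in the Langchain template's metadata is what lets the callback handler link the resulting generation back to the prompt, without calling generate manually.

```python
from langchain_core.prompts import PromptTemplate
from langchain_openai import ChatOpenAI
from langfuse import Langfuse
from langfuse.callback import CallbackHandler

langfuse = Langfuse()        # reads LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY / LANGFUSE_HOST
handler = CallbackHandler()  # Langchain callback for tracing

# Fetch the managed prompt ("story-prompt" is a hypothetical prompt name).
managed_prompt = langfuse.get_prompt("story-prompt")

# get_langchain_prompt() converts {{var}} placeholders to Langchain's {var} format.
# Passing the prompt object in metadata links the generation to the managed prompt.
prompt = PromptTemplate.from_template(
    managed_prompt.get_langchain_prompt(),
    metadata={"langfuse_prompt": managed_prompt},
)

chain = prompt | ChatOpenAI()
chain.invoke({"topic": "observability"}, config={"callbacks": [handler]})
```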

Manual data seed

Hi folks! I am currently testing a feature in Langfuse which requires creating a few traces and scores on my local instance. Is there any way I can add them manually so that they show up on my dashboard?
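
A minimal sketch of seeding a local instance with the Python SDK (the names, keys and values below are placeholders):

```python
from langfuse import Langfuse

langfuse = Langfuse(
    public_key="pk-lf-...",        # placeholder keys
    secret_key="sk-lf-...",
    host="http://localhost:3000",  # local self-hosted instance
)

for i in range(5):
    trace = langfuse.trace(name=f"seed-trace-{i}", user_id="seed-user")
    trace.generation(
        name="seed-generation",
        model="gpt-3.5-turbo",
        input="Hello?",
        output="Hi there!",
        usage={"input": 10, "output": 5, "unit": "TOKENS"},
    )
    trace.score(name="quality", value=0.8)

langfuse.flush()  # make sure the batched events are sent before the script exits
```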

LCEL Batch

Hi 🙂 I'm facing some weird behavior when using an LCEL chain with batch() instead of invoke() ... has anyone (or the team) come across this already? Basically I'm running the chain on multiple inputs for an eval run. ```...
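
For reference, a minimal sketch of how batch() is usually wired up with the handler (the chain and inputs are placeholders): batch() accepts the same config as invoke(), so each input should show up as its own trace.

```python
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langfuse.callback import CallbackHandler

handler = CallbackHandler()

# Placeholder chain; the real eval chain goes here.
chain = (
    ChatPromptTemplate.from_template("Summarize: {text}")
    | ChatOpenAI()
    | StrOutputParser()
)

inputs = [{"text": "first document"}, {"text": "second document"}]

# batch() takes the same config as invoke(), so the handler is attached to every run.
results = chain.batch(inputs, config={"callbacks": [handler]})

handler.flush()  # send any queued events before the eval script exits
```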

Langfuse host without http protocol

Just FYI for someone like me who might search here in the future: if you're getting the error below on local runs, make sure that your URL is set to http://localhost:3000, not localhost:3000, i.e. don't drop the http protocol. Error message after enabling Langfuse debug mode:
Attempted to log "Langfuse Debug retry Retriable error: {"error":{"cause":{}},"name":"LangfuseFetchNetworkError"}, localhost:3000/api/public/spans, [...]
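
A minimal sketch of the equivalent explicit configuration with the Python client (keys are placeholders); the JS client's baseUrl option and the host environment variable need the scheme as well.

```python
from langfuse import Langfuse

langfuse = Langfuse(
    public_key="pk-lf-...",        # placeholder keys
    secret_key="sk-lf-...",
    host="http://localhost:3000",  # note the explicit http://, not just "localhost:3000"
)
```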

GO support for Langfuse

Hi, I'm interested in using the Langfuse API to create traces and spans from my Golang backend but I noticed while reading the API reference (https://langfuse.com/docs/api-reference#tag/Trace) that there was not an endpoint for creating traces using the API. Could you please provide guidance or assistance on how I can effectively integrate tracing functionalities into my application using the Langfuse API? Any support or recommendations would be greatly appreciated. Thank you!
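
Until there is an official Go SDK, one option is to call the HTTP API directly. A rough sketch is shown in Python below (easy to port to Go with net/http); the /api/public/ingestion endpoint, event types and payload shape are assumptions to verify against the current API reference.

```python
import uuid
from datetime import datetime, timezone

import requests

LANGFUSE_HOST = "https://cloud.langfuse.com"  # or your self-hosted URL
PUBLIC_KEY = "pk-lf-..."                      # placeholder keys
SECRET_KEY = "sk-lf-..."

trace_id = str(uuid.uuid4())
event = {
    "id": str(uuid.uuid4()),                              # event id, used for deduplication
    "type": "trace-create",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "body": {"id": trace_id, "name": "go-backend-request", "userId": "user-123"},
}

# Basic auth: public key as username, secret key as password.
resp = requests.post(
    f"{LANGFUSE_HOST}/api/public/ingestion",
    json={"batch": [event]},
    auth=(PUBLIC_KEY, SECRET_KEY),
)
resp.raise_for_status()
```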

Debug Langchain integration

Hi all, I still haven't managed to solve it. Any idea what could be wrong, please? I can see in the console that the LLM is producing output (see below). I am running v1.24.2. I am putting the LangfuseCallback in every model I create, in every chain, but I'm getting no output. ```json...
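
For reference, a minimal sketch of attaching the handler once at invocation time instead of on every model (assuming the Python CallbackHandler; the chain is a placeholder). debug=True makes the SDK log what it sends.

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langfuse.callback import CallbackHandler

handler = CallbackHandler(debug=True)  # debug=True logs what the SDK sends

# Placeholder chain; callbacks passed via config propagate to every step of it.
chain = ChatPromptTemplate.from_template("Tell me about {topic}") | ChatOpenAI()

chain.invoke({"topic": "tracing"}, config={"callbacks": [handler]})

handler.flush()  # ensure queued events are sent before the process exits
```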

langfuse-langchain - npm Package File ex...

There is a regression between 2.1.0 and 2.2.0 in the langfuse-langchain package. If you go back to 2.1.0 it should work fine for now. 2.2.0 started bundling the types again: https://socket.dev/npm/package/langfuse-langchain/files/2.1.0/lib vs https://socket.dev/npm/package/langfuse-langchain/files/2.2.0/lib...
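
Until a fixed release is out, pinning the older version (e.g. `npm install langfuse-langchain@2.1.0`, or an exact "2.1.0" in package.json) should sidestep the bundling regression.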

Hi all, I switched from langchain

Hi all, I switched from langchain RetrievalQAChain to a custom Runnable sequence and my tokens are no longer being tracked. Any hints what I need to do to make it work again?

Is it possible to search for number

Is it possible to search for number values (not strings) in my metadata?

Zero latency when using .end()

Is anything missing in my code that would cause the completion time for the "chat-completion" generations to be set to -0.00s?
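
One possible cause (a guess, assuming the v2 Python SDK): if the generation is only created after the model call has finished and then ended immediately, start and end time are nearly identical. A minimal sketch where .end() is called after the call returns:

```python
from langfuse import Langfuse

def call_llm() -> str:
    return "Hi!"  # placeholder for the actual model call

langfuse = Langfuse()
trace = langfuse.trace(name="chat")

# Create the generation *before* the model call so its start time is recorded here.
generation = trace.generation(
    name="chat-completion",
    model="gpt-3.5-turbo",
    input=[{"role": "user", "content": "Hello"}],
)

completion = call_llm()

# .end() records the end time now; start and end bracket the call, so latency is non-zero.
generation.end(output=completion)

langfuse.flush()
```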

Custom model costs

Yes, it is possible to provide the cost per token for Langfuse to calculate the costs of generations with a custom model. However, the documentation provided does not contain any information about how to do that. [...] If you have any further questions or need assistance with integrating cost calculation into Langfuse, I recommend reaching out to the founders directly for guidance. They will be able to provide the most accurate and up-to-date information.
The chat bot says I should reach out to you 👀 I'm using for one generation the Mixtral-8x7B-Instruct-v0.1 model (https://deepinfra.com/mistralai/Mixtral-8x7B-Instruct-v0.1). Is there an option in Langfuse to specify what the costs for the model are?...
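
A rough sketch of how usage and cost can be attached to a generation with the Python SDK; the input_cost/output_cost keys are an assumption based on recent SDK versions and should be verified against the docs. Alternatively, a custom model with per-token prices can be defined in the project settings so Langfuse derives the cost from the token counts.

```python
from langfuse import Langfuse

langfuse = Langfuse()
trace = langfuse.trace(name="mixtral-call")

trace.generation(
    name="completion",
    model="Mixtral-8x7B-Instruct-v0.1",
    input="Hello",
    output="Hi!",
    usage={
        "input": 120,   # prompt tokens
        "output": 40,   # completion tokens
        "unit": "TOKENS",
        # Hypothetical explicit cost in USD; key names to verify against the docs.
        "input_cost": 0.000032,
        "output_cost": 0.000011,
    },
)

langfuse.flush()
```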

Client side exceptions in dashboard

Nope, I don't see any errors in the logs. The other screens like traces and sessions are working fine.

user_id in callback handler

Hi, is it possible to pass the user_id to a CallbackHandler object?...
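
A minimal sketch, assuming a recent SDK where the CallbackHandler constructor accepts user_id (and session_id):

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from langfuse.callback import CallbackHandler

# user_id (and session_id) set here are attached to the traces created by this handler.
handler = CallbackHandler(user_id="user-123", session_id="session-abc")

chain = ChatPromptTemplate.from_template("Answer briefly: {question}") | ChatOpenAI()
chain.invoke({"question": "What is tracing?"}, config={"callbacks": [handler]})
```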

json-l for assistants

Hey! I reported this via Crisp but didn't get a response... maybe this is a better place to reach out? The JSONL export didn't include the last assistant response for me. Did anyone else run into this issue?...

Langchain, Azure OpenAI SDK changes

Hi Langfuse team & everyone, I apologise if this question doesn't make any sense, but I've just been stuck on it for quite a while now. I created a Langchain SQL Agent a while ago and use Langfuse to monitor it. I use Azure for this, and recently Azure changed the way its connection works. Ever since then, whenever my SQL Agent gets to the "Thought" phase, I get: ```Thought:ERROR:langfuse:'engine' Traceback (most recent call last): File "/workspaces/chat/.venv/lib/python3.11/site-packages/langfuse/callback.py", line 498, in __on_llm_action...

Langchain Output Parsers

Hello! I'm experimenting a bit with the Langchain integration and I noticed that it does not generate traces for output parsers. Do you have plans to implement that?

Nested generations / spans in traces

Hi everyone! I'm trying to use the trace and observation IDs to create a nested set of observations instead of using the objects in the Python SDK. I was able to use langfuse.generation(CreateGeneration(traceId=trace.id)) instead of trace.generation(CreateGeneration()) to create a generation inside a trace, but I'm having trouble creating observations under another observation. I tried the following, which didn't work. I was expecting the code to create a span inside the trace and a generation inside that span, but instead it creates a span and a generation directly inside the trace. Would appreciate some guidance on this. ``` langfuse = Langfuse( public_key=Config.fetch("langfuse-public-key"),...
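
A minimal sketch of the nesting, assuming a recent Python SDK where observations accept trace_id and parent_observation_id directly (the older pydantic-model style SDK may have an equivalent parent observation field):

```python
from langfuse import Langfuse

langfuse = Langfuse()

trace = langfuse.trace(name="nested-example")

# Span attached to the trace by id.
span = langfuse.span(trace_id=trace.id, name="outer-span")

# Generation attached to the span (not just the trace) via parent_observation_id.
generation = langfuse.generation(
    trace_id=trace.id,
    parent_observation_id=span.id,
    name="inner-generation",
    model="gpt-3.5-turbo",
)

generation.end(output="done")
span.end()
langfuse.flush()
```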

Get tokens by users

Hi all, is it possible to get the token usage of my users via the typescript SDK or http API inside my app?
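
A very rough sketch over the HTTP API (the endpoints, the userId filter and the usage field names are assumptions to verify against the API reference): list traces for a user, then sum token usage from each trace's observations.

```python
import requests

HOST = "https://cloud.langfuse.com"  # or your self-hosted URL
AUTH = ("pk-lf-...", "sk-lf-...")    # placeholder keys

# List traces for one user (pagination omitted for brevity).
traces = requests.get(
    f"{HOST}/api/public/traces",
    params={"userId": "user-123"},
    auth=AUTH,
).json()["data"]

total_tokens = 0
for t in traces:
    # Fetch the full trace to get its observations with token usage.
    detail = requests.get(f"{HOST}/api/public/traces/{t['id']}", auth=AUTH).json()
    for obs in detail.get("observations", []):
        usage = obs.get("usage") or {}
        # Field names differ between API versions; check the reference.
        total_tokens += obs.get("totalTokens") or usage.get("total") or 0

print(f"user-123 used ~{total_tokens} tokens")
```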

Thanks, that would be great. For now, I

Thanks, that would be great. For now, I might be able to fork langserve and add the Langfuse CallbackHandler.

Missing observations

I keep running the same script and 50% of the time it misses spans/generations.
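
One common cause in short-lived scripts (whether it applies here is a guess): the SDK batches events and sends them asynchronously, so flushing before the process exits makes sure everything is delivered. A minimal sketch:

```python
from langfuse import Langfuse
from langfuse.callback import CallbackHandler

langfuse = Langfuse()
handler = CallbackHandler()

# ... run chains / create spans and generations here ...

handler.flush()   # flush events queued by the Langchain handler
langfuse.flush()  # flush events queued by the SDK client
```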