support
self-host-support
get-support
feedback
feature-suggestion
self-host-discussion
announcements-releases
Turn on and off the Langfuse callback in LangChain
How to use langfuse with LangGraph Studio
not able to calculate the custom model cost
What is the best way of separating dev, staging, and prod environments?
LangchainCallbackHandler custom input/output?
Can't update a trace between Lambdas using the custom trace ID.
langfuse-k8s example
...postgresql.deploy to true, and I was wondering how this maps back to the example provided. Thank you! 😄
The name of the used model
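For the langfuse-k8s question above, a minimal values override sketch, assuming the chart exposes the postgresql.deploy flag mentioned in the thread to toggle a bundled in-cluster PostgreSQL (verify the exact key against the chart's own values.yaml):

```yaml
# values.yaml override for the langfuse-k8s Helm chart (sketch; flag name
# taken from the thread, not verified against the chart)
postgresql:
  deploy: true   # deploy a bundled PostgreSQL instead of connecting to an external one
```

Applied with something like `helm install langfuse langfuse/langfuse -f values.yaml` per the langfuse-k8s README.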
Vercel AI SDK with Svelte
How to save only metadata
Generation traces containing tool calls don't get carried over to "Test in playground"
..."Test in playground", but the whole tool-calls portion of our input payload doesn't get carried over to the playground. Any suggestions for how we can make this use case work?
generation renaming
Setting trace ID and parent observation ID with Python decorator SDK
...langfuse_observation_id kwarg to the first method that is wrapped by @observe. However, we can't do the same for parent_observation_id, which we also want to set. Although it is possible to set it through the low-level SDK, it seems that if we choose to use decorators we just can't achieve the same.
Help to Retrieve Specific Prompt
...https://cloud.langfuse.com/api/public/v2/prompts with "GET", I can retrieve a list of prompts successfully. But when I try to retrieve a specific prompt like this-...
tags and metadata not visible in traces in LangChain (CrewAI)
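For the prompt-retrieval question: the list endpoint is the URL quoted in the thread; fetching a single prompt typically appends the prompt's name as a path segment. The version query parameter and the exact single-prompt route are assumptions to verify against the Langfuse API reference.

```python
from urllib.parse import quote

# List endpoint, as given in the thread.
BASE = "https://cloud.langfuse.com/api/public/v2/prompts"

def prompt_url(name, version=None):
    """Build the (assumed) single-prompt URL: BASE/<url-encoded name>."""
    url = f"{BASE}/{quote(name, safe='')}"
    if version is not None:
        # Assumed optional query parameter for pinning a prompt version.
        url += f"?version={version}"
    return url

print(prompt_url("movie-critic"))
# The actual request would authenticate with the project's API keys, e.g.:
# requests.get(prompt_url("movie-critic"), auth=(PUBLIC_KEY, SECRET_KEY))
```

URL-encoding the name matters when prompt names contain spaces or slashes.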
Get runs results
...langfuse_client.get_dataset_run endpoint it returns the correct run, but it lacks the info about e.g. avg latency or cost. Is it possible to get it from the API?
downloading results
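For the runs-results question above: if get_dataset_run returns the run without aggregate metrics, one workaround is to aggregate client-side over the observations linked to the run's items. The dicts below are hypothetical stand-ins for fetched observation data, not the SDK's actual response shape.

```python
def run_aggregates(observations):
    """Compute avg latency and total cost from per-observation records."""
    latencies = [o["latency"] for o in observations if o.get("latency") is not None]
    costs = [o["cost"] for o in observations if o.get("cost") is not None]
    return {
        "avg_latency": sum(latencies) / len(latencies) if latencies else None,
        "total_cost": sum(costs),
    }

# Hypothetical observation records for two run items.
obs = [
    {"latency": 1.5, "cost": 0.5},
    {"latency": 0.5, "cost": 0.25},
]
print(run_aggregates(obs))  # {'avg_latency': 1.0, 'total_cost': 0.75}
```

Guarding against missing fields keeps the aggregation robust when some observations report no latency or cost.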
langfuse capabilities
Langfuse vs Helicone