FlorDonnaSanders:
I noticed that support for mustache variable detection in the Langfuse UI is currently quite limited.
For example, for a conditional (section) statement like this:
{{#some_variable}}It exists: {{.}}{{/some_variable}}
the variable some_variable is not listed in the UI and is not usable in the playground.
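For context on what such detection would need to handle, here is a minimal stand-alone sketch that also picks up section/conditional tags; this is not Langfuse's actual parser, just an illustration of the idea:

```python
import re

# Matches plain tags {{name}} and section openers {{#name}} / {{^name}};
# closing tags {{/name}} are intentionally not matched by the sigil class.
TAG_RE = re.compile(r"\{\{\s*([#^]?)\s*([\w.]+)\s*\}\}")

def extract_variables(template: str) -> set[str]:
    """Collect variable names from a mustache template, including
    section variables. {{.}} (the implicit iterator) is skipped."""
    names = set()
    for _sigil, name in TAG_RE.findall(template):
        if name != ".":
            names.add(name)
    return names

print(extract_variables("{{#some_variable}}It exists: {{.}}{{/some_variable}}"))
```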
Hey, any updates on the Full-Text Search for the Inputs/Outputs?
FlorDonnaSanders:
It would be nice to be able to set a score's timestamp equal to the trace timestamp (or any other) over the API, for those of us running evaluations offline.
https://github.com/orgs/langfuse/discussions/3194...
PP:
One suggestion for enhancing the Dataset Run feature involves improving prompt performance analysis on test cases within datasets.
- It would be beneficial to have a way to easily filter and identify mistakes from previous dataset runs. For instance, if we're using an LLM for text classification, it would be valuable to know the specific types of errors the model is making. Currently, there is no filtering UI available for this purpose in the Dataset Run feature.
- Additionally, while scoring is currently possible at the tracing level, it would be highly advantageous to enable scoring at the dataset run level. For example, in a text classification task with multiple classes, having precision and recall metrics for each class would help pinpoint where errors are occurring. Achieving this level of insight is challenging when scoring is only supported at the tracing level....
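The per-class metrics described above are straightforward to compute once labels and predictions are available per dataset item; here is a self-contained sketch using the standard precision/recall definitions (this is not an existing Langfuse feature, just an illustration of the requested run-level scoring):

```python
from collections import Counter

def per_class_precision_recall(y_true, y_pred):
    """Per-class precision and recall for a multi-class classification
    run. In the requested feature these would be computed over all
    items of a dataset run and attached as run-level scores."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted p, but it was wrong
            fn[t] += 1  # true class t was missed
    classes = set(y_true) | set(y_pred)
    return {
        c: {
            "precision": tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0,
            "recall": tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0,
        }
        for c in sorted(classes)
    }

metrics = per_class_precision_recall(
    ["spam", "ham", "spam", "ham"],
    ["spam", "spam", "spam", "ham"],
)
```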
cloakdrone:
Would love a chart that shows tag consumption.
coolrs:
Any plan for Langfuse to expose metrics in Prometheus format?
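For reference, the Prometheus text exposition format the question refers to looks like this; the langfuse_* metric names below are invented for illustration and no such endpoint exists in Langfuse today:

```python
def render_prometheus(metrics: dict[str, float], prefix: str = "langfuse") -> str:
    """Render a flat dict of gauge values in the Prometheus text
    exposition format (# TYPE line followed by name/value pairs)."""
    lines = []
    for name, value in metrics.items():
        full = f"{prefix}_{name}"
        lines.append(f"# TYPE {full} gauge")
        lines.append(f"{full} {value}")
    return "\n".join(lines) + "\n"

output = render_prometheus({"traces_total": 1234, "observations_total": 56789})
print(output)
```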
Hey! A very simple feature to implement, but still useful (at least for me), would be a "get_variables" method for prompts, as well as a "raise_error" argument on the compile method, so that if not all variables are filled an error (or at least a warning) is raised.
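A toy sketch of what the proposed API could look like; get_variables and raise_error are the names suggested in the message above, not existing Langfuse SDK methods:

```python
import re

VAR_RE = re.compile(r"\{\{\s*([\w.]+)\s*\}\}")

class PromptTemplate:
    """Hypothetical prompt wrapper illustrating the suggestion."""

    def __init__(self, template: str):
        self.template = template

    def get_variables(self) -> list[str]:
        """Return the distinct variable names used in the template."""
        return sorted(set(VAR_RE.findall(self.template)))

    def compile(self, raise_error: bool = False, **values) -> str:
        """Fill in variables; optionally raise if any are left unfilled.
        Unfilled placeholders are kept verbatim when raise_error=False."""
        missing = [v for v in self.get_variables() if v not in values]
        if missing and raise_error:
            raise ValueError(f"unfilled prompt variables: {missing}")
        return VAR_RE.sub(
            lambda m: str(values.get(m.group(1), m.group(0))), self.template
        )

p = PromptTemplate("Hello {{name}}, today is {{day}}.")
```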
Aggregating user feedback
I'm leading the AI Assistant project for my employer, and they're already asking for a way to get summaries of the feedback collected in Langfuse. It sounds like that's something I could build on top of the API, but I'd love to see it baked into Langfuse itself.
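A summary like this can indeed be built client-side today. A minimal sketch, assuming feedback scores have already been fetched from the public API and have a simple name/value shape (the record shape here is an assumption, not the exact API response):

```python
from statistics import mean

def summarize_feedback(scores: list[dict]) -> dict:
    """Aggregate feedback scores by score name into count and mean.
    `scores` is assumed to be a list of {"name": ..., "value": ...}
    records, e.g. extracted from a scores API response."""
    by_name: dict[str, list[float]] = {}
    for s in scores:
        by_name.setdefault(s["name"], []).append(float(s["value"]))
    return {
        name: {"count": len(vals), "mean": round(mean(vals), 3)}
        for name, vals in by_name.items()
    }

summary = summarize_feedback([
    {"name": "thumbs", "value": 1},
    {"name": "thumbs", "value": 0},
    {"name": "thumbs", "value": 1},
])
```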
OSS Observability for Instructor
llmql should work via the decorator; no dedicated integration is planned yet. Please add it to langfuse.com/ideas to help track the overall interest in this.
jxnl/instructor wraps the OpenAI SDK, thus it should be easy to monitor with the OpenAI integration + decorator. See a very brief example here (not using the decorator though): https://langfuse.com/docs/integrations/instructor
PrefectHQ/marvin does not seem to support custom OpenAI clients; I'd start with tracking it via the decorator, and please add to langfuse.com/ideas if you have an idea for a more native implementation...
Langfuse craps out when there is an error with groq.RateLimitError: Error code: 429 - {'error': {'message': 'Rate limit reached for model llama3-70b-8192 in organization org_BLAH on tokens per minute (TPM): Limit 7000, Used 1607, Requested ~47333. Please try again in 5m59.484999999s. Visit https://console.groq.com/docs/rate-limits for more information.', 'type': 'tokens', 'code': 'rate_limit_exceeded'}}
Giving up execute_task_with_backoff(...) after 3 tries (langfuse.request.APIError: Invalid JSON (400): None)
Giving up execute_task_with_backoff(...) after 3 tries (langfuse.request.APIError: Invalid JSON (400): None)
Giving up execute_task_with_backoff(...) after 3 tries (langfuse.request.APIError: Invalid JSON (400): None)...
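The 429 above is a classic rate-limit failure. The caller-side retry pattern looks roughly like this; a generic sketch with a stand-in exception class, not a reproduction of Langfuse's internal execute_task_with_backoff:

```python
import time

def with_backoff(fn, retries: int = 3, base: float = 1.0):
    """Retry fn with exponential backoff (base, 2*base, 4*base, ...).
    Re-raises on the final attempt."""
    for attempt in range(retries):
        try:
            return fn()
        except RuntimeError:  # stand-in for e.g. groq.RateLimitError
            if attempt == retries - 1:
                raise
            time.sleep(base * (2 ** attempt))

# Demo: fail twice, then succeed.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("429 rate limit")
    return "ok"

result = with_backoff(flaky, base=0.01)
```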
Hey. Firstly thank you for all the work on Langfuse.
Is there a way, when using Langfuse with LangChain, to exclude some or all of the inputs & outputs from a trace? I am dealing with sensitive data, so I don't want to log it in production. Looking through the docs, I'm not sure whether this feature exists?...
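One possible stopgap is masking sensitive fields client-side before anything reaches a tracing backend. A generic redaction sketch; the key names are assumptions for illustration, and whether Langfuse's LangChain CallbackHandler supports this natively is exactly the open question here:

```python
def redact(obj, keys=("input", "output", "content")):
    """Recursively replace values of sensitive keys with a placeholder,
    leaving all other structure intact."""
    if isinstance(obj, dict):
        return {
            k: "[REDACTED]" if k in keys else redact(v, keys)
            for k, v in obj.items()
        }
    if isinstance(obj, list):
        return [redact(v, keys) for v in obj]
    return obj

cleaned = redact({"input": "SSN 123-45-6789", "metadata": {"model": "gpt-4o"}})
print(cleaned)
```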
@Marc is there a way langfuse's CallbackHandler can capture the necessary traces generated when using langgraph, similar to langsmith? Screenshots attached. Both langfuse & langsmith capture a lot of unwanted information with null values, which ideally needs to be omitted...
Prompt feature: I have translated my prompts into 100+ languages. The best way is to create a new prompt for each language, right? Like "my-prompt-name-de", "my-prompt-name-en".
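Under that one-prompt-per-language naming scheme, client code can compose the prompt name and fall back to a default language when a translation is missing. A small sketch; the naming scheme is from the question, while the fallback logic is an added assumption, not SDK behavior:

```python
def prompt_name(base: str, lang: str, available: set[str], default: str = "en") -> str:
    """Build a per-language prompt name like 'my-prompt-name-de';
    fall back to the default language if that name is not available."""
    candidate = f"{base}-{lang}"
    return candidate if candidate in available else f"{base}-{default}"

available = {"my-prompt-name-en", "my-prompt-name-de"}
print(prompt_name("my-prompt-name", "de", available))
```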
Low priority request: when I open the traces page, it takes 28 seconds to load. Would be nice if the loading were faster.
Flexible cost calculation
I understand you have in-house cost calculations. But what about models you don't support? Where should I store the cost of generations once I calculate it? Having cost in two different tables/places in the backend will make cost tracking over time very hard.
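The client-side calculation for an unsupported model is straightforward; where to persist the result is the open question raised above. A sketch with a placeholder price table (model name and prices are invented, not Langfuse-maintained figures):

```python
# Hypothetical prices in USD per 1M tokens.
PRICES = {"my-custom-model": {"input": 0.50, "output": 1.50}}

def generation_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Compute the cost of one generation from token counts and a
    per-million-token price table."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

cost = generation_cost("my-custom-model", 12_000, 4_000)
print(cost)
```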
OpenAI Assistants
Hi, I am an avid user of Langfuse and have greatly benefited from its features in debugging.
With the recent launch of OpenAI's beta Assistants API, I am curious whether there is any Python SDK support available (or in progress) for it. I have checked both GitHub and Discord but haven't found any relevant information.
Thanks in advance and looking forward to utilizing it more extensively....
Multi-modal
I was wondering if there are any plans to support multimodal (images/videos) generative content? In addition, the content could be stored externally, such as in S3.
You can also get a LangChain handler for a span: span.get_langchain_handler()