Which version are you using? The "run_id not found" issue was fixed last week.
My local instance of Langfuse is showing null for input, output, and metadata on the top trace. If I dig down, I do see the outputs. I'm using a simple LangChain implementation to run an LLM (code in thread). Am I missing something?
I'm having issues with the `api/public/dataset-run-items` endpoint not returning trace_ids. `DatasetRunItem` requires a trace_id, but they don't appear in the response. Stacktrace in thread. Any ideas on how to fix this?...
UPDATE: Issue resolved. Didn't run the DB migrations.

n8n
Hello,
I'm just discovering Langfuse, and I'd like to know if I can use it with n8n (as an HTTP node). n8n would handle the calls to the models, and the responses would then be forwarded to Langfuse for evaluation?...
Seeing a lot of httpx timeouts when calling `langfuse_callback_handler.flush()` with LlamaIndex.

View traces
Is there a way to view all traces in a project? It appears that you have to select specific users to actually see traces.
Dev/staging/prod
Hello team! I just wanted to double-check the best practice for separating local/dev/prod environments in Langfuse. As far as I understand, the recommended approach is to use tags for that. Would creating a separate project per environment make sense too? I'd like to understand the pros and cons of that approach.
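If you do go with separate projects, one common setup is to keep the application code identical and switch projects purely via environment variables, since each Langfuse project has its own API key pair. A sketch with placeholder keys (the values below are made up):

```shell
# Each Langfuse project has its own key pair; point each deployment at its own project.
# dev deployment (staging/prod would export their own pair instead):
export LANGFUSE_PUBLIC_KEY="pk-lf-dev-placeholder"
export LANGFUSE_SECRET_KEY="sk-lf-dev-placeholder"
export LANGFUSE_HOST="https://cloud.langfuse.com"   # or your self-hosted URL
```

With separate projects, tags can then be reserved for finer-grained filtering within a single environment.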
DB connections aren’t reused when self hosting Langfuse
On further inspection, this sharp increase in DB connections from the Langfuse instance has occurred in the past too. We only noticed the issue now because of the error logs.
Hello, I have an application in which I am building a base class for a task. The application talks to an LLM and can use both LangChain and LlamaIndex, and I'd like to have a lot of control over the Langfuse traces, so I need to go down a level. For example, one problem is LangChain chains, which wrap everything in a RunnableSequence whose invoke triggers the whole sequence: how would I handle the spans in this case, since the processing happens hidden from me? Is the best way to not use...
Python sdk issue
Hello folks, I need some help; I ran into an error this morning:

```
<ipython-input-5-c6633758d278> in get_traces(name, limit, user_id, order_by)
      4
      5 while True:
----> 6     response = langfuse.client.trace.list(
      7         name=name, page=page, user_id=user_id, order_by=order_by...
```
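For context, the surrounding loop is presumably paginating over the trace list. A minimal sketch of that pattern, with a stub fetcher standing in for the real `langfuse.client.trace.list(...)` call (no network or auth here; `fetch_page` and the dict shapes are illustrative):

```python
def get_traces(fetch_page, limit=50):
    """Collect traces across pages until a short or empty page is returned.

    fetch_page(page) -> list of trace dicts; stands in for the real
    langfuse.client.trace.list(...) call (network and auth omitted).
    """
    traces, page = [], 1
    while True:
        batch = fetch_page(page)
        traces.extend(batch)
        if len(batch) < limit:  # last page reached
            break
        page += 1
    return traces

# Stub: two full pages of 50 items, then a final page of 3.
pages = {1: [{"id": i} for i in range(50)],
         2: [{"id": i} for i in range(50, 100)],
         3: [{"id": i} for i in range(100, 103)]}
all_traces = get_traces(lambda p: pages.get(p, []), limit=50)
```

The stop condition (a page shorter than the limit) avoids one extra empty request; checking for an empty batch instead also works.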
The following is the log output when `debug=True` is set during instantiation of the Langfuse object in the code snippet.
Scores are still not received by the Langfuse server.
```
Evaluating: 100%|██████████| 2/2 [00:09<00:00, 4.69s/it]
[2024-02-12T15:10:05+05:30] (ecs/nlq-worker/15e524848f864240a5f73e468d5407fd) DEBUG:langfuse:Creating score {'id': '5c8c6162-4e07-4b9a-b12c-0f632d93fc1f', 'trace_id': '8885ab98-9d0e-446e-9382-7e7ee23c697d', 'observation_id': None, 'name': 'faithfulness', 'value': 1.0, 'comment': None}...
```
Hi Denis, currently redaction does not work well if you aim for the automated usage/cost analysis in Langfuse. Do you have access to the usage unit (e.g. tokens)? Then it'd be easy to still retain price calculations in Langfuse while redacting some of the input/output. Enterprise teams in particular do this frequently for production environments.
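A sketch of that suggestion: redact the input/output text client-side before sending, but keep the usage counts so Langfuse can still compute cost. The dict shape here (keys like `usage` and `unit`) is an assumption for illustration, not the exact SDK schema:

```python
def redact_generation(gen: dict) -> dict:
    """Strip input/output text but keep token usage so cost analysis still works."""
    redacted = dict(gen)  # shallow copy; the original record stays untouched
    redacted["input"] = "[REDACTED]"
    redacted["output"] = "[REDACTED]"
    # usage (e.g. {"input": 812, "output": 154, "unit": "TOKENS"}) is retained
    return redacted

gen = {"model": "gpt-4", "input": "secret prompt", "output": "secret answer",
       "usage": {"input": 812, "output": 154, "unit": "TOKENS"}}
safe = redact_generation(gen)
```

The redacted record would then be what gets sent to Langfuse, while the token counts still drive the price calculation.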
Hey all! I'm using Langfuse with LiteLLM (with `success_callback` set to `langfuse`) and I am getting the following error: `langfuse.request.APIError: {'message': 'Invalid public key'} (401): None`
Does anyone know why this is happening?...
Any pointers as to what could cause this odd behaviour? (It happens whether I point to the hosted solution or a self-hosted one.)
Good question; I haven't used the Flowise integration in prod myself, though. What you could do is fetch the session via the GET API, which returns the trace IDs that belong to the session.
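Once you have the session response body, pulling out the trace IDs is straightforward. A sketch that assumes the body nests traces under a `traces` key, each with an `id` (check the API reference for the actual response shape):

```python
def extract_trace_ids(session: dict) -> list:
    """Return the trace IDs listed in a session response body.

    Assumes a {"traces": [{"id": ...}, ...]} shape; the real API
    response may differ.
    """
    return [trace["id"] for trace in session.get("traces", [])]

# Example body as it might come back from the sessions GET endpoint:
body = {"id": "sess-1", "traces": [{"id": "t-a"}, {"id": "t-b"}]}
ids = extract_trace_ids(body)
```

The HTTP call itself would be an authenticated GET against the public API using your public/secret key pair.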
LCEL streaming
Hi all, I'm using LangChain LCEL + streaming, i.e. chain.stream...
But now the inputs at on_chain_start are initially empty ({'input': ''}):

```
FIRST: -------> {'input': ''}
FIRST: -------> {'input': ''}
FIRST: -------> {'input': ''}...
```
Hey folks, are there any restrictions on what kind of inputs/outputs a "chain" can have in LangChain when integrating with Langfuse?
I was trying Langfuse, but got an error:

```
Exception in adding task Object of type AnswerChainInput is not JSON serializable
```
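That exception usually means a custom object is handed to the serializer as a trace input/output. One generic workaround is to convert such objects to plain data first, e.g. via a `default` fallback for `json.dumps`. `AnswerChainInput` below is a stand-in dataclass, not the asker's actual class:

```python
import json
from dataclasses import asdict, dataclass, is_dataclass

@dataclass
class AnswerChainInput:  # stand-in for the real (unserializable) class
    question: str
    context: str

def to_serializable(obj):
    """Fallback for json.dumps: dataclasses become dicts, anything else its repr."""
    if is_dataclass(obj):
        return asdict(obj)
    return repr(obj)

payload = {"input": AnswerChainInput("What is Langfuse?", "docs snippet")}
encoded = json.dumps(payload, default=to_serializable)
```

Alternatively, convert the object to a dict yourself before passing it as the chain input, so the tracing layer only ever sees plain JSON types.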
Scores & Evaluation - Langfuse
Hi, I have a query regarding scores related to "similarity to prompt injections" and "refusals" as mentioned in https://langfuse.com/docs/scores under "Kinds of scores". However, none of the attributes from LangChain eval or Ragas address these two. Have any of you implemented scores for these aspects?
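I haven't seen an off-the-shelf metric for these either; a crude starting point for refusals is a heuristic scorer whose result you attach to the trace as a custom score. The marker list and the score semantics below are made up for the example:

```python
# Substring markers that often indicate a refusal (illustrative, not exhaustive).
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry", "as an ai", "i am unable")

def refusal_score(output: str) -> float:
    """Heuristic: 1.0 if the completion looks like a refusal, else 0.0."""
    text = output.lower()
    return 1.0 if any(marker in text for marker in REFUSAL_MARKERS) else 0.0

yes = refusal_score("I'm sorry, but I can't help with that request.")
no = refusal_score("Sure! Here is the summary you asked for.")
```

The value could then be recorded as a score on the trace via the SDK's score call. A similarity-to-prompt-injection score could follow the same shape, with an embedding distance against known injection strings instead of a substring check.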
Different traces
Hi! I am trying to add feedback to the same trace that the LangChain callback handler creates. I followed https://langfuse.com/docs/langchain/python#adding-scores and ran into a problem: it generates two extra empty traces, and it puts the feedback into yet another, fourth one. I will provide all the relevant code in the thread.