Langfuse


Langfuse is the open source LLM Engineering Platform. This is our Community Discord.


support

self-host-support

get-support

feedback

feature-suggestion

self-host-discussion

announcements-releases

Vercel AI SDK

Let’s take this to a thread >>>

Hey

Hey, I am using Langfuse with the LangChain integration. I have a prompt stored in Langfuse. It generates JSON, and I include the expected JSON format as examples in the prompt. ...

has anyone ever encountered

has anyone ever encountered `httpcore.UnsupportedProtocol: Request URL is missing an 'http://' or 'https://' protocol.`? Suddenly got that error; the version of httpcore I've been using is 1.0.4...
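This httpcore error usually means a base URL without a scheme was handed to the HTTP client, e.g. a host env var set to a bare hostname. A minimal sanity check you can run on whatever value you pass as the Langfuse host (function name is illustrative, not part of any SDK):

```python
from urllib.parse import urlparse

def check_base_url(url: str) -> str:
    """Fail fast with a clear message instead of httpcore.UnsupportedProtocol."""
    scheme = urlparse(url).scheme
    if scheme not in ("http", "https"):
        raise ValueError(
            f"Base URL {url!r} is missing an 'http://' or 'https://' scheme"
        )
    return url

# e.g. validate before constructing the client
check_base_url("https://cloud.langfuse.com")   # passes
check_base_url("http://localhost:3000")        # passes
```

A value like `cloud.langfuse.com` (no `https://`) would raise here, which is typically the root cause of this traceback.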

Hi all, I'm just looking into langfuse

Hi all, I'm just looking into Langfuse for the first time, and hopefully this somewhat specific question has an easy fix. I am using Node and the OpenAI SDK. I use beta assistant features, so my code looks more like this: `await openai.beta.threads.messages.create(thread.id, { role: "assistant", content: new message thread`...

Is anyone else getting this error? (This

Is anyone else getting this error? (This is when I am trying to use a dataset to evaluate and score, pretty much as in the dataset notebook example.) I added a comment to an existing bug, but I'm not sure if it's the same issue, so I will also share here: ``` .../eval.py Traceback (most recent call last): File ".../eval.py", line 191, in <module>...

I have some flows on Flowise that I am

I have some flows on Flowise that I am monitoring in Langfuse (and LangSmith), and I'm trying to track the token costs. As far as I can tell, whether I'm working with OpenAI or Anthropic, the token usage I'm seeing in Langfuse is at most half of what I'm being charged in the OpenAI or Anthropic dashboards. Am I going insane, or is there likely something obvious I'm missing? I've tested this so far with assistants (these seem very problematic for costing anyway), agent flows, and normal conversation chains. ...
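One way to narrow this down is to compute the expected cost from the provider's own reported token usage and compare it to what the tracing dashboard shows. A rough sketch (the per-token prices below are placeholders, not current rates; substitute your model's actual pricing):

```python
# USD per 1M tokens -- illustrative values only, NOT current provider rates.
PRICES = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def expected_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """What this call *should* cost given provider-reported token usage."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

print(expected_cost("gpt-4o", 1000, 500))  # 0.0075
```

If the traced total comes out at roughly half of this, a common culprit is that some LLM calls in the flow (tool calls, retries, hidden assistant steps) are not being traced at all, so their tokens never reach the dashboard.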

Demo of JS SDK Issue

@hassiebp Hey, just wanted to record a Loom video to show that the JS SDK doesn't seem to be capturing traces, while the Python one is. https://www.loom.com/share/984a0aa15e1647359e2e8e80ca39e8f3?sid=fff3abf7-9f98-4837-9003-a099a4897671

this is interesting and not something we

This is interesting and not something we usually observe. Do you use a custom trace ID?

Decorator-based Python Integration - Lan...

Hi Denis! This is a current limitation of observe() decorator + ThreadExecutor. We are waiting for an upstream fix, find more details on why this is an issue and a workaround here: https://langfuse.com/docs/sdk/python/decorators#using-threadpoolexecutors-or-processpoolexecutors
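The root cause here is generic Python behavior, not Langfuse-specific: worker threads in a plain `ThreadPoolExecutor` do not inherit the caller's `contextvars`, which is how the active trace is tracked. A minimal stdlib-only demo of the problem and the generic `copy_context` workaround (see the linked docs for the Langfuse-specific fix):

```python
import contextvars
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the context variable a tracing SDK uses to track the active trace.
current_trace = contextvars.ContextVar("current_trace", default=None)

def worker():
    # Runs in a pool thread, which has its own (empty) context by default.
    return current_trace.get()

current_trace.set("trace-123")

with ThreadPoolExecutor() as pool:
    lost = pool.submit(worker).result()           # None: caller's context not inherited
    ctx = contextvars.copy_context()              # snapshot the caller's context...
    kept = pool.submit(ctx.run, worker).result()  # ...and run the worker inside it

print(lost, kept)  # None trace-123
```

This is why spans created inside pool workers silently detach from the parent trace unless the trace context is explicitly handed to the worker.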

Yes I have tried that.

Yes I have tried that.

Did you have a look at the generations

Did you have a look at the generations within your traces? Can you share an example trace that lacks this information?

Google Cloud Vertex AI | 🦜🔗 LangChain

Yes, they provide token counts, but it looks like only when calling via the LangChain generate() method. See: https://python.langchain.com/docs/integrations/llms/google_vertex_ai_palm/ GenerationChunk can return the following:
'usage_metadata': {'prompt_token_count': 15, 'candidates_token_count': 647, 'total_token_count': 662}})
...
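If those counts do come back, a small adapter can translate Vertex AI's `usage_metadata` keys into the generic input/output/total shape that tracing tools typically expect when usage is reported manually (the function and the target key names here are an illustration, not an official mapping):

```python
def to_usage(usage_metadata: dict) -> dict:
    """Map Vertex AI's usage_metadata onto a generic input/output/total dict."""
    return {
        "input": usage_metadata["prompt_token_count"],
        "output": usage_metadata["candidates_token_count"],
        "total": usage_metadata["total_token_count"],
    }

print(to_usage({
    "prompt_token_count": 15,
    "candidates_token_count": 647,
    "total_token_count": 662,
}))  # {'input': 15, 'output': 647, 'total': 662}
```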

hai I'm building langchain api and try

Hi, I'm building a LangChain API and trying to use Langfuse to trace my chain, but why does it only work with the invoke method and not the batch method? Here is how I define the callback to integrate with Langfuse: `langfuse_config = RunnableConfig(callbacks=[langfuse_handler])` `async def batch_api(api_chain, path: str, request: Request) -> Response:`...
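A common cause: the config carrying the callbacks is passed to `invoke` but not forwarded on the `batch` call, so no callbacks fire for batched inputs. A stdlib-only sketch with a stand-in chain (not the real LangChain classes) showing the call pattern:

```python
# Minimal stand-in for a LangChain runnable, to show that the callbacks config
# must be passed to *every* entry point -- batch() as well as invoke().
class FakeChain:
    def invoke(self, x, config=None):
        for cb in (config or {}).get("callbacks", []):
            cb.append(("invoke", x))  # the real handler would record a trace here
        return x

    def batch(self, xs, config=None):
        # If config is not forwarded here, batched calls go untraced.
        return [self.invoke(x, config=config) for x in xs]

handler = []  # stands in for the Langfuse callback handler
chain = FakeChain()
chain.invoke("a", config={"callbacks": [handler]})
chain.batch(["b", "c"], config={"callbacks": [handler]})
print(handler)  # all three calls were traced
```

With real LangChain, the equivalent check is making sure the same `config=RunnableConfig(callbacks=[langfuse_handler])` is passed into `chain.batch(...)` (or `abatch`), not just `chain.invoke(...)`.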

I have had langfuse deployed and self-

I have had langfuse deployed and self-hosted for about 2 months now, and it has worked smoothly. Today, I started seeing a strange timeout error -- I wouldn't expect it to be through our deployment infra, but I wanted to drop here to see if you had any recommendations (see thread)

Trying to debug an issue on my tracing

Trying to debug an issue on my self-hosted tracing dashboard. The Total Cost on the Trace List page shows 0 while showing up correctly on Trace Detail. An example trace fetched from the list page returns: ``` ... calculatedInputCost: "0.0661" calculatedOutputCost: "0.01014"...

FYI, I am getting wonderful tracing for

FYI, I am getting wonderful tracing for the whole program by using this code... but I need to break out each input into its own trace: ``` @observe() def parent_function():...
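With a decorator-based setup, each top-level call to the observed function starts its own trace, so the usual pattern is to call the observed function once per input rather than wrapping the whole loop. A sketch using a stub decorator (a stand-in for the real `@observe()`, just to show the call shape):

```python
# Stub standing in for langfuse's observe decorator -- it only records that a
# "trace" was opened. The real decorator opens a Langfuse trace per top-level call.
traces = []

def observe(fn):
    def wrapper(*args, **kwargs):
        traces.append(fn.__name__)  # stand-in for "new trace started"
        return fn(*args, **kwargs)
    return wrapper

@observe
def handle_one(item):
    # Do the per-input work here; nested observed calls become child spans.
    return item.upper()

# One trace per input, instead of one trace around the whole loop:
results = [handle_one(x) for x in ["a", "b"]]
print(results, len(traces))  # ['A', 'B'] 2
```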

I'm also seeing this error:

I'm also seeing this error: ``` Traceback (most recent call last): File "/usr/local/lib/python3.10/site-packages/langfuse/callback/langchain.py", line 627, in _parse_model_and_log_errors model_name = _extract_model_name(serialized, **kwargs)...

When I attempt to get a prompt from

FIXED: Force-reinstalling Langfuse's Python package seemed to do the trick. When I attempt to get a prompt from Langfuse, I get the following error: ```Error while fetching prompt 'PROMPT_NAME-latest': 'Prompt_Chat' object has no attribute 'type'...

Hi here. I'm trying to use langfuse with

Hi there. I'm trying to use Langfuse with LlamaIndex but keep hitting this error as soon as I use parallelism (whereas all is good if I run my query pipeline one at a time): ``` server | An error occurred in _handle_span_events: not enough values to unpack (expected 2, got 1) server | Traceback (most recent call last): server | File "/usr/local/lib/python3.11/site-packages/langfuse/utils/error_logging.py", line 14, in wrapper...

which version are you using? run_id not

Which version are you using? The "run_id not found" issue was fixed last week.