David Alonso (5w ago)

Does Langfuse require being run in a Node environment?

I'm trying to run it inside a Convex runtime (as described here) and running into some issues. This runtime is similar to Vercel's edge runtime. Vercel's AI SDK runs in the edge runtime, so Langfuse dependencies like OpenTelemetry are the only reason I'm currently using Node.
[Link: Runtimes | Convex Developer Hub - "Convex functions can run in two runtimes"]
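(For context, the Node-dependent setup in question looks roughly like this. A minimal sketch following the Langfuse Vercel AI SDK docs: the Node-only piece is @opentelemetry/sdk-node; the model, prompt, and env-var handling are illustrative.)

```ts
// Sketch of the Node.js-based setup (the part that requires a Node runtime).
import { NodeSDK } from "@opentelemetry/sdk-node";
import { LangfuseExporter } from "langfuse-vercel";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

// LangfuseExporter reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY,
// and LANGFUSE_BASEURL from the environment.
const sdk = new NodeSDK({ traceExporter: new LangfuseExporter() });
sdk.start();

async function main() {
  const { text } = await generateText({
    model: openai("gpt-4o"), // illustrative model
    prompt: "Hello",
    // Emits OpenTelemetry spans that the exporter forwards to Langfuse.
    experimental_telemetry: { isEnabled: true },
  });
  console.log(text);
  await sdk.shutdown(); // flush remaining spans before exit
}

main();
```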
David Alonso (5w ago)
[Link: GitHub - evanderkoogh/otel-cf-workers: "An OpenTelemetry compatible library for instrumenting and exporting traces for Cloudflare Workers"]
Marc (5w ago)
Which issue do you run into? Can you open an issue on GitHub? We have tested the Vercel AI SDK integration on Vercel (which defaults to edge functions when using the App Router in Next.js, afaik), which should be similar to the Cloudflare runtime. Thanks for reporting this, by the way!
David Alonso (5w ago)
The issue I'm having is that I want to run the Vercel AI SDK + Langfuse inside a Convex action (see the link I shared in the original post). When I visited these docs: https://langfuse.com/docs/integrations/vercel-ai-sdk I leaned towards the Node.js guide, but that comes with some performance losses on the Convex side. Ideally I'd use the Next.js guide, but I'm not sure how to get that to work with Convex; specifically, the next.config.js part feels like it wouldn't work well.
[Link: Vercel AI SDK - Observability & Analytics - Langfuse: "Open source observability for Vercel AI SDK using its native OpenTelemetry support"]
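(For reference, the Next.js guide wires things up through Next's instrumentation hook rather than a Node SDK. A sketch per the Langfuse docs; the next.config.js flag is the piece with no obvious Convex equivalent, and the service name is arbitrary.)

```ts
// next.config.js (the part that doesn't translate to Convex):
//   module.exports = { experimental: { instrumentationHook: true } };

// instrumentation.ts, which Next.js invokes on startup:
import { registerOTel } from "@vercel/otel";
import { LangfuseExporter } from "langfuse-vercel";

export function register() {
  registerOTel({
    serviceName: "my-app", // arbitrary name
    traceExporter: new LangfuseExporter(),
  });
}
```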
David Alonso (4w ago)
Any thoughts @Marc? FYI, as much as we like Langfuse, this is the main reason for us to consider something like Helicone, which we know easily works in Convex's edge runtime.
Marc (4w ago)
Is there general guidance on how OTel works with Convex? Nothing about this should be Langfuse-specific. You can also use a proxy-based implementation similar to Helicone if you only want to track LLM calls. I'd recommend using LiteLLM, which is natively integrated with Langfuse as well and allows you to use hundreds of LLMs via the OpenAI API schema.
Marc (4w ago)
[Link: Observability for LiteLLM - Langfuse: "Open source observability for LiteLLM via the native integration. Automatically capture detailed traces and metrics for every request"]
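(A sketch of the proxy approach from the calling side, assuming a LiteLLM proxy is deployed somewhere reachable and has its Langfuse callback configured in the proxy config, i.e. success_callback: ["langfuse"]. Since the proxy speaks the OpenAI API schema, the standard openai client works and the edge runtime needs no OTel dependency at all. The env var names here are illustrative.)

```ts
import OpenAI from "openai";

// Point the standard OpenAI client at the LiteLLM proxy; the proxy
// logs every request to Langfuse server-side.
const client = new OpenAI({
  baseURL: process.env.LITELLM_PROXY_URL, // illustrative, e.g. "http://localhost:4000"
  apiKey: process.env.LITELLM_API_KEY ?? "anything", // proxy key, if one is configured
});

async function ask(prompt: string): Promise<string | null> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o", // any model the proxy is configured to route
    messages: [{ role: "user", content: prompt }],
  });
  return completion.choices[0].message.content;
}
```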
David Alonso (4w ago)
I'll try to dig a bit, though my question is more about the fact that the guide that uses @vercel/otel is in the Next.js section, while in my case the LLM call doesn't run on a Next.js server, so I'm not sure how to get around that.
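(One possible workaround, untested in Convex and dependent on the OTel SDK version: skip both guides and register a tracer provider by hand using the platform-neutral @opentelemetry/sdk-trace-base, which is roughly what @vercel/otel does internally.)

```ts
import { trace } from "@opentelemetry/api";
import {
  BasicTracerProvider,
  SimpleSpanProcessor,
} from "@opentelemetry/sdk-trace-base";
import { LangfuseExporter } from "langfuse-vercel";

// Platform-neutral provider: no Node-specific auto-instrumentation.
// Older SDK versions use provider.addSpanProcessor(...) instead of
// the `spanProcessors` constructor option.
const provider = new BasicTracerProvider({
  spanProcessors: [new SimpleSpanProcessor(new LangfuseExporter())],
});

// Register globally so the AI SDK's experimental_telemetry spans
// reach the Langfuse exporter.
trace.setGlobalTracerProvider(provider);
```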
Marc (4w ago)
Makes sense. Unfortunately I have no personal experience with Convex, but I'd assume there is a standard way of using OTel with it.