Langfuse

Langfuse is the open source LLM Engineering Platform. This is our Community Discord.

hi Marc, where can I get more info about

Hi Marc, where can I get more info about the "license key" options for self-hosted? My understanding was that the playground feature would be GA in self-hosted in v3 -- did that change?

Hey @Marc, I seem to be getting `io.

Hey @Marc, I seem to be getting `io.chat` for some of my traces, with no session, tokens, or cost. It seems to only be happening in my production instance, but I'm not sure why. Is it possible these "faulty" traces get created when I have an async flush timeout error? I ask because I have noticed that error occasionally in my self-hosted logs.

Langfuse Self Hosting Enterprise vs. FOSS

Hi. I am looking to understand the difference between self-hosting (FOSS) and the cloud offering (Team). The docs say that SOC2 is not available in self-hosting, but why would one need SOC2 if the deployment is private on AWS? Also, is it possible to use custom auth providers for 2FA when deploying via FOSS?...

Langfuse organization/project API

I am using self-hosted Langfuse deployed on Kubernetes. I want to automate the deployment of Langfuse, which includes creating the user, creating the project, and creating the API keys. ...

Is core for enterprise free?

I have a legal question about Langfuse being open source. I played a bit with it locally on my machine to check it out, and it's really cool! What about usage for enterprise? If I want to use it for a research project at work, do I need an enterprise license? I don't need the extra features; the core is good enough.

Evals in Self-hosted

Hello, will Langfuse 3.0 integrate the evaluation tab in the self-hosted version?

znigeln - Hi! I installed a test version of lan...

Hi! I installed a test version of Langfuse in our environment, which was super simple, and everyone loved it. I am now looking into making a more productionized deployment, but since V3 seems to be right around the corner, I was thinking of waiting until then to avoid a migration from V2. Is there any pre-release V3 version that is available for testing now, or do we need to wait for the full release to get our hands on it?...

I am hosting on an ec2, and have an open

I am hosting on an EC2 instance and have an open port within my VPC to access Langfuse. It looks like my connection is not secure because we aren't using HTTPS. Is this something controlled by Langfuse? Can I make Langfuse's web server use HTTPS?
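For context, the Langfuse web container serves plain HTTP (port 3000), so TLS is normally terminated in front of it by a load balancer or reverse proxy rather than by Langfuse itself. Below is a minimal sketch of one option, assuming a docker-compose setup where a Caddy container terminates TLS; the service names and domain are placeholders, not part of the shipped compose file.

```yaml
# Sketch: TLS termination in front of Langfuse via Caddy (hypothetical names/domain)
services:
  caddy:
    image: caddy:2
    # Caddy obtains a certificate automatically and proxies HTTPS traffic to the Langfuse container
    command: caddy reverse-proxy --from langfuse.example.com --to langfuse-web:3000
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - langfuse-web

  langfuse-web:
    image: langfuse/langfuse:2
    # ...existing Langfuse environment/config goes here; port 3000 stays internal to the compose network
    expose:
      - "3000"
```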

anybody ever encounter: "migrate found

anybody ever encounter: "migrate found failed migrations in the target database, new migrations will not be applied. Read more about how to resolve migration issues in a production database: https://pris.ly/d/migrate-resolve" on cloud run/cloud sql?

Is it possible to enable the "beta"

Is it possible to enable the "beta" features in the self-hosted version?

I tried to use langfuse-k8s instructions

I tried to follow the langfuse-k8s instructions to install Langfuse on a local k8s cluster. I enabled port-forwarding (which wasn't in the instructions), but I still can't seem to connect to the service. Is there some instruction I missed?

Kubernetes Readiness probe

@Max can you confirm this is the correct config for readinessProbe?
```yaml
httpGet:
  path: /api/public/health
  port: 3000
```
...
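For reference, a minimal sketch of how such a probe could sit in a Deployment container spec, assuming the web container listens on port 3000 and exposes `/api/public/health`; the timing values are illustrative placeholders, not recommendations from the thread.

```yaml
# Sketch: readinessProbe in a Deployment container spec (timing values are placeholders)
containers:
  - name: langfuse-web
    image: langfuse/langfuse:2
    ports:
      - containerPort: 3000
    readinessProbe:
      httpGet:
        path: /api/public/health
        port: 3000
      initialDelaySeconds: 10
      periodSeconds: 10
      failureThreshold: 3
```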

Langfuse v3 timeline

https://langfuse.com/docs/roadmap#%EF%B8%8F-upcoming-breaking-changes Is there any sort of timeframe for when those breaking changes (moving to a multi-docker setup) are going to land? We have Langfuse in prod at the moment but are getting pretty bad latency on the dashboard/trace/session screens when we are under high load from users. Presuming this setup is for that use case? We've been hesitant to upgrade or start customising with that on the horizon. Sorry, I know timelines are hard, and we are genuinely really happy with Langfuse so far; it's just this question mark we are wondering about...

Hi and good morning.

Hi and good morning. I have a strange issue. I want to add a healthcheck to my docker-compose setup. The docs say that there is an /api/public/health endpoint that I can request. The thing is, that works when I check the endpoint from the outside (i.e. the host on which the docker container runs). But as soon as I try to request that URI from within the container (where docker compose runs the healthcheck), I get an error. I connect to /bin/sh in the container; the default user is nextjs: ```...
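A common cause is that the slim Langfuse image does not ship `curl`, so the in-container check has to use a tool that is actually present. A minimal sketch of a compose healthcheck, assuming `wget` is available in the image, the app listens on port 3000, and the service is named langfuse-server:

```yaml
# Sketch: docker-compose healthcheck hitting the public health endpoint from inside the container
services:
  langfuse-server:
    healthcheck:
      test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:3000/api/public/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 30s
```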

I would love if there would be a way to

I would love it if there were a way to deploy Langfuse and have it ready without going into the UI to set up the user/login and project and generate API keys. I mean something like: if I provide LANGFUSE_DEFAULT_PROJECT_ID, it will create the project if it is missing, and I should also be able to provide the set of keys for that default project.
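A sketch of what that could look like in compose, using the LANGFUSE_DEFAULT_PROJECT_ID name from the message; the other variable names are hypothetical placeholders for the proposed behaviour, not existing Langfuse settings.

```yaml
# Sketch of the proposed headless provisioning; variables other than
# LANGFUSE_DEFAULT_PROJECT_ID are hypothetical placeholders, not real settings
services:
  langfuse-server:
    environment:
      - LANGFUSE_DEFAULT_PROJECT_ID=my-default-project
      - LANGFUSE_DEFAULT_PROJECT_PUBLIC_KEY=pk-lf-...   # hypothetical
      - LANGFUSE_DEFAULT_PROJECT_SECRET_KEY=sk-lf-...   # hypothetical
```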

I am only running one container.

I am only running one container.

AWS RDS IAM Connection

Hosting in AWS requires special compliance for credentials. I need to use IAM authentication to connect to RDS (and other services) -- is this supported out of the box? Or should I fork the source and implement it directly? https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.Connecting.html

Docker compose issues

Kinda? I'm not sure if that FATAL error is supposed to be there. The only thing I did differently was changing the directive `port: 3030:3000` in the docker compose file to expose the service on port 3030, since 3000 is already in use on my server. However, it seems to work, so maybe that FATAL is not that... fatal 😄 I see (in the documentation) that the docker compose setup is not recommended for a number of reasons; I'd like to understand why I can't start Langfuse in the traditional docker way :/...
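For reference, a minimal sketch of the port remapping being described, assuming the compose service is named langfuse-server (the name in the shipped docker-compose file may differ):

```yaml
# Sketch: expose the app on host port 3030 while it keeps listening on 3000 inside the container
services:
  langfuse-server:
    ports:
      - "3030:3000"   # host:container
```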

anyone running on gcp?

anyone running on gcp?