Hi Denis, currently redaction does not work well if you aim for the automated usage/cost analysis in Langfuse. Do you have access to the usage unit (e.g. tokens)? Then it'd be easy to still retain price calculations in Langfuse while redacting some of the input/output. Enterprise teams in particular do this frequently for production environments.
What do you mean by "access to the usage unit"? If I self-hosted Langfuse, we could probably edit entries in the DB manually? Is that what you mean?
Model Usage & Cost - Langfuse: Langfuse tracks usage and cost of LLM generations for various models (incl. OpenAI, Anthropic, Google, and more). Add your own model definitions to track any model or custom pricing.
"access to the usage unit" = does your application receive some form of token-usage information (in case you use an LLM API)? If yes, you could add this to the generation you ingest into Langfuse (see docs). Once token usage is ingested, you can redact the input/output without affecting cost tracking.
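To make that concrete, here's a minimal sketch of the idea: mask the text but pass the provider's exact token counts through unchanged. The redaction patterns and the payload shape below are illustrative assumptions, not the exact Langfuse SDK format — check the model usage docs for the real field names.

```python
import re

def redact(text: str) -> str:
    """Mask emails and long digit runs before the text leaves the app.
    These two patterns are just examples; real redaction rules depend
    on what your data actually contains."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\d{4,}", "[NUMBER]", text)
    return text

def build_generation_payload(response_text: str, usage: dict) -> dict:
    """Hypothetical payload for a Langfuse generation: redacted output,
    but the token counts from the OpenAI response kept intact so
    Langfuse can still compute cost."""
    return {
        "output": redact(response_text),
        "usage": {  # pass the provider's token counts through unchanged
            "input": usage["prompt_tokens"],
            "output": usage["completion_tokens"],
        },
    }
```

The key point is that the `usage` numbers never get redacted, only the free-text fields do.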
I see! Well, I actually use the
OpenAI Integration
with Python as a drop-in replacement for the openai package. We stream the OpenAI calls, so I guess using a generation without some manual storing of intermediary results would be hard.
Now, I'd actually like to keep using the OpenAI integration and post-process some of the generations. That looks like the least intrusive and easiest option.
wdyt?
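On the streaming concern: one pattern that keeps the intermediary results without changing the streaming behavior is to tee the stream into a buffer, so the full completion is available for redaction or other post-processing once the stream is exhausted. This is a generic sketch, not Langfuse-specific; the real OpenAI SDK yields chunk objects (with the text under `choices[0].delta`), whereas plain strings stand in for them here.

```python
def tee_stream(chunks, sink: list):
    """Yield each streamed delta to the caller unchanged, while also
    buffering it in `sink` so the complete text can be post-processed
    after the stream finishes."""
    for delta in chunks:
        sink.append(delta)
        yield delta

# Usage: iterate as you normally would to stream to the user...
buffer = []
for piece in tee_stream(["Hel", "lo ", "world"], buffer):
    pass  # forward `piece` to the client as usual

# ...then the full text is available for redaction before it is
# attached to the Langfuse generation.
full_text = "".join(buffer)
```

Buffering in the wrapper means you don't have to touch the code that consumes the stream.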