Shezan Baig · 8mo ago

Missing token counts

Hey @Marc - I have integrated Langfuse with my LLM run callback. I can see the traces getting logged on the dashboard, but no token data. Is there any extra step I need to take to get token details?
4 Replies
Marc · 8mo ago
Happy to help! This can be due to a number of reasons; can you share more details about your setup?
Clemo · 8mo ago
@Shezan Baig One reason might be that, as of now, we only automatically calculate tokens for OpenAI and Claude models. It would be great to know how you're set up. If it's not one of these models, do you have the token counts available, and could you pass them to us?
Shezan Baig · 8mo ago
Hey @Marc / @Clemo - thanks for the reply. Our setup: we have trained open-source models like Mistral and Llama 2 and host them with TGI (Hugging Face's Text Generation Inference) in a Docker container. I am hitting this TGI endpoint directly, so I need the Langfuse dashboard to cover these models. Also @Clemo - sorry, I don't have the token counts right now, but if you can tell me exactly which token details you need, I can make them available.
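For context, a minimal sketch of how token counts could be pulled from a TGI deployment like the one described above: TGI's `/generate` endpoint returns `details.generated_tokens` when `details` is enabled, and the prompt side can be counted with the model's own tokenizer. The endpoint URL, model repo, and prompt below are placeholders, not part of the original setup.

```python
import requests
from transformers import AutoTokenizer

TGI_URL = "http://localhost:8080"                    # placeholder TGI endpoint
MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.1"      # placeholder repo used only for the tokenizer

prompt = "Explain what a tokenizer does in one sentence."

# Ask TGI for generation details so the response includes the generated token count.
resp = requests.post(
    f"{TGI_URL}/generate",
    json={"inputs": prompt, "parameters": {"max_new_tokens": 64, "details": True}},
    timeout=60,
)
resp.raise_for_status()
body = resp.json()

completion_text = body["generated_text"]
completion_tokens = body["details"]["generated_tokens"]

# TGI does not report the prompt token count here, so count it with the model's tokenizer.
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
prompt_tokens = len(tokenizer(prompt)["input_ids"])

print(prompt_tokens, completion_tokens)
```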
Marc · 8mo ago
Did you check out the docs here? https://langfuse.com/docs/model-usage-and-cost You can ingest token counts if they are available, or request that the respective tokenizer be added to the Langfuse package.
Model Usage & Cost - Langfuse
Langfuse tracks usage and cost of LLM generations for various models (incl OpenAI, Anthropic, Google, and more). Add your own model definitions to track any model or custom pricing.
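Following the docs Marc linked, here is a minimal sketch of reporting those counts to Langfuse with the Python SDK. The trace/generation names and token values are placeholders, and the exact shape of the `usage` object can differ between SDK versions, so check the linked page for your version.

```python
from langfuse import Langfuse

# Reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY and LANGFUSE_HOST from the environment.
langfuse = Langfuse()

# Token counts gathered from the TGI call above (placeholder values here).
prompt_tokens, completion_tokens = 42, 64

trace = langfuse.trace(name="tgi-request")
trace.generation(
    name="mistral-7b-tgi",
    model="mistral-7b-instruct",  # custom model name; add a model definition in Langfuse to also get cost
    input="Explain what a tokenizer does in one sentence.",
    output="A tokenizer splits text into model-readable tokens.",
    usage={
        "input": prompt_tokens,
        "output": completion_tokens,
        "total": prompt_tokens + completion_tokens,
        "unit": "TOKENS",
    },
)

# Events are sent asynchronously; flush before the script exits.
langfuse.flush()
```

With token counts ingested this way, usage shows up on the dashboard for the TGI-hosted models; per the linked docs, cost tracking additionally requires a custom model definition.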