Marc (4mo ago)

Vercel AI SDK

Let’s take this to a thread >>>
Marc (4mo ago)
What do traces for your current integration look like? Automated tokens and costs require a model name that matches one of the known model names.
z.o.rro (4mo ago)
So it'll automatically calculate the tokens if the model name matches? Or do I have to pass the usage parameter when creating the generation? I use gpt-4o.
Marc (4mo ago)
If the model matches, tokens and USD costs are inferred based on input and output.
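Conceptually, this ingestion-time inference is a model-name lookup followed by a price multiplication. A minimal sketch, where the `prices` table and `inferCost` helper are hypothetical illustrations (real per-token prices live in Langfuse's model definitions, not in application code):

```typescript
// Hypothetical per-1M-token USD prices; real values come from
// Langfuse's model definitions, not from a table like this.
const prices: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 2.5, output: 10 },
};

// Look up the model name; if it matches a known model, multiply
// token counts by the per-token prices. Unknown models yield no cost.
function inferCost(
  model: string,
  inputTokens: number,
  outputTokens: number
): number | null {
  const p = prices[model];
  if (!p) return null; // no matching model definition: cost stays empty
  return (inputTokens * p.input + outputTokens * p.output) / 1_000_000;
}
```

This is why a generation whose `model` string doesn't match any known definition shows tokens but no cost.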
Marc (4mo ago)
Model Usage & Cost - Langfuse
Langfuse tracks usage and cost of LLM generations for various models (incl OpenAI, Anthropic, Google, and more). You can always add your own models.
z.o.rro (4mo ago)
And does the model need to exist before the trace is sent to Langfuse? What if it is added later on, are the costs automatically updated for historical records?
const generation = trace.generation({
  name: "msg-response",
  model: "gpt-4o",
  input: answer_prompt,
  modelParameters: {
    temperature: 0
  }
});

generation.end({
  output: responseMsg,
});
Here is what my code for creating a generation looks like. Is there something I missed?
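(For a model name that doesn't match a known definition, usage could also be reported explicitly rather than inferred. A sketch against a stubbed generation object, since the real SDK call needs credentials; the `usage` field names `input`/`output` are an assumption here, not confirmed Langfuse API:)

```typescript
// Assumed shape of an explicit usage payload (field names are a guess).
type Usage = { input: number; output: number };

// Minimal stand-in for a Langfuse generation so the shape is runnable
// here; the real call would be generation.end({ output, usage }).
class FakeGeneration {
  ended?: { output: string; usage?: Usage };
  end(params: { output: string; usage?: Usage }) {
    this.ended = params;
  }
}

const generation = new FakeGeneration();
generation.end({
  output: "responseMsg",
  // Explicit token counts, e.g. copied from the provider's response,
  // so cost tracking doesn't depend on model-name matching.
  usage: { input: 120, output: 48 },
});
```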
Marc (4mo ago)
Needs to exist beforehand, as this is handled at ingestion time (necessary for scalability of large instances). Looks good. Can you share a link to a trace here?
z.o.rro (4mo ago)
Can't share the link, but can share a screenshot of what it looks like:
[screenshot of the trace]
z.o.rro (4mo ago)
Interesting, it's now correctly keeping track of the tokens and costs. Previously I was using an older version of Langfuse and had manually added the gpt-4o definition based on the migration file found on GitHub. It appears there was a problem with my manual addition, which led to the costs not being calculated correctly.

PS: Will create a PR for a docs update in some time. Cheers!

Hmm, so if the costs of previously ingested generations were calculated incorrectly because of a bad model definition, there seems to be no way of recalculating them. This could potentially be added once the background worker lands in v3. Then again, it's a rare occurrence to begin with.
Marc (4mo ago)
We will actually change this behavior in v3: it will make Langfuse much faster, but recalculating costs more expensive. We believe this is a good trade-off, as analytics speed is more important in most cases. Does it work now on newly ingested traces? The trace screenshot looks good.