Null input/output values on traces
Hello,
I have a problem with null traces (input and output are null). I tried trace.end, but it says there is no end method on trace. I changed it to trace.update, but now the trace names no longer show on the dashboard.
Can you share how you implemented this? Are you on the latest SDK version?
@Marc
def run_my_custom_llm_app(input, system_prompt):
    print(input)
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": input["question"]}
    ]
    trace = langfuse.trace(input=input)
    generationStartTime = datetime.now()
    openai_completion = openai.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=messages
    ).choices[0].message.content
    langfuse_generation = trace.generation(
        name="bm4",
        input=messages,
        output=openai_completion,
        model="gpt-3.5-turbo",
        start_time=generationStartTime,
        end_time=datetime.now()
    )
    print(trace)
    trace.update(output=openai_completion)
    return openai_completion, trace
I've used trace.update since trace.end did not work (langfuse-2.43.1). The nulls on input and output are fixed, but the trace name disappeared. Is there any documentation on this method?
Solution
Traces do not have an end method: https://langfuse.com/docs/sdk/python/low-level-sdk
trace = langfuse.trace(input=input, name=name)
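Putting the pieces together, the fix for the disappearing name is to pass it when the trace is created and to set the output via trace.update. A minimal sketch against the low-level SDK (the "bm4" name, model, and message shape are just the ones from the snippet above; running it requires Langfuse and OpenAI credentials in the environment):

```python
from datetime import datetime

import openai
from langfuse import Langfuse

langfuse = Langfuse()  # reads LANGFUSE_* environment variables

def run_my_custom_llm_app(input, system_prompt):
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": input["question"]},
    ]
    # pass the name at creation time so it shows up on the dashboard
    trace = langfuse.trace(name="bm4", input=input)

    start = datetime.now()
    completion = openai.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages
    ).choices[0].message.content

    trace.generation(
        name="bm4",
        input=messages,
        output=completion,
        model="gpt-3.5-turbo",
        start_time=start,
        end_time=datetime.now(),
    )
    # traces have no end(); update() sets trace-level fields such as output
    trace.update(output=completion)
    return completion, trace
```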
Alternatively, I'd recommend the decorator. Here is an example of the decorator + OpenAI integration: https://langfuse.com/docs/sdk/python/decorators#openai
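For reference, a hedged sketch of the decorator approach from that link, which avoids manual trace/generation bookkeeping entirely (the function and trace names are illustrative, and langfuse.openai is the drop-in wrapper that logs the generation automatically):

```python
from langfuse.decorators import observe, langfuse_context
from langfuse.openai import openai  # drop-in OpenAI client that auto-logs generations

@observe()  # creates the trace; by default it is named after the function
def run_my_custom_llm_app(input, system_prompt):
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": input["question"]},
    ]
    completion = openai.chat.completions.create(
        model="gpt-3.5-turbo", messages=messages
    ).choices[0].message.content
    # optional: override the trace name and set the trace-level output
    langfuse_context.update_current_trace(name="bm4", output=completion)
    return completion
```

With this approach the input, output, and name are captured without calling trace.update yourself.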
Do I need to use trace.update or something similar at all?
I think we discussed trace.end on the call; maybe I misunderstood.
I'd use trace.update to add the output.
might have been an error on my end, sorry about this
So my current code looks correct?
yes, you are just missing the name
Isn’t that the name? Meaning, do I need to specify it somewhere else as well?
@Marc
yes, you can add it on the trace as well