David Alonso · 2mo ago

Traces take too long to show up

I'm trying to use Langfuse to debug traces during development, and I find myself waiting minutes for traces to appear. Is this expected? When using LangSmith I see traces show up almost instantly, which is the experience I'm looking for.
3 Replies
Marc · 2mo ago
How do you log traces to Langfuse, and what kind of jobs do you run (local batch, continuous in production)? The SDKs send data to the API in batches, but they flush at least every second, so you should see data in Langfuse almost instantly.
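For reference, a minimal sketch of forcing an immediate flush with the plain Langfuse JS SDK (the flushAt option and flushAsync() below are from the langfuse package; credentials are assumed to come from the standard LANGFUSE_* environment variables, and the trace name is made up for illustration):

import { Langfuse } from "langfuse";

// flushAt: 1 sends each event as soon as it is recorded instead of
// batching, which is handy while debugging in development.
const langfuse = new Langfuse({ flushAt: 1 });

const trace = langfuse.trace({ name: "dev-debug" });
trace.event({ name: "started" });

// Explicitly drain anything still buffered before the process exits.
await langfuse.flushAsync();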
David Alonso · 2mo ago
Hmm, okay. I'm doing something like this:
import { NodeSDK } from "@opentelemetry/sdk-node";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
import { LangfuseExporter } from "langfuse-vercel";
import { createOpenAI } from "@ai-sdk/openai";
import { generateText } from "ai";
import { v } from "convex/values";
// authenticatedAction, zodToConvex, zWorkspaceCollectionsInfo,
// createBlockFromPromptAls, and FireviewAITools are app-specific helpers.

const sdk = new NodeSDK({
  traceExporter: new LangfuseExporter({
    debug: process.env.NODE_ENV === "development",
  }),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

const groq = createOpenAI({
  baseURL: "https://api.groq.com/openai/v1",
  apiKey: process.env.GROQ_API_KEY,
});

export const createBlockFromPromptVercel = authenticatedAction({
  args: {
    userPrompt: v.string(),
    clerkOrgId: v.string(), // referenced in the handler below
    blockId: v.id("blocks"),
    workspaceCollectionInfo: zodToConvex(zWorkspaceCollectionsInfo),
  },
  handler: async (ctx, args) => {
    const result = await createBlockFromPromptAls.run(
      {
        ctx: ctx,
        clerkOrgId: args.clerkOrgId,
        blockId: args.blockId,
        workspaceCollectionInfo: args.workspaceCollectionInfo,
      },
      () =>
        generateText({
          model: groq("llama-3.1-70b-versatile"),
          prompt: args.userPrompt,
          maxToolRoundtrips: 5, // allow up to 5 tool roundtrips
          experimental_telemetry: {
            isEnabled: true,
            functionId: "createBlockFromPrompt",
            metadata: {
              environment: process.env.NODE_ENV,
            },
          }, // Langfuse telemetry
          tools: {
            getFields: FireviewAITools.getFields,
            createTableBlockOneShot: FireviewAITools.createTableBlockOneShot,
          },
        })
    );

    // Save messages, etc.

    console.log(result.text);

    await sdk.shutdown(); // Flushes the trace to Langfuse
  },
});
Here's an example of the trace: https://cloud.langfuse.com/project/clzgvmdsq0016dca36phntcd5/traces/30186174433271e650b5b0473481da4e?observation=5da0dd0d2118b149 I'm not sure I understand the structure in the right sidebar; when I click into it, it looks like there's duplication, unless the LLM is actually going through the flow twice.
Marc · 2mo ago
sdk.shutdown() should flush the traces to Langfuse. Is there a big delay after shutdown until you see them in the Langfuse UI?
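As a side note on the per-request shutdown in the snippet above: calling sdk.shutdown() tears down the whole OpenTelemetry SDK, so in a long-running process one sketch of an alternative is to keep a reference to the exporter and flush it instead. This assumes LangfuseExporter implements the optional forceFlush() hook of the OpenTelemetry SpanExporter interface; treat it as an illustration, not the canonical API:

import { NodeSDK } from "@opentelemetry/sdk-node";
import { LangfuseExporter } from "langfuse-vercel";

// Keep a reference to the exporter so buffered spans can be pushed
// out per request without shutting the SDK down.
const exporter = new LangfuseExporter();
const sdk = new NodeSDK({ traceExporter: exporter });
sdk.start();

export async function flushTraces() {
  // Sends buffered spans to Langfuse; subsequent requests keep tracing.
  await exporter.forceFlush();
}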