faileon — 9mo ago

Debug Langchain integration

Hi all, I still haven't managed to solve it. Any idea what could be wrong, please? I can see in the console that the LLM is outputting (see below). I am running v1.24.2. I am putting the LangfuseCallback in every model I create and in every chain, but I'm getting no output.
[llm/end] [1:llm:ChatOpenAI] [910ms] Exiting LLM run with output: {
  "generations": [
    [
      {
        "text": "No",
        "generationInfo": {
          "prompt": 0,
          "completion": 0
        },
        "message": {
          "lc": 1,
          "type": "constructor",
          "id": [
            "langchain_core",
            "messages",
            "AIMessageChunk"
          ],
          "kwargs": {
            "content": "No",
            "additional_kwargs": {}
          }
        }
      }
    ]
  ],
  "llmOutput": {
    "estimatedTokenUsage": {
      "promptTokens": 2122,
      "completionTokens": 1,
      "totalTokens": 2123
    }
  }
}
3 Replies
Marc — 9mo ago
Hi @faileon, happy to help debug this. Do you pass the callback to the invoke? Usually with LCEL that's the only thing that's necessary.
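The propagation Marc is relying on can be sketched with plain objects (this is an illustrative model, not the real LangChain API): an LCEL chain forwards the config given to the top-level invoke down to every nested step, so a handler passed once at invoke is seen by each inner LLM call.

```typescript
// Illustrative sketch only — hypothetical `step`/`chain` objects modeling
// how LCEL forwards the invoke-time config to nested runnables.
type Handler = { events: string[] };
type Config = { callbacks: Handler[] };

const step = {
  invoke(input: string, config: Config): string {
    // Each nested step reports to every handler in the inherited config.
    config.callbacks.forEach((h) => h.events.push(`ran:${input}`));
    return input.toUpperCase();
  },
};

const chain = {
  invoke(input: string, config: Config): string {
    // The top-level config is forwarded unchanged to child steps.
    return step.invoke(step.invoke(input, config), config);
  },
};

const handler: Handler = { events: [] };
chain.invoke("hi", { callbacks: [handler] });
console.log(handler.events); // ["ran:hi", "ran:HI"]
```

Because of this forwarding, attaching the handler again at model construction or on each inner call is redundant when it is already in the invoke config.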
faileon — 9mo ago
Hi, thanks mate, appreciate it. I am passing the handler to all my invokes and chains:
const langfuseHandler = new CallbackHandler({
  secretKey: this.langfuseConfig.secretKey,
  publicKey: this.langfuseConfig.publicKey,
  baseUrl: this.langfuseConfig.baseUrl,
  userId,
});

const llm = new ChatOpenAI({
  maxTokens: -1,
  temperature: 0.25,
  verbose: true,
  frequencyPenalty: 0,
  presencePenalty: 0,
  topP: 1,
  streaming: true,
  openAIApiKey: this.openAiConfig.key,
  modelName,
  callbacks: [langfuseHandler],
});

const performQuestionAnswering = async (input: { question: string; context: string; history: Array<BaseMessage> }) => {
  const formattedMessage = await this.promptTemplate.format({
    question: input.question,
    context: input.context,
  });

  return llm.stream([...input.history, new HumanMessage(formattedMessage)], { callbacks: [langfuseHandler] });
};

const chain = RunnableSequence.from([
  {
    question: (input: { question: string; chatHistory?: string }) => {
      return input.question;
    },
  },
  performQuestionRephrasing,
  performContextRetrieval,
  performQuestionAnswering,
]);

// create iterable stream
const iterableStream = chain.invoke(
  {
    question,
  },
  {
    callbacks: [langfuseHandler],
  }
);
For example, I am putting the handler in every possible call, including LLM creation; I don't know if that is even necessary. My only guess at this point is to try updating Langfuse and/or LangChain, since I noticed there were some improvements regarding token tracking and null-output issues. I am currently running the following package versions:
"langchain": "^0.0.203",
"langfuse-langchain": "^2.0.0-alpha.1",
and my Langfuse server is:
ghcr.io/langfuse/langfuse:1.24.2
Maya Hee — 9mo ago
Hi, I am also facing a similar problem. I followed the LCEL implementation in the documentation by including
{ callbacks: [handler] }
in the config when invoking the chain. However, in the Langfuse traces UI, I am getting null for every output.
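As a side note, the snippet as originally written, {"callbacks" = [handler]}, is not valid syntax in JavaScript/TypeScript (or Python): an object literal's key/value pairs use a colon. A minimal sketch of the correct config shape, with a hypothetical stand-in for the real handler object:

```typescript
// Hypothetical stand-in; the real handler comes from
// `new CallbackHandler({ ... })` in langfuse-langchain.
const handler = { name: "langfuseHandler" };

// Object literals use a colon, not `=`:
const config = { callbacks: [handler] };

console.log(config.callbacks.length); // 1
```

This config object is then passed as the second argument to the chain's invoke, as shown in faileon's code above.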