Marcos Duarte
Langfuse
Created by Marcos Duarte on 12/13/2023 in #get-support
Langchain Output Parsers
this worked like a charm! 👌
32 replies
great! thx again @Marc
thx @Marc ! btw, is it possible to customize the high-level trace name and metadata using the callback handler?
solved! 🙂
yep, that was it...
I'm guessing I have to use LCEL?
32 replies
LLangfuse
Created by Marcos Duarte on 12/13/2023 in #get-support
Langchain Output Parsers
thx Marc! does the callback handler work well with the LLMChain class? That's how I'm creating the chain:
class WForceConversationSummarizationTask:
    def get_chain(self):
        template = """Read a support chat between a customer and a support agent. \
Based on the information given in the chat, answer in a few words (max. 80) what is the main subject of the conversation, \
be concise and precise, explain what was defined and the result of the service without inferring information. At the end, give a list of 5 keywords which address the main topics, separated by commas. \
You must answer in Portuguese, with only a JSON object containing two attributes: text and keywords. \
The support chat data is: {question} \
"""

        prompt = PromptTemplate(template=template, input_variables=["question"])

        return LLMChain(
            llm=AzureChatOpenAI(
                openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
                azure_deployment=os.environ["AZURE_GPT35_DEPLOYMENT_NAME"],
                temperature=0.3,
            ),
            prompt=prompt,
            output_parser=WForceConversationSummarizationOutputParser(),
        )
And that's how I'm invoking it:
task = WForceConversationSummarizationTask()
response = task.get_chain().invoke(input={"question": conversation}, config={"callbacks": [CallbackHandler()]})
The tracing looks like this
Langfuse
Created by Marcos Duarte on 12/5/2023 in #feature-suggestion
Roadmap
That's great! Thx Marc!
4 replies