Ryan Ribeiro
Ryan Ribeiro7mo ago

Hello, I have an application in which I'm building a base class for a task. The application talks to an LLM and can use both LangChain and LlamaIndex, and I'd like fine-grained control over the Langfuse traces, so I need to drop down a level. For example, LangChain chains are a problem: they wrap everything in a RunnableSequence, and invoke triggers the whole sequence, so how would I manage spans when the intermediate steps happen hidden inside it? Is the best approach in this case simply not to use chains?
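One way to get explicit spans is to skip the composed chain and call each step yourself, wrapping each call in its own span. Below is a minimal, self-contained sketch of that pattern: the `Tracer`/`span` helper here is hypothetical stand-in code (not the real Langfuse SDK), and the step outputs are hard-coded placeholders where `prompt.format(...)`, `model.invoke(...)` and `parser.invoke(...)` would go.

```python
# Illustrative sketch only: a hand-rolled span recorder standing in for a
# real tracing client. All names here are hypothetical; the point is the
# pattern of wrapping each chain step explicitly instead of invoking an
# opaque `prompt | model | parser` RunnableSequence.
import time
from contextlib import contextmanager

class Tracer:
    def __init__(self):
        self.spans = []  # recorded span dicts, in completion order

    @contextmanager
    def span(self, name):
        start = time.time()
        record = {"name": name, "output": None}
        try:
            yield record
        finally:
            record["duration"] = time.time() - start
            self.spans.append(record)

tracer = Tracer()

# Each step is called explicitly inside its own span, so nothing is hidden.
with tracer.span("prompt") as s:
    s["output"] = "Q: What is tracing? A:"      # prompt.format(...) would go here

with tracer.span("model") as s:                 # this span would also be a generation
    s["output"] = "Tracing records each step."  # model.invoke(prompt_text)

with tracer.span("parser") as s:
    s["output"] = "Tracing records each step."  # parser.invoke(completion)

print([rec["name"] for rec in tracer.spans])    # → ['prompt', 'model', 'parser']
```

In real code the `Tracer` would be replaced by the tracing client's own trace/span objects; the control flow stays the same.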
2 Replies
Marc
Marc7mo ago
Honestly, I don't fully understand your setup. Happy to try to help though. Have you seen the post about the pre-release version of the LlamaIndex integration? Can you share an example trace of LlamaIndex + LangChain and your thoughts on what should be different?
Ryan Ribeiro
Ryan Ribeiro7mo ago
Sorry for the confusion, I'll try to be clearer. Basically, instead of simply passing a callback to the invoke function to capture the traces, I'd like to create my own captures, with spans wrapping the functions I care about, so that tracing isn't limited to what the framework emits but can be made explicit in my own processes. For example, with a chain like prompt | model | output parser you normally call getChain().invoke(input=....,callback=callback). I'd like to do it this way instead:

span()
prompt = .....
span().end

span()
model = ....
span().end

span()  (this one would also be a generation)
model(prompt, input).completion()
span().end

I thought about extending langfuseBaseCallback with my own callback class, or alternatively not using chain structures at all and making the function calls explicit. If you have any other ideas on how to get this result, I'd appreciate them — or maybe what I'm trying doesn't make sense.
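For the "extend a callback class" route mentioned above: LangChain fires hooks such as on_chain_start / on_llm_start / on_llm_end / on_chain_end on any handler passed via callbacks, and a custom handler can open and close spans inside those hooks. The sketch below is self-contained and hypothetical: the framework side is faked with a tiny driver function so the handler logic runs standalone; in real code you would subclass LangChain's BaseCallbackHandler and create/end tracing spans and generations inside the hooks instead of appending to a list.

```python
# Hypothetical sketch of a span-opening callback handler. The hook names
# mirror LangChain's callback interface, but the class and the fake driver
# below are illustration only, not the real framework.
class SpanCallbackHandler:
    def __init__(self):
        self.events = []  # stands in for opened/closed spans

    def on_chain_start(self, name):
        self.events.append(("span_start", name))        # e.g. open a chain span

    def on_llm_start(self, prompt):
        self.events.append(("generation_start", prompt))  # open a generation

    def on_llm_end(self, completion):
        self.events.append(("generation_end", completion))  # close the generation

    def on_chain_end(self, name):
        self.events.append(("span_end", name))          # close the chain span

def fake_chain_invoke(handler, prompt):
    """Stands in for RunnableSequence.invoke: fires the hook sequence the
    framework would fire around a prompt | model | parser run."""
    handler.on_chain_start("prompt|model|parser")
    handler.on_llm_start(prompt)
    completion = "some completion"
    handler.on_llm_end(completion)
    handler.on_chain_end("prompt|model|parser")
    return completion

handler = SpanCallbackHandler()
fake_chain_invoke(handler, "hello")
print([e[0] for e in handler.events])
# → ['span_start', 'generation_start', 'generation_end', 'span_end']
```

The trade-off versus calling the steps explicitly: the callback route keeps the chain composition but limits you to the events the framework chooses to emit, while explicit calls give full control at the cost of giving up the chain abstraction.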