CanaDoug
CanaDoug4mo ago

Hi all, I'm just looking into langfuse

Hi all, I'm just looking into Langfuse for the first time, and hopefully this somewhat specific question is an easy fix. I'm using Node and the OpenAI SDK. I use the beta assistant features, so my code looks more like this:

await openai.beta.threads.messages.create(thread.id, {
  role: "assistant",
  content: `new message thread`,
});

and if I try to wrap the client with the Langfuse observer, observeOpenAI(new OpenAI()), it bombs on all of these calls: the wrapper expects an object as the first argument, not the string id that these beta methods take. I couldn't really find any issues on GitHub around this, so I'm hoping I'm just being a noob here?
9 Replies
CanaDoug
CanaDoug4mo ago
I poked into the source, and it seems a lot of info is pulled from parsing the first argument on calls, and it expects that object to be an OpenAI options object. I can prevent the parsing error from happening, but it seems I would need to rewrite the whole thing to get all that metadata from somewhere else for these beta methods. Hopefully this already exists somewhere? Or maybe it's in the works?
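One way to sidestep the parsing error, assuming the standard Langfuse JS wrapper, is to keep the raw client for the beta methods and only route supported calls through the wrapped one. `pickClient` below is a hypothetical helper for illustration, not part of the Langfuse SDK:

```javascript
// Hypothetical workaround sketch: observeOpenAI's argument parsing expects an
// options object as the first argument, but the beta assistants/threads
// methods take a string id first, so send those to the untraced client.
// pickClient is an illustrative helper, not part of any SDK.
function pickClient(methodPath, rawClient, tracedClient) {
  // e.g. "beta.threads.messages.create" -> raw,
  //      "chat.completions.create"      -> traced
  return methodPath.startsWith("beta.") ? rawClient : tracedClient;
}

// Usage sketch (assumes `openai = new OpenAI()` and
// `traced = observeOpenAI(openai)` already exist):
// const client = pickClient("beta.threads.messages.create", openai, traced);
// await client.beta.threads.messages.create(thread.id, { role: "assistant", ... });
```

This loses tracing for the beta calls entirely, of course; it only keeps the wrapper from throwing.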
Marc
Marc4mo ago
The Assistants/threads APIs aren't supported, as not many Langfuse users use them.
Marc
Marc4mo ago
Let me know what you think about this and whether you can make this work in JS. Honestly, I would really appreciate a JS example for this that we can add to the docs, if you build this for yourself anyway.
CanaDoug
CanaDoug4mo ago
Ok thanks, at least I'm not missing something. I will take a look at this. Is there a big hurdle to this you are already aware of, or have you just not had time? I can possibly contribute here if I have some time. It seems like we can instrument some things, but we don't have easy access to all the messages, for example, because the thread state is held server-side. I don't see LangSmith having an answer to this either.
Marc
Marc4mo ago
The issue here is that these APIs are stateful, i.e. to trace them you need to add additional requests to the OpenAI API. We could instrument it, but this would result in some magic under the hood that's not clear when using the wrapped SDK.

Overall, the Assistants API is kind of an LLM-app-as-a-service: you just get the responses and would now need to continuously fetch what's happening under the hood. Adoption within the Langfuse user base seems tiny, as most prefer to build this themselves to have more control and less vendor lock-in. It's a great tool, though, to get started quickly. Very similar to LLM application frameworks such as LangChain or LlamaIndex in this regard.

If you have an idea for a meaningful integration that'd be helpful, please let me know. Happy to discuss.
CanaDoug
CanaDoug4mo ago
Yeah, I guess you'd have to save the thread ids and assistant ids, then in the Langfuse app make the calls to fetch the data. It would require an API key be kept in Langfuse, but that's not crazy. I'll put some thought into it. Good to know that it's a very small user base, though.
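A minimal sketch of that idea, assuming the Langfuse JS SDK exposes a `trace()` call that accepts metadata: store the stateful OpenAI identifiers on the trace so the messages can be fetched from the OpenAI API later. `buildTraceMetadata` is a hypothetical helper, not an existing API:

```javascript
// Hedged sketch: persist the stateful identifiers on the trace so a later
// job (or Langfuse itself, given an API key) can fetch the thread contents.
// buildTraceMetadata is a hypothetical helper, not part of any SDK.
function buildTraceMetadata(threadId, assistantId, runId) {
  return {
    openaiThreadId: threadId,
    openaiAssistantId: assistantId,
    openaiRunId: runId,
  };
}

// Usage sketch (assumes the Langfuse JS SDK's trace() call):
// const trace = langfuse.trace({
//   name: "assistant-run",
//   metadata: buildTraceMetadata(thread.id, assistant.id, run.id),
// });
// Later, something server-side could call
// openai.beta.threads.messages.list(metadata.openaiThreadId) to backfill.
```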
Marc
Marc4mo ago
And it's super OpenAI-specific. Alternatively, the wrapped SDK could do this asynchronously. Why do you prefer to use the API? Just curious.
CanaDoug
CanaDoug4mo ago
Ease of implementation. The team I run for this current assistant project is small and not really skilled for it, so this takes a lot of the effort out, especially when using tools and RAG augmentation. Then of course we want to dig into the data. I assume OpenAI will soon enough have their own equivalent to this, but I needed something now.