z.o.rro•3mo ago
Hmm, I could store it in trace metadata, but it's more closely tied to the score (since a score is kind of like user feedback?). I'd prefer to be able to store some additional attributes on the score beyond just the comment: I'd like to capture the ideal answer for a response from my RAG application by asking users to evaluate it. An example payload would look like:
{
  "feedback": "this doesn't seem so great, it should further include so and so",
  "idealAnswer": "this is what the ideal answer should look like"
}
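For reference, a minimal sketch of how this could look from the application side with the Langfuse JS/TS SDK, assuming the proposed metadata field existed on scores (the metadata key and the score name are illustrative, not part of the current API):

```ts
import { Langfuse } from "langfuse";

const langfuse = new Langfuse(); // reads LANGFUSE_* env vars

// Hypothetical: attach the user's correction as structured score metadata.
// `metadata` is the field proposed in this thread, not (yet) part of the API.
langfuse.score({
  traceId: "trace-id-of-the-rag-answer",
  name: "user-feedback",
  value: 0, // e.g. thumbs down
  comment: "this doesn't seem so great, it should further include so and so",
  metadata: {
    idealAnswer: "this is what the ideal answer should look like",
  },
});
```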
Corriusβ€’3mo ago
I think this would be very useful and something that we'll use too
Marcβ€’3mo ago
yep, i think having some form of suggested "correction" of a trace would be awesome
Marcβ€’3mo ago
GitHub: "Add a 'correction' field on the user feedback info" · langfuse · Discussions (feature request for a text field where users can propose a correct answer for the LLM as part of their feedback).
z.o.rroβ€’5w ago
Any updates on this? @Marc
Marcβ€’5w ago
Not yet. Still agree that score metadata usable via the API would be useful. Are you interested in contributing this, by any chance?
z.o.rroβ€’5w ago
Sure, otherwise I'd be forced to hook up a separate db in the application I'm building at work just to store feedback, and personally I don't like the idea of keeping this data separate from the Langfuse traces 😛
Marcβ€’5w ago
That'd be great. Do you want to give this a try and let me know in case you have any questions? Love the idea of not having another db. We're currently working through many improvements, so I might get to score metadata myself soon as well.
z.o.rroβ€’5w ago
I'm kind of an amateur at Next.js, but I'd love to give it a shot 😄 Let me know what all needs to be done to add this feature.
Marcβ€’5w ago
Maybe this is simple enough that it's faster if I add it. You need to:
- add to the db schema (Prisma)
- add to the API schemas (zod)
- add to tests
- add to the OpenAPI spec (managed via Fern in the fern folder and then generated with their CLI, see the contributing file)
- render metadata in the scores tables, similar to the comment; it needs to be returned by the frontend API as well to populate the table

If you think this could be fun, feel free to give it a go. Otherwise it won't take me too long and I can try to add it soonish.
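A minimal sketch of the zod side of this, with hypothetical names (the real schema lives in the langfuse repo; on the Prisma side this would presumably be a nullable Json column on the Score model):

```ts
import { z } from "zod";

// Hypothetical shape of the score create body; field names are illustrative,
// not copied from the langfuse source.
const CreateScoreBody = z.object({
  traceId: z.string(),
  observationId: z.string().nullish(),
  name: z.string(),
  value: z.number(),
  comment: z.string().nullish(),
  // The proposed addition: free-form JSON so callers can attach
  // structured feedback such as { idealAnswer: "..." }.
  metadata: z.record(z.string(), z.unknown()).nullish(),
});

type CreateScoreBody = z.infer<typeof CreateScoreBody>;
```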
z.o.rroβ€’5w ago
give me until this weekend to open a PR
Marcβ€’5w ago
Sounds good, ping me in case you have questions or get stuck. This is for sure a great end-to-end PR to get comfortable in the langfuse repo.
z.o.rroβ€’5w ago
cool, i'll post questions as I tackle it piece by piece leading up to the weekend πŸ™Œ
Marcβ€’5w ago
Sounds good. The contributing file is a good starting point
z.o.rroβ€’4w ago
Revisiting this feature request: initially I thought I'd add the ideal answer as part of scoring. Now I'm wondering if I should instead save the ideal answer as a dataset item, because that way I can have all the improvements that need to be fed back into the RAG available separately in the Langfuse UI. I'd probably save the ideal answer as part of the metadata associated with the dataset item, and to keep track of where it came from I could link to the original trace. Hmm, I'm slightly confused after watching the demo video whether this is the right approach.
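A minimal sketch of that approach with the JS/TS SDK, assuming createDatasetItem accepts the public API's fields (the dataset name and payloads are illustrative; expectedOutput seems like a natural home for the ideal answer, though it could equally go in metadata as described above):

```ts
import { Langfuse } from "langfuse";

const langfuse = new Langfuse(); // reads LANGFUSE_* env vars

// Hypothetical: store the user's correction as a dataset item and keep a
// pointer back to the trace it came from via sourceTraceId.
await langfuse.createDatasetItem({
  datasetName: "rag-corrections", // illustrative dataset name
  input: { question: "the user's original question" },
  expectedOutput: "this is what the ideal answer should look like",
  metadata: {
    feedback: "this doesn't seem so great, it should further include so and so",
  },
  sourceTraceId: "trace-id-of-the-original-rag-answer",
});
```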
Marcβ€’4w ago
I think adding expected responses as dataset items makes sense if you then want to run tests on these or download them. Happy to chat about this in case you’re interested: https://cal.com/marc-kl/15
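For completeness, a rough sketch of what running tests against such a dataset could look like with the JS/TS SDK (myRagApp and the run name are placeholders; assumes getDataset and item.link behave as in the SDK docs):

```ts
import { Langfuse } from "langfuse";

// Placeholder for the actual RAG application under test.
declare function myRagApp(input: unknown): Promise<string>;

const langfuse = new Langfuse();

// Replay each corrected example through the app and link the resulting
// trace to the dataset item so runs can be compared in the Langfuse UI.
const dataset = await langfuse.getDataset("rag-corrections");

for (const item of dataset.items) {
  const trace = langfuse.trace({ name: "rag-corrections-test" });
  const answer = await myRagApp(item.input);
  trace.update({ output: answer });
  await item.link(trace, "test-run-1"); // run name is illustrative
}

await langfuse.flushAsync();
```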