Langfuse
Created by elsatch on 7/26/2024 in #get-support
Null values when using Haystack integration
18 replies
Not done yet! Things I've discovered:
- I was able to create traces when NOT using the Haystack integration. In particular, I managed to trace OpenAI, Ollama, and LiteLLM without problems using Langfuse's OpenAI SDK compatibility. So it looks like the problem lies in the integration.
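For reference, the non-Haystack path that did produce traces looked roughly like this, using Langfuse's OpenAI SDK drop-in (keys, model and prompt here are placeholders, not my exact script):

import os

os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com"
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["OPENAI_API_KEY"] = "sk-proj-..."

# Langfuse's drop-in replacement for the OpenAI client traces each call automatically
from langfuse.openai import OpenAI

client = OpenAI()  # the same pattern worked against Ollama/LiteLLM via their OpenAI-compatible endpoints
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Tell me about Berlin"}],
)
print(response.choices[0].message.content)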
I am done for today. Tried two computers, different OSs, my own code, two cookbook examples, self-hosted Langfuse, and cloud Langfuse, and got a total of zero traces containing info through the Haystack-Langfuse integration.
Second computer, running Windows instead of Linux. Installed the environment from scratch using:
pip install haystack-ai langfuse-haystack langfuse sentence-transformers datasets mwparserfromhell torch==2.3.1
Torch 2.4.0 returns a "version not found" error, but 2.3.1 works. When launching the default script from the Haystack example:
import os

os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com"
os.environ["TOKENIZERS_PARALLELISM"] = "false"
os.environ["HAYSTACK_CONTENT_TRACING_ENABLED"] = "true"
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-4..."
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-3..."
os.environ["OPENAI_API_KEY"] = "sk-proj-N..."

from haystack.components.builders import DynamicChatPromptBuilder
from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.dataclasses import ChatMessage
from haystack import Pipeline

from haystack_integrations.components.connectors.langfuse import LangfuseConnector

if __name__ == "__main__":
    pipe = Pipeline()
    pipe.add_component("tracer", LangfuseConnector("Chat example"))
    pipe.add_component("prompt_builder", DynamicChatPromptBuilder())
    pipe.add_component("llm", OpenAIChatGenerator(model="gpt-4o-mini"))

    pipe.connect("prompt_builder.prompt", "llm.messages")

    messages = [
        ChatMessage.from_system("Always respond in German even if some input data is in other languages."),
        ChatMessage.from_user("Tell me about {{location}}"),
    ]

    response = pipe.run(
        data={"prompt_builder": {"template_variables": {"location": "Berlin"}, "prompt_source": messages}}
    )
    print(response["llm"]["replies"][0])
    print(response["tracer"]["trace_url"])
Advances and checks so far:
- I have tried adding flushing to Langfuse to see if it made any difference. Still returns nulls.
- I have tried switching from LlamaCppChatGenerator to OpenAIChatGenerator, querying local Ollama as an OpenAI-compatible endpoint. Still returns nulls.
- I have tried switching from OpenAIChatGenerator calling Ollama to OpenAIChatGenerator calling GPT-4o Mini at the OpenAI endpoint. Still returns nulls in the traces.
- I have tried switching from local Langfuse to Cloud Langfuse. I am still getting nulls in my traces.
So, at this point I am quite sure there is something wrong in my code 🙂
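For context, the Ollama-as-OpenAI-compatible-endpoint variant was roughly this (a sketch; the port is Ollama's default and the model name is a placeholder, not my exact config):

from haystack.components.generators.chat import OpenAIChatGenerator
from haystack.utils import Secret

# Point Haystack's OpenAI-compatible chat generator at a local Ollama server
llm = OpenAIChatGenerator(
    api_key=Secret.from_token("ollama"),       # Ollama ignores the key, but the generator requires one
    api_base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    model="llama3",
)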
Just to clarify, I am still getting nulls in the output
I have modified my code to add flushing, without any significant difference. For reference, to add manual flushing:
# import low level SDK
from langfuse import Langfuse

# start the client
langfuse = Langfuse()

# your code goes here

# At the end of your program add langfuse.shutdown() or...
langfuse.flush()

# Additional info at: https://langfuse.com/docs/tracing#manual-flushing
I will try flushing then. Copying the reference information from the tracing docs:

If you want to send a batch immediately, you can call the flush method on the client. In case of network issues, flush will log an error and retry the batch; it will never throw an exception.

Decorator:
from langfuse.decorators import langfuse_context
langfuse_context.flush()

Low-level SDK:
langfuse.flush()

If you exit the application, use the shutdown method to make sure all requests are flushed and pending requests are awaited before the process exits. On success of this function, no more events will be sent to the Langfuse API.
langfuse.shutdown()
If anyone has faced this issue or could offer any guidance about how to solve it, I would appreciate it
Outputs are printed to the console properly and traces are recorded on the Langfuse side... but empty.
Code continues:
questions = [
    "What is the value in Ohms of a resistor with the following color codes: red, red, orange?",
    "How many musketeers were there?",
]

for question in questions:
    result = batch_qa_pipeline.run(
        {
            "prompt_builder": {"question": question},
            "llm": {"generation_kwargs": {"max_tokens": 128, "temperature": 0.1}},
            "answer_builder": {"query": question},
        }
    )

    generated_answer = result["answer_builder"]["answers"][0]
    print(generated_answer.data)
This is the code I'm using:
#!/usr/bin/env python3
import os
from datasets import load_dataset

from haystack import Pipeline
from haystack_integrations.components.generators.llama_cpp import LlamaCppChatGenerator

from haystack.components.builders import ChatPromptBuilder
from haystack.components.builders.answer_builder import AnswerBuilder

from haystack.dataclasses import ChatMessage

os.environ["LANGFUSE_HOST"] = "http://LOCAL_NETWORK_IP:3000"
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."

# Note: you must set this variable before importing the LangfuseConnector

os.environ["HAYSTACK_CONTENT_TRACING_ENABLED"] = "true"

from haystack_integrations.components.connectors.langfuse import LangfuseConnector

system_message = ChatMessage.from_system(
    """
    Answer the question as briefly as possible. If the answer is a number, provide the number only.
    """
)

user_message = ChatMessage.from_user("Question: {{question}}")
assistant_message = ChatMessage.from_assistant("Answer: ")

chat_template = [system_message, user_message, assistant_message]

batch_qa_pipeline = Pipeline()

generator = LlamaCppChatGenerator(
    model="./models/llama-3-neural-chat-v1-8b-Q5_K_M.gguf",
    n_ctx=512,
    n_batch=128,
    model_kwargs={"n_gpu_layers": -1},
)

generator.warm_up()

batch_qa_pipeline.add_component("tracer", LangfuseConnector("Batch QA"))
batch_qa_pipeline.add_component(instance=ChatPromptBuilder(template=chat_template), name="prompt_builder")
batch_qa_pipeline.add_component(instance=generator, name="llm")
batch_qa_pipeline.add_component(instance=AnswerBuilder(), name="answer_builder")

batch_qa_pipeline.connect("prompt_builder", "llm")
batch_qa_pipeline.connect("llm", "answer_builder")