gokusan • 14mo ago

Async Python

I have a new issue: tracing makes my asynchronous program hang. I run my code with asyncio.run(main()), where main is an async function that calls an API and then does trace(CreateTrace(...)). Without tracing, asyncio.run(main()) runs without problems. When I add tracing with Langfuse, asyncio.run(main()) hangs forever. Not sure if this is a noob mistake on my part or...
28 Replies
Marc • 14mo ago
Hi Gokusan, thanks for reporting. We're currently making improvements to the Python SDK, which also uses asyncio. Will investigate and report back. Tagging @Max
Max • 14mo ago
Hi gokusan, which SDK version are you using? The latest one is 0.0.72. Do you have the same issue with that one? Does it hang forever when you try to quit the application?
gokusan • 14mo ago
langfuse_client.flush() was what I was missing 😉
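For context on why flush() matters here: SDKs like this typically queue events and send them from a background worker, so a short-lived script can exit before the queue drains unless it flushes explicitly. A stdlib-only sketch of that pattern (class and method names are illustrative, not the actual Langfuse API):

```python
import queue
import threading

class EventBuffer:
    """Minimal stand-in for an SDK that batches events on a background thread."""

    def __init__(self):
        self._queue = queue.Queue()
        self.sent = []  # stands in for "events delivered to the backend"
        worker = threading.Thread(target=self._drain, daemon=True)
        worker.start()

    def _drain(self):
        while True:
            event = self._queue.get()
            self.sent.append(event)   # pretend this is the network call
            self._queue.task_done()

    def trace(self, event):
        self._queue.put(event)        # returns immediately; nothing sent yet

    def flush(self):
        self._queue.join()            # block until the worker has drained the queue

buf = EventBuffer()
buf.trace({"name": "my-trace"})
buf.flush()                           # without this, the script may exit first
print(buf.sent)
```

The daemon worker dies silently when the process exits, which is why events queued but not flushed can be lost in short-lived scripts.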
Max • 14mo ago
Interesting! Do you execute it in a cloud function? If you run a regular e.g. FastAPI server, it should work without flushing 🙂
gokusan • 14mo ago
I executed it locally in my IDE with Langfuse running on Docker
>>> langfuse.__version__
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: module 'langfuse' has no attribute '__version__'
Another little thing: I'm using langfuse==0.0.66
Max • 14mo ago
Thanks! Can you try the latest version, 0.0.72?
gokusan • 14mo ago
This new update completely broke my workflows. It's not tracing anything anymore. It seems to have to do with your refactoring of langfuse.api/.model. Even after refactoring my code and importing from the new module dir, my code is not tracing anything. I had to downgrade back to my previous version, .66, to make it work again.

Not sure if I'm the only one affected, but I'd highly suggest adding proper testing to the library. Your current tests only check that the "tracing code" runs without exceptions. There are 0 assertions/validations on whether that tracing actually made it to the backend. Do you guys plan on making your testing pipeline/repo more robust?

I really like the tool but have been dealing with Langfuse bugs nearly every day since integration instead of focusing on my product. Not sure if anyone else has these issues given the commit frequency in the repo. If your tests are not properly set up, users will always end up finding bugs for you instead of you catching them at the source, making the tool a pain to integrate and actually deploy/use.
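The kind of assertion-based test being asked for can be sketched against a fake in-memory backend (all names here are hypothetical; a real integration test would query the actual backend API for the trace ID):

```python
class FakeBackend:
    """In-memory stand-in for the tracing backend."""

    def __init__(self):
        self.received = {}

    def ingest(self, trace):
        self.received[trace["id"]] = trace

def send_trace(backend, trace):
    """Stand-in for the SDK code path under test."""
    backend.ingest(trace)
    return trace["id"]

def test_trace_reaches_backend():
    backend = FakeBackend()
    trace_id = send_trace(backend, {"id": "t-1", "name": "smoke"})
    # The crucial part: don't just check the call ran without exceptions,
    # assert the trace is actually retrievable on the backend side.
    assert trace_id in backend.received
    assert backend.received[trace_id]["name"] == "smoke"

test_trace_reaches_backend()
```

The point of the sketch is the second half: a test that only exercises the client without verifying delivery will pass even when events are silently dropped.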
Max • 14mo ago
Hi gokusan, thanks for the candid feedback! As you have noticed, we are currently investing in improving the SDK. We plan to add proper tests and also make drastic changes to the API to reduce complexity in the SDKs. I'm very sorry that you ran into these issues. Did you see any exceptions or anything that could tell me what was going on? Thanks!
gokusan • 14mo ago
Happy to provide feedback if it helps the product. No exceptions, just silent failure when running a basic trace in Python. The issue was that the trace never actually got sent to the backend; a simple tracing example should show the issue.
Max • 14mo ago
Are you using the async framework? If yes, the langfuse.trace function needs to be awaited. By async I mean LangfuseAsync(..). Thanks!
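The failure mode being hinted at here is generic to any async client, so a plain-asyncio illustration (not Langfuse code): calling an async method without await returns a coroutine object and silently does nothing.

```python
import asyncio
import inspect

class AsyncClient:
    """Illustrative async client; any async SDK method behaves the same way."""

    async def trace(self, name):
        await asyncio.sleep(0)  # stand-in for the actual network I/O
        return f"traced {name}"

async def main():
    client = AsyncClient()

    not_awaited = client.trace("demo")       # coroutine object, never runs
    print(inspect.iscoroutine(not_awaited))  # True: no trace was sent
    not_awaited.close()                      # silence the "never awaited" warning

    result = await client.trace("demo")      # actually executes
    print(result)                            # traced demo

asyncio.run(main())
```

Python does emit a "coroutine was never awaited" RuntimeWarning at garbage collection, but that is easy to miss, which is why the bug shows up as silent non-delivery.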
gokusan • 14mo ago
Nope, using sync. The same code I wrote in 0.66 doesn't work in 0.72 after refactoring for the new folder hierarchy, so I don't think it's an issue with how I'm calling the trace, unless you changed it, but it didn't seem like it from the commit log.
Max • 14mo ago
No, no changes on that. I will look into this. Thanks for letting us know and sorry for the inconvenience!
gokusan • 14mo ago
No worries, don't be sorry! Just realized I needed to set logging's basic config to INFO to see the Langfuse logs.
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/concurrent/futures/thread.py", line 77, in _worker
work_item.run()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/concurrent/futures/thread.py", line 52, in run
result = self.fn(*self.args, **self.kwargs)
File "/Users/othmanezoheir/venv/llm_jobs/lib/python3.9/site-packages/langfuse/task_manager.py", line 86, in _execute_task
logging.info(f"Task {task.task_id} done with result {result}")
Message: "Task 6c3980e5-e059-46b6-b413-6f297c4b4fc0 done with result id='6c3980e5-e059-46b6-b413-6f297c4b4fc0' trace_id='clkvxf01m0078mm082m9urxg0' type='GENERATION' name='Assistant response' start_time=datetime.datetime(2023, 8, 4, 1, 47, 51, 225000, tzinfo=datetime.timezone.utc) end_time=datetime.datetime(2023, 8, 4, 1, 47, 52, 25000, tzinfo=datetime.timezone.utc) completion_start_time=None model='gpt-3.5-turbo' model_parameters={'n': '1', 'maxTokens': '400', 'total_cost': '5.2000000000000004e-05', 'temperature': '0', 'request_cost': '7.649999999999999e-05'} prompt=None metadata={'id': 'chatcmpl-7jeCteBqPvJU2rEWjX5Gbsw4rLRKx', 'model': 'gpt-3.5-turbo-0613', 'usage': {'total_tokens': 49, 'prompt_tokens': 43, 'completion_tokens': 6}, 'object': 'chat.completion', 'choices': [{'index': 0, 'message': {'role': 'assistant', 'content': 'Hoy es lunes.'}, 'finish_reason': 'stop'}], 'created': 1691113671, 'cost_summary': {'request_cost': 7.649999999999999e-05, 'total_tokens': 49, 'prompt_tokens': 43, 'completion_tokens': 6}} completion=None usage=None level=<ObservationLevelGeneration.DEFAULT: 'DEFAULT'> status_message=None parent_observation_id=None"
Arguments: ()
This is the log when I: initialize the SYNC Langfuse client, send a trace, and send observations (they log to the backend OK), but the program hangs forever after that. I've pushed the code leading to the issue to this branch of a library I'm building for my personal LLM stuff: https://github.com/zozoheir/tinyllm/tree/bug/tracing_hang/tinyllm/examples. openai_chat.py hangs forever; the tracing happens inside OpenAIChat through self.trace. The tracing.py example works fine. I forgot to parametrize the Langfuse API keys but it should be OK. This is on v0.66, since on .72 I couldn't load the modules. Am I using it the wrong way in the OpenAIChat class?
Max • 14mo ago
Yes, these are indeed our logs. I just cloned your repo and tried to get it running. What is your recommended way of getting the example running? First step from my side would be to upgrade to 0.0.75, as I caught some bugs around not closing threads recently. FYI, I also saw that you expose your secret key in the repo. Can you let it run with the INFO logs on 75, or let me know how I can run the repo myself with the upgraded version?
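The hang-at-exit symptom is consistent with the thread-closing bugs mentioned here: a non-daemon worker thread that is never signaled to stop keeps the interpreter alive forever. A stdlib sketch of the mechanism (not Langfuse's actual internals):

```python
import queue
import threading

SHUTDOWN = object()  # sentinel that tells the worker to stop

def worker(q, done):
    while True:
        item = q.get()
        if item is SHUTDOWN:
            break
        done.append(item)  # pretend this ships the event to the backend

q = queue.Queue()
done = []
# daemon=False is what reproduces the hang: at exit, Python joins this
# thread, and without a shutdown signal it blocks on q.get() forever.
t = threading.Thread(target=worker, args=(q, done), daemon=False)
t.start()

q.put("trace-1")
q.put(SHUTDOWN)  # a flush/shutdown method must enqueue this (or use daemon=True)
t.join()
print(done)      # ['trace-1']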
gokusan • 14mo ago
will do later today!