Chain logs together (Sessions)
How to trace through a "session" of LLM calls, enabling you to view the full context of actions taken by your LLM agent and troubleshoot issues.
Under Development
This content is currently under development. Please refer to our V4 documentation for the current docs.
This guide will show you how to trace through “sessions” of Prompt calls, Tool calls and other events in your AI application.
You can see an example below for a simple LLM chain, an Agent and a RAG pipeline.
Tracing a simple LLM chain
Prerequisites
Given a user request, the code does the following:
To set up your local environment to run this script, you will need to have installed Python 3 and the following libraries:
pip install openai google-search-results
Send logs to Humanloop
To send logs to Humanloop, we’ll install and use the Humanloop Python SDK.
Initialize the Humanloop client
Add the following lines to the top of the example file. (Get your API key from your Organisation Settings page)
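As a minimal sketch, the initialization might look like the following; the `Humanloop` client class is assumed from the SDK version current at the time of writing, and the environment-variable name is illustrative:

```python
import os

# Assumes the Humanloop Python SDK's generated client class; adjust the
# import to match the SDK version you have installed.
from humanloop import Humanloop

# API key from your Organisation Settings page, read from the environment.
humanloop = Humanloop(api_key=os.environ["HUMANLOOP_API_KEY"])
```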
Use Humanloop to fetch the moderator response
This automatically sends the logs to Humanloop.
Replace your openai.ChatCompletion.create() call under # Check for abuse with a humanloop.chat() call.
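A sketch of the swapped-in call is shown below. The project name and parameters are illustrative rather than taken from the example script, and the `humanloop` client is assumed to have been initialized earlier:

```python
# Hypothetical replacement for the openai.ChatCompletion.create() call
# under "# Check for abuse". "moderator" is an illustrative project name,
# and `user_request` comes from earlier in the example script.
moderation = humanloop.chat(
    project="moderator",
    messages=[{"role": "user", "content": user_request}],
)
```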
Instead of replacing your model call with humanloop.chat(), you can alternatively add a humanloop.log() call after your model call. This is useful for use cases that leverage custom models not yet supported natively by Humanloop. See our Using your own model guide for more information.
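For instance, a logging-after-the-fact version might look like this sketch; `my_custom_model` is a hypothetical stand-in for your own model call, and the field names are assumptions to check against your SDK version:

```python
# Hypothetical: call your own model first, then log the result to Humanloop.
output = my_custom_model(user_request)   # your own model call (illustrative)

humanloop.log(
    project="moderator",                 # illustrative project name
    inputs={"user_request": user_request},
    output=output,
)
```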
You have now connected your multiple calls to Humanloop, logging them to individual projects. While each one can be inspected individually, we can’t yet view them together to evaluate and improve our pipeline.
Post logs to a session
To view the logs for a single user_request together, we can log them to a session. This requires only a small change: pass the same session id into each of the different calls.
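For example, a client-generated UUID can serve as the shared session id. The exact parameter name to pass it under depends on your SDK version; `session_id` in the comments below is an assumption based on this guide's description:

```python
import uuid

def new_session_id() -> str:
    """One client-side session id per user request, shared by every call."""
    return str(uuid.uuid4())

session_id = new_session_id()

# Pass the SAME id into each call so the logs are grouped into one session,
# e.g. (parameter name is an assumption based on this guide):
# humanloop.chat(project="moderator", messages=messages, session_id=session_id)
# humanloop.chat(project="assistant", messages=messages, session_id=session_id)
```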
Final example script
This is the updated version of the example script above with Humanloop fully integrated. Running this script yields sessions that can be inspected on Humanloop.
Nesting logs within a session [Extension]
A more complicated trace involving nested logs, such as those recording an Agent’s behaviour, can also be logged and viewed in Humanloop.
First, post a log to a session, specifying both session_reference_id and reference_id. Then, pass this reference_id as parent_reference_id in a subsequent log request. This indicates to Humanloop that the second log should be nested under the first.
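The two-step pattern can be sketched as follows. The ids are generated client-side; `humanloop`, `user_request`, `query`, and `search_result` are assumed from earlier steps, and the project names are illustrative:

```python
import uuid

# Client-side ids shared across the related log requests.
session_id = str(uuid.uuid4())
agent_log_id = str(uuid.uuid4())

# Parent log: the agent's top-level action.
humanloop.log(
    project="agent",
    inputs={"user_request": user_request},
    session_reference_id=session_id,
    reference_id=agent_log_id,        # lets later logs refer to this one
)

# Child log: a tool call made by the agent, nested under the parent.
humanloop.log(
    project="search-tool",
    inputs={"query": query},
    output=search_result,
    session_reference_id=session_id,
    parent_reference_id=agent_log_id, # nests this log under the agent's log
)
```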
Deferred output population
In most cases, you don’t know the output for a parent log until all of its children have completed. For instance, the root-level Agent will spin off multiple LLM requests before it can retrieve an output. To support this case, we allow logging without an output. The output can then be updated after the session is complete with a separate humanloop.logs_api.update_by_reference_id(reference_id, output) call.
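Continuing the nested-log sketch above, the back-fill step might look like this; `agent_log_id` is the reference id generated for the parent log, and `final_answer` is an illustrative variable name:

```python
# Once the agent has finished, fill in the parent log's output using the
# reference id generated when the parent log was posted.
humanloop.logs_api.update_by_reference_id(
    reference_id=agent_log_id,
    output=final_answer,
)
```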