How to log generations from any large language model (LLM) to Humanloop

This guide will show you how to capture Logs of your LLM calls in Humanloop.

The easiest way to log LLM generations to Humanloop is to use the Prompt.call() method (see the guide on Calling a Prompt). You only need to supply the Prompt ID and the inputs needed by the prompt template, and the endpoint will handle fetching the latest template, making the LLM call, and logging the result.
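
For reference, a minimal sketch of that approach with the Python SDK might look like the following. The Prompt ID, inputs, and message shown are placeholders, and this assumes the prompts.call method of the v5 SDK:

from humanloop import Humanloop

hl = Humanloop(api_key="<Your Humanloop API Key>")

# Humanloop fetches the latest template, populates it with the inputs,
# calls the provider, and logs the result in a single request.
response = hl.prompts.call(
    id="<Your Prompt ID>",
    inputs={"language": "Python"},
    messages=[{"role": "user", "content": "explain how async works"}],
)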

However, there may be scenarios in which you wish to manage the LLM provider calls directly in your own code rather than relying on Humanloop. For example, you may be using an LLM provider that Humanloop does not directly support, such as a custom self-hosted model, or you may want to avoid adding Humanloop to the critical path of your LLM API calls.

Prerequisites

  • You already have a Prompt — if not, please follow our Prompt creation guide first.

Log data to your Prompt

To log LLM generations to Humanloop, you will need to make a call to the /prompts/log endpoint.

Note that you can either specify the version of the Prompt you are logging against, in which case you will need to take care to supply the correct version ID and inputs, or you can supply the full Prompt configuration, in which case a new version will be created if it has not been seen before.
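
For illustration, here is a sketch of the second option, assuming the path and prompt parameters described in the SDK reference. The model, template, messages, and output shown are placeholders:

from humanloop import Humanloop

hl = Humanloop(api_key="<Your Humanloop API Key>")

# Log against a full Prompt definition rather than an ID; if this exact
# configuration has not been seen before, Humanloop creates a new version.
log = hl.prompts.log(
    path="<Your Prompt path>",
    prompt={
        "model": "gpt-4o",
        "template": [{"role": "system", "content": "You are a helpful assistant."}],
    },
    messages=[{"role": "user", "content": "explain how async works"}],
    output_message={"role": "assistant", "content": "<model output>"},
)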

Step 1: Get your Prompt

Fetch a Prompt from Humanloop by specifying its ID. You can skip this step if your prompts are created dynamically in code.

Here’s how to do this in code:

from humanloop import Humanloop, prompt_utils

PROMPT_ID = "<Your Prompt ID>"

hl = Humanloop(api_key="<Your Humanloop API Key>")

# Fetch the Prompt's current version from Humanloop.
prompt = hl.prompts.get(id=PROMPT_ID)

# Fill the chat template's variables; this returns the list of templated messages.
template = prompt_utils.populate_template(prompt.template, {"language": "Python"})

Step 2: Call your Prompt

This can be your own model or any other LLM provider. Here is an example of calling OpenAI:

import openai

client = openai.OpenAI(api_key="<Your OpenAI API Key>")

# Append the user's message to the templated messages.
messages = template + [{"role": "user", "content": "explain how async works"}]

chat_completion = client.chat.completions.create(
    messages=messages, model=prompt.model, temperature=prompt.temperature
)

Step 3: Log the result

Finally, log the result to your project:

# Parse the output from the OpenAI response.
output_message = chat_completion.choices[0].message

# Log the messages and output against your Prompt.
log = hl.prompts.log(
    id=PROMPT_ID,
    output_message=output_message,
    messages=messages,
)
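
If you also want failed calls and latency captured, you can wrap the provider call and log in both branches. The following is a hedged sketch: the inputs, error, start_time, and end_time parameters are taken from the SDK reference, and the inputs dictionary assumes the template variables used in step 1.

import datetime

start_time = datetime.datetime.now(datetime.timezone.utc)
try:
    chat_completion = client.chat.completions.create(
        messages=messages, model=prompt.model, temperature=prompt.temperature
    )
    hl.prompts.log(
        id=PROMPT_ID,
        inputs={"language": "Python"},  # variables used to populate the template
        messages=messages,
        output_message=chat_completion.choices[0].message,
        start_time=start_time,
        end_time=datetime.datetime.now(datetime.timezone.utc),
    )
except Exception as e:
    # Record the failed call against the same Prompt so it is visible in Humanloop.
    hl.prompts.log(
        id=PROMPT_ID,
        inputs={"language": "Python"},
        messages=messages,
        error=str(e),
        start_time=start_time,
        end_time=datetime.datetime.now(datetime.timezone.utc),
    )
    raise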