Log to a Prompt
How to log generations from any large language model (LLM) to Humanloop
This guide shows you how to capture the Logs of your LLM calls in Humanloop.
The easiest way to log LLM generations to Humanloop is to use the Prompt.call() method (see the guide on Calling a Prompt). You only need to supply the Prompt ID and the inputs needed by the prompt template; the endpoint handles fetching the latest template, making the LLM call, and logging the result.
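For illustration, a minimal sketch of that flow, assuming the Humanloop Python SDK; the Prompt ID and the topic input below are placeholders for your own, and parameter names may differ between SDK versions.

```python
from humanloop import Humanloop

humanloop = Humanloop(api_key="<YOUR_HUMANLOOP_API_KEY>")

# One call: Humanloop fetches the latest template, calls the model
# provider, and logs the generation against the Prompt.
response = humanloop.prompts.call(
    id="pr_...",  # placeholder: your Prompt ID
    inputs={"topic": "quantum computing"},  # values for the template variables
)
```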
However, there may be scenarios in which you want to manage the LLM provider calls directly in your own code rather than relying on Humanloop. For example, you may be using an LLM provider that Humanloop does not directly support, such as a custom self-hosted model, or you may want to keep Humanloop out of the critical path of your LLM API calls.
Prerequisites
- You already have a Prompt — if not, please follow our Prompt creation guide first.
Log data to your Prompt
To log LLM generations to Humanloop, you will need to make a call to the /prompts/log endpoint.
Note that you can either specify the version of the Prompt you are logging against, in which case you will need to take care to supply the correct version ID and inputs, or you can supply the full prompt, in which case a new version will be created if it has not been seen before.
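As a sketch, the two options might look like this with the Python SDK; the IDs, path, model, and template below are illustrative, and exact parameter names may vary across SDK versions.

```python
from humanloop import Humanloop

humanloop = Humanloop(api_key="<YOUR_HUMANLOOP_API_KEY>")

# Option 1: log against a specific existing version of the Prompt.
# Take care that the version ID matches the version you actually used.
humanloop.prompts.log(
    id="pr_...",           # placeholder: your Prompt ID
    version_id="prv_...",  # placeholder: the version you generated with
    inputs={"topic": "quantum computing"},
    output="Quantum computing is...",  # the generation from your own LLM call
)

# Option 2: supply the full prompt definition; a new version is created
# if this combination has not been seen before.
humanloop.prompts.log(
    path="guides/logging-example",  # hypothetical Prompt path
    prompt={
        "model": "gpt-4o",
        "template": [{"role": "user", "content": "Write about {{topic}}."}],
    },
    inputs={"topic": "quantum computing"},
    output="Quantum computing is...",
)
```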
Get your Prompt
Fetch a Prompt from Humanloop by specifying its ID. You can skip this step if your prompts are created dynamically in code.
Here’s how to do this in code:
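A minimal sketch, assuming the Humanloop Python SDK; the Prompt ID is a placeholder for your own.

```python
from humanloop import Humanloop

humanloop = Humanloop(api_key="<YOUR_HUMANLOOP_API_KEY>")

# Fetch the Prompt so its version and template can be reused when logging.
prompt = humanloop.prompts.get(id="pr_...")  # placeholder: your Prompt ID
```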