Logging
Integrating Humanloop and running an experiment when using your own models.
The humanloop.complete() and humanloop.chat() calls encapsulate the LLM provider call (for example openai.Completions.create()), the model-config selection and the logging steps in a single unified interface. There may be scenarios in which you wish to manage the LLM provider calls directly in your own code instead of relying on Humanloop.
For example, you may be using an LLM provider that is not currently supported directly by Humanloop, such as Hugging Face.
To support using your own model provider, we provide the additional humanloop.log() and humanloop.projects.get_active_config() methods in the SDK.
In this guide, we walk through how to use these SDK methods to log data to Humanloop and run experiments.
Prerequisites
- You already have a Prompt — if not, please follow our Prompt creation guide first.
Log data to your project
Set up your code to first get your model config from Humanloop, then call your LLM provider to get a completion (or chat response), and then log this alongside the inputs, config and output:
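A minimal sketch of this flow is shown below, using the legacy openai Python SDK. The exact config fields and humanloop.log() parameter names (project_id, config_id, and the attributes read off the returned config) are assumptions here, so check the SDK reference for the precise signatures:

```python
import openai
from humanloop import Humanloop

humanloop = Humanloop(api_key="<YOUR_HUMANLOOP_API_KEY>")
openai.api_key = "<YOUR_OPENAI_API_KEY>"

PROJECT_ID = "<YOUR_PROJECT_ID>"

# 1. Fetch the currently active model config for your project.
config = humanloop.projects.get_active_config(id=PROJECT_ID).config

# 2. Call the LLM provider yourself, using the retrieved config.
inputs = {"question": "How do I integrate my own models?"}

# Humanloop prompt templates use {{variable}} placeholders; this simple
# substitution is purely illustrative.
prompt = config.prompt_template
for key, value in inputs.items():
    prompt = prompt.replace("{{" + key + "}}", value)

completion = openai.Completion.create(
    model=config.model,
    prompt=prompt,
    temperature=config.temperature,
)
output = completion.choices[0].text

# 3. Log the inputs, config and output back to Humanloop.
log_response = humanloop.log(
    project_id=PROJECT_ID,
    config_id=config.id,
    inputs=inputs,
    output=output,
)

# Keep the returned id so you can attach feedback to this datapoint later.
data_id = log_response.id
```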
The process of capturing feedback then uses the returned data_id as before; see our guide on capturing user feedback.
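For example, assuming the SDK's humanloop.feedback() method described in that guide (the parameter names shown here are indicative):

```python
# Once your user has reacted to the generation, attach their feedback to
# the logged datapoint by referencing its data_id.
humanloop.feedback(
    type="rating",    # the feedback group configured for your project
    value="good",     # the value provided by your user
    data_id=data_id,  # returned by the earlier humanloop.log() call
)
```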
You can also log immediate feedback alongside the inputs and output:
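For instance, a sketch assuming humanloop.log() accepts a feedback field (check the log endpoint reference for the exact shape):

```python
log_response = humanloop.log(
    project_id=PROJECT_ID,
    config_id=config.id,
    inputs=inputs,
    output=output,
    # Attach the feedback in the same request instead of a follow-up call.
    feedback={"type": "rating", "value": "good"},
)
```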
Hugging Face Example
Note that you can also use a similar pattern for non-OpenAI LLM providers. For example, logging results from Hugging Face’s Inference API:
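A minimal sketch of that pattern is below, calling the hosted Inference API with requests and then logging the result. The model id, prompt and the config passed to humanloop.log() are illustrative only; see the log reference for the exact fields:

```python
import requests
from humanloop import Humanloop

humanloop = Humanloop(api_key="<YOUR_HUMANLOOP_API_KEY>")

HF_API_URL = "https://api-inference.huggingface.co/models/gpt2"
HF_HEADERS = {"Authorization": "Bearer <YOUR_HUGGING_FACE_TOKEN>"}

inputs = {"question": "How do I integrate my own models?"}
prompt = f"Answer the following question:\n{inputs['question']}"

# Call the Hugging Face Inference API directly.
response = requests.post(
    HF_API_URL,
    headers=HF_HEADERS,
    json={"inputs": prompt, "parameters": {"temperature": 0.7}},
)
output = response.json()[0]["generated_text"]

# Log the generation to Humanloop, recording which model produced it.
humanloop.log(
    project_id="<YOUR_PROJECT_ID>",
    inputs=inputs,
    output=output,
    config={
        "model": "gpt2",
        "prompt_template": "Answer the following question:\n{{question}}",
        "type": "model",
    },
)
```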