
Logging

Integrating Humanloop and running experiments when using your own models.

The humanloop.complete() and humanloop.chat() calls encapsulate the LLM provider calls (for example, openai.Completions.create()), the model-config selection, and the logging steps in a single unified interface. There may be scenarios where you wish to manage the LLM provider calls directly in your own code instead of relying on Humanloop.

For example, you may be using an LLM provider that is not currently supported directly by Humanloop, such as Hugging Face.

To support using your own model provider, we provide additional humanloop.log() and humanloop.projects.get_active_config() methods in the SDK.

In this guide, we walk through how to use these SDK methods to log data to Humanloop and run experiments.

Prerequisites

  • You already have a Prompt — if not, please follow our Prompt creation guide first.

Log data to your project

Set up your code to first retrieve your model config from Humanloop, then call your LLM provider to get a completion (or chat response), and finally log this alongside the inputs, config, and output:

from humanloop import Humanloop
import openai

# Initialize Humanloop with your API key
humanloop = Humanloop(api_key="<YOUR Humanloop API KEY>")

project_id = "<YOUR PROJECT ID>"

# Fetch the active model config for your project from Humanloop.
config = humanloop.projects.get_active_config(id=project_id).config

client = openai.OpenAI(
    # defaults to os.environ.get("OPENAI_API_KEY")
    api_key="<YOUR OPENAI API KEY>",
)

messages = [
    {
        "role": "user",
        "content": "Say this is a test",
    }
]

# Call the OpenAI API directly, using the parameters from the active config.
chat_completion = client.chat.completions.create(
    messages=messages,
    model=config.model,
    temperature=config.temperature,
)

# Parse the output from the OpenAI response.
output = chat_completion.choices[0].message.content

# Log the inputs, outputs and config to your project.
log_response = humanloop.log(
    project_id=project_id,
    messages=messages,
    output=output,
    config_id=config.id,
)

# Use this ID to associate feedback received later to this datapoint.
data_id = log_response.id

The process of capturing feedback then uses the returned data_id as before.

See our guide on capturing user feedback.
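For instance, recording a rating against the datapoint might look like the following. This is a minimal sketch assuming the SDK's feedback method accepts the feedback type, value, and the target data_id; refer to the guide above for the exact signature:

humanloop.feedback(
    type="rating",
    value="good",
    # The ID returned by humanloop.log() above.
    data_id=data_id,
)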

You can also log immediate feedback alongside the input and outputs:

# Log the inputs, outputs and model config to your project,
# along with feedback on the output.
log_response = humanloop.log(
    project_id=project_id,
    messages=messages,
    output=output,
    config_id=config.id,
    feedback={"type": "rating", "value": "good"},
)
Hugging Face Example

Note that you can also use a similar pattern for non-OpenAI LLM providers. For example, logging results from Hugging Face’s Inference API:

import requests
from humanloop import Humanloop

# Initialize the SDK with your Humanloop API key
humanloop = Humanloop(api_key="<YOUR Humanloop API KEY>")

project_id = "<YOUR PROJECT ID>"

# Make a generation using the Hugging Face Inference API.
response = requests.post(
    "https://api-inference.huggingface.co/models/gpt2",
    headers={"Authorization": "Bearer <YOUR HUGGING FACE API TOKEN>"},
    json={
        "inputs": "Answer the following question like Paul Graham from YCombinator:\n"
        "How should I think about competition for my startup?",
        "parameters": {
            "temperature": 0.2,
            # Otherwise, Hugging Face will return the prompt as part of the generation.
            "return_full_text": False,
        },
    },
).json()

# Parse the output from the Hugging Face response.
output = response[0]["generated_text"]

# Log the inputs, outputs and model config to your project.
log_response = humanloop.log(
    project_id=project_id,
    inputs={"question": "How should I think about competition for my startup?"},
    output=output,
    model_config={
        "model": "gpt2",
        "prompt_template": "Answer the following question like Paul Graham from YCombinator:\n{{question}}",
        "temperature": 0.2,
    },
)
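Note that because you are calling the provider yourself, populating the prompt_template with your inputs before sending the request is also your responsibility. A minimal sketch of that substitution follows; render is a hypothetical helper shown for illustration, not part of the Humanloop SDK:

def render(template: str, inputs: dict) -> str:
    # Naive {{variable}} substitution, for illustration only.
    for key, value in inputs.items():
        template = template.replace("{{" + key + "}}", value)
    return template

prompt = render(
    "Answer the following question like Paul Graham from YCombinator:\n{{question}}",
    {"question": "How should I think about competition for my startup?"},
)
# `prompt` is the string passed as the "inputs" field of the Hugging Face request above.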