Run experiments managing your own model

How to set up an experiment on Humanloop using your own model.

Experiments can be used to compare different prompt templates, different parameter combinations (such as temperature and presence penalty), and even different base models.

This guide focuses on the case where you wish to manage your own model provider calls.

Prerequisites

  • You already have a Prompt — if not, please follow our Prompt creation guide first.
  • You have integrated humanloop.complete_deployed() or the humanloop.chat_deployed() endpoints, along with the humanloop.feedback() with the API or Python SDK.
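
For reference, a minimal sketch of that baseline integration, assuming the Python SDK used throughout this guide; the project name, question, and response shape here are illustrative:

from humanloop import Humanloop

humanloop = Humanloop(api_key="<YOUR Humanloop API KEY>")

# Generate using whatever model config is currently deployed to your Prompt.
response = humanloop.complete_deployed(
    project="<YOUR PROJECT NAME>",
    inputs={"question": "How should I think about competition for my startup?"},
)
output = response.data[0].output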

This guide assumes you're using an OpenAI model. The same pattern applies if you use another provider or your own model: sample a model config from your experiment, make your provider call, and log the result with the trial ID.

Support for other model providers on Humanloop is coming soon.

Create an experiment

Log to your experiment

In order to log data for your experiment without using humanloop.complete_deployed() or humanloop.chat_deployed(), you must first determine which model config to use for your LLM provider calls. This is where the humanloop.projects.get_active_config() function comes in.

1. Go to your Prompt dashboard.

2. Set the experiment as the active deployment. To do so, find the default environment in the Deployments bar, open its dropdown menu, and select Change deployment. In the dialog that opens, select the experiment you created.

3. Copy your project_id from the URL, https://app.humanloop.com/projects/<project_id>/dashboard. The project ID starts with pr_.

4. Alter your existing logging code to first sample a model_config from your experiment to use when making your call to OpenAI:

from humanloop import Humanloop
import openai

# Initialize the SDKs with your API keys.
humanloop = Humanloop(api_key="<YOUR Humanloop API KEY>")
openai.api_key = "<YOUR OpenAI API KEY>"

# The project ID copied from your dashboard URL in step 3.
project_id = "<YOUR PROJECT ID>"

# Sample a model_config from your experiment.
model_config_response = humanloop.projects.get_active_config(id=project_id)
model_config = model_config_response.config

# Make a generation using OpenAI with the parameters from the sampled model_config.
response = openai.Completion.create(
    prompt="Answer the following question like Paul Graham from YCombinator:\n"
    "How should I think about competition for my startup?",
    model=model_config["model"],
    temperature=model_config["temperature"],
)

# Parse the output from the OpenAI response.
output = response.choices[0].text

# Log the inputs and output to the experiment trial associated with the sampled model_config.
log_response = humanloop.log(
    project_id=project_id,
    inputs={"question": "How should I think about competition for my startup?"},
    output=output,
    trial_id=model_config["trial_id"],
)

# Use this ID to associate feedback received later to this log.
data_id = log_response.id
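
When feedback later arrives for this generation, you can tie it back to the experiment through that ID. A short sketch using the humanloop.feedback() call from the prerequisites (the rating type and value are illustrative):

# Record user feedback against the logged datapoint.
humanloop.feedback(
    type="rating",
    value="good",
    data_id=data_id,
)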

You can also run multiple experiments within a single project. In this case, first navigate to the Experiments tab of your project and select your Experiment card. Then retrieve your experiment_id from the experiment summary.

Retrieve your model config from that specific experiment by calling humanloop.experiments.sample(experiment_id=experiment_id) instead of humanloop.projects.get_active_config().
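
Putting that together, a short sketch of the multi-experiment flow; the experiment_id value and the shape of the sample response (a config plus a trial ID) are assumptions mirroring the single-experiment example above:

# The experiment ID copied from the experiment summary.
experiment_id = "<YOUR EXPERIMENT ID>"

# Sample a model_config from this specific experiment.
sample_response = humanloop.experiments.sample(experiment_id=experiment_id)
model_config = sample_response.config  # assumed response shape

# Call OpenAI and humanloop.log() exactly as before, passing
# trial_id=model_config["trial_id"] so the log is attributed to this trial.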