Fine-tune a model

This guide demonstrates how to use Humanloop’s fine-tuning workflow to produce improved models from your user feedback data.

Paid Feature

This feature is not available on the Free tier. Please contact us if you wish to learn more about our Enterprise plan.

Prerequisites

  • You already have a Prompt — if not, please follow our Prompt creation guide first.
  • You have integrated the humanloop.complete_deployed() or humanloop.chat_deployed() endpoints, along with humanloop.feedback(), using the API or Python SDK (see the sketch below).
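
If you haven’t set this up yet, the flow looks roughly like the minimal sketch below. The project name, input fields, and the exact shape of the returned response are illustrative assumptions and may differ between SDK versions; see the Prompt creation guide for the canonical calls.

```python
from humanloop import Humanloop

humanloop = Humanloop(api_key="YOUR_HUMANLOOP_API_KEY")

# Call the model config deployed to your project; Humanloop logs the request and output.
response = humanloop.complete_deployed(
    project="YOUR_PROJECT_NAME",  # placeholder project name
    inputs={"question": "How do I reset my password?"},  # illustrative inputs
)

# Each logged generation has an ID that feedback can be attached to.
# (The response parsing below is an assumption; adjust to your SDK version.)
data_id = response.body["data"][0]["id"]

# Record your end user's reaction against that log.
humanloop.feedback(type="rating", value="good", data_id=data_id)
```

The feedback you record here is what you will filter on later when selecting logs to fine-tune on.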

A common question is: how much data do I need to fine-tune effectively? Here we can reference OpenAI’s guidelines:

The more training examples you have, the better. We recommend having at least a couple hundred examples. In general, we’ve found that each doubling of the dataset size leads to a linear increase in model quality.

Fine-tuning

The first part of fine-tuning is to select the data you wish to fine-tune on.

1. Go to your Humanloop project and navigate to the Logs tab.

2. Create a filter using the + Filter button above the logs table to select the logs you would like to fine-tune on.

For example, all the logs that have received a positive upvote in the feedback captured from your end users.

3. Click the Actions button, then click the New fine-tuned model button to set up the fine-tuning process.

4. Enter the appropriate parameters for the fine-tuned model:

  1. Enter a Model name. This will be used as the suffix parameter in OpenAI’s fine-tune interface (see the sketch after this list). For example, a suffix of “custom-model-name” would produce a model name like ada:ft-your-org:custom-model-name-2022-02-15-04-21-04.
  2. Choose the Base model to fine-tune. This can be ada, babbage, curie, or davinci.
  3. Select a Validation split percentage. This is the proportion of data that will be used for validation. Metrics will be periodically calculated against the validation data during training.
  4. Enter a Data snapshot name. Humanloop associates a data snapshot with every fine-tuned model instance so it is easy to keep track of what data was used (you can see your existing data snapshots on the Settings/Data snapshots page).
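
For reference, the Model name and Base model chosen above roughly correspond to the suffix and model parameters of OpenAI’s legacy fine-tunes endpoint. Humanloop submits the job on your behalf, so you don’t make this call yourself; the sketch below, with illustrative file IDs, is only meant to show where these values end up.

```python
import openai

openai.api_key = "YOUR_OPENAI_API_KEY"

# Legacy fine-tunes API (pre-1.0 openai-python), shown only to illustrate the mapping.
openai.FineTune.create(
    training_file="file-abc123",    # illustrative ID for the training portion of the data snapshot
    validation_file="file-def456",  # illustrative ID for the validation split
    model="curie",                  # the Base model chosen in the UI
    suffix="custom-model-name",     # the Model name entered in the UI
)
```
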
5. Click Create.

The fine-tuning process runs asynchronously and may take up to a couple of hours to complete depending on your data snapshot size.

6. Navigate to the Fine-tuning tab to see the progress of the fine-tuning process.

Coming soon: notifications for when your fine-tuning jobs have completed.

7. When the Status of the fine-tuned model is marked as Successful, the model is ready to use.

🎉 You can now use this fine-tuned model in a Prompt and evaluate its performance.
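
Because your integration calls the deployed Prompt rather than a hard-coded model, pointing the Prompt’s deployed model config at the fine-tuned model generally requires no code changes. A minimal sketch, reusing the placeholder names from the prerequisites and an assumed response shape:

```python
from humanloop import Humanloop

humanloop = Humanloop(api_key="YOUR_HUMANLOOP_API_KEY")

# The deployed model config now points at the fine-tuned model,
# so this is the exact call you were already making.
response = humanloop.complete_deployed(
    project="YOUR_PROJECT_NAME",  # placeholder project name
    inputs={"question": "How do I reset my password?"},
)

# Field names are an assumption; adjust to your SDK version.
print(response.body["data"][0]["output"])
```

Keep capturing feedback on these new logs; they become the candidates for your next round of fine-tuning.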