Manage multiple reviewers

How to split a review between multiple subject-matter experts so they can evaluate model outputs quickly and effectively.

Who is this for: This guide is for large teams that want to leverage their internal subject-matter experts (SMEs) to evaluate model outputs.

Prerequisites

  • You have set up Evaluators. If not, follow our guide to create a Human Evaluator.
  • You have multiple subject-matter experts (SMEs) available to evaluate model outputs.

Divide work between SMEs

When you have a large dataset to evaluate, it’s helpful to split the work between your SMEs to ensure that the evaluation is completed quickly and effectively.

1

Split the Dataset into chunks

Each Dataset consists of datapoints. You can add an identifier to each datapoint to group them into chunks.

For example, we created a dataset with 100 common customer support questions. In the CSV file, we added an identifier called “chunk” to each datapoint, splitting the whole dataset into 10 equal parts.
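If you are preparing the CSV programmatically, a small script can assign the chunk identifier before upload. The sketch below uses Python's standard csv module; the file names and the "question" column are illustrative assumptions, not part of the example dataset.

```python
import csv

NUM_CHUNKS = 10

# Read the original datapoints and give each one a "chunk" identifier,
# splitting the dataset into NUM_CHUNKS roughly equal parts.
with open("support_questions.csv", newline="") as src:
    rows = list(csv.DictReader(src))

chunk_size = -(-len(rows) // NUM_CHUNKS)  # ceiling division

for i, row in enumerate(rows):
    row["chunk"] = str(i // chunk_size + 1)

with open("support_questions_chunked.csv", "w", newline="") as dst:
    writer = csv.DictWriter(dst, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
```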

To upload this CSV on Humanloop, create a new Dataset file, then click on the “Upload CSV” button.

Upload CSV as dataset to Humanloop.

Alternatively, you can upload the Dataset via our SDK, as sketched below.
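This is a minimal sketch of uploading datapoints with the Humanloop Python SDK's `datasets.upsert` method; the dataset path, input fields, and chunk values are assumptions for illustration, and exact parameters may vary with your SDK version.

```python
from humanloop import Humanloop

client = Humanloop(api_key="YOUR_API_KEY")

# Upsert a Dataset, attaching the "chunk" identifier to each datapoint's inputs
# so reviewers can later filter by it. Path and fields are illustrative.
client.datasets.upsert(
    path="Support/Customer Support Questions",
    datapoints=[
        {"inputs": {"question": "How do I reset my password?", "chunk": "1"}},
        {"inputs": {"question": "Where can I find my invoices?", "chunk": "2"}},
    ],
)
```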

2

Run an Evaluation

Navigate to a Prompt you want to evaluate and create a new Evaluation Run.

Run Evals with Dataset on Humanloop.
Example of running an Evaluation on a Prompt.
3

Split the workload between SMEs

To split the workload between your SMEs, navigate to the Review tab, turn on Focus mode, and click on the Filters button. Filter the dataset by identifiers, such as “chunk”, to split the review work into smaller pieces.

4

Send the URL to your SMEs

After you have filtered the dataset, copy the URL and send it to the SME responsible for that chunk. When they open the link, they will only see the relevant chunk of the dataset.

Focus mode on.
The view the SME will see when they open the link. It shows only the chunk of the dataset that is relevant to them.
5

Monitor progress

As the SMEs provide judgments on the outputs, we display the overall progress and the number of outstanding judgments. When the final judgment is given, the Evaluation is marked as complete.

Improve the Prompt

With judgments from your SMEs, you can now better understand the model’s performance and iterate on your Prompt to improve the model outputs.

Completed evaluations.
Completed Evaluation with judgments from SMEs.

Next steps