Manage multiple reviewers
How to split a review between multiple subject-matter experts so you can evaluate model outputs quickly and effectively.
Who is this for: This guide is for large teams that want to leverage their internal subject-matter experts (SMEs) to evaluate model outputs.
Prerequisites
- You have set up Evaluators. If not, follow our guide to create a Human Evaluator.
- You have multiple subject-matter experts (SMEs) available to evaluate model outputs.
Divide work between SMEs
When you have a large dataset to evaluate, it’s helpful to split the work between your SMEs to ensure that the evaluation is completed quickly and effectively.
Split the Dataset into chunks
Each Dataset consists of datapoints. You can add an identifier to each datapoint to group them into chunks.
For example, we created a Dataset with 100 common customer support questions. In the CSV file, we added an identifier called “chunk” to each datapoint, splitting the whole Dataset into 10 equal parts of 10 questions each.
To upload this CSV on Humanloop, create a new Dataset file, then click on the “Upload CSV” button.
Alternatively, you can upload the Dataset via our SDK.
Split the workload between SMEs
To split the workload between your SMEs, navigate to the Review tab, turn on Focus mode, and click on the Filters button. Filter the datapoints by an identifier such as “chunk” to split the review work into smaller pieces; each SME can then work through their assigned chunk independently.
Improve the Prompt
With judgments from your SMEs, you can now better understand the model’s performance and iterate on your Prompt to improve its outputs.
Next steps
- To troubleshoot your Prompts, see our guide on Compare and Debug Prompts.
- Explore Human Evaluators.