Self-hosted evaluations
In this guide, we'll show how to run an evaluation in your own infrastructure and post the results to Humanloop.
For some use cases, you may wish to run your evaluation process outside of Humanloop, rather than using the evaluators that run in the Humanloop runtime.
For example, you may have implemented an evaluator that uses your own custom model or which has to interact with multiple systems. In these cases, you can continue to leverage the datasets you have curated on Humanloop, as well as consolidate all of the results alongside the prompts you maintain in Humanloop.
Below, we walk through setting up a simple script that runs such a self-hosted evaluation using our Python SDK.
Prerequisites
- You need to have access to evaluations
- You also need to have a Prompt – if not, please follow our Prompt creation guide.
- You need to have a dataset in your project. See our dataset creation guide if you don’t yet have one.
- You need to have a model config that you’re trying to evaluate - create one in the Editor.
Setting up the script
Install the latest version of the Humanloop Python SDK:
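For example, using pip:

```shell
pip install --upgrade humanloop
```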
In a new Python script, import the Humanloop SDK and create an instance of the client:
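As a sketch (the `Humanloop` client class follows the Python SDK of this era; the `HUMANLOOP_API_KEY` environment variable is a placeholder of our choosing, and you should check the exact constructor against the SDK reference):

```python
import os


def make_client():
    """Construct a Humanloop client from an API key held in the environment."""
    # Assumption: the SDK exposes a `Humanloop` client class constructed
    # from an `api_key` argument.
    from humanloop import Humanloop

    return Humanloop(api_key=os.environ["HUMANLOOP_API_KEY"])
```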
Retrieve the ID of the Humanloop project you are working in. You can find this in the Humanloop app.
Retrieve the dataset you’re going to use for evaluation from the project
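A sketch of this step, assuming the SDK exposes `datasets.get` and `datasets.list_datapoints` (method names are our assumption and should be checked against the SDK reference; `PROJECT_ID` and `DATASET_ID` stand in for the IDs you copied from the app):

```python
def get_dataset(humanloop, dataset_id: str):
    """Fetch the evaluation dataset and its datapoints from Humanloop."""
    # Assumption: datasets are fetched by ID, and their datapoints are
    # listed via a separate endpoint.
    dataset = humanloop.datasets.get(id=dataset_id)
    datapoints = humanloop.datasets.list_datapoints(dataset_id=dataset_id)
    return dataset, datapoints
```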
Create an external evaluator
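A hedged sketch of registering an evaluator whose logic runs in your own infrastructure. The `type="external"` flag and the other argument names here are assumptions based on the evaluator options of this SDK era, not a confirmed signature; the evaluator name and description are illustrative:

```python
def create_external_evaluator(humanloop, project_id: str):
    """Register an evaluator whose logic will run outside Humanloop."""
    # Assumption: `type="external"` marks the evaluator as self-hosted, so
    # Humanloop expects results to be posted rather than computed in its runtime.
    return humanloop.evaluators.create(
        name="Simple output check",
        description="Checks that the model output is non-empty.",
        type="external",
        return_type="boolean",
    )
```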
Retrieve the model config you’re evaluating
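For example (assuming a `model_configs.get` method, which you should verify against the SDK reference; `CONFIG_ID` is the ID of the model config you created in the Editor):

```python
def get_model_config(humanloop, config_id: str):
    """Fetch the model config under evaluation by its ID."""
    # Assumption: model configs are retrievable by ID via `model_configs.get`.
    return humanloop.model_configs.get(id=config_id)
```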
Initiate an evaluation run in Humanloop
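A sketch of creating the run. The `hl_generated=False` flag, which we take to mean that logs will be supplied by your own infrastructure rather than generated in the Humanloop runtime, and the other parameter names are assumptions to check against the SDK reference:

```python
def create_evaluation(humanloop, project_id, config_id, dataset_id, evaluator_id):
    """Start an evaluation run whose logs and results this script will supply."""
    # Assumption: `hl_generated=False` tells Humanloop not to generate
    # logs itself, since we will post them from our own infrastructure.
    return humanloop.evaluations.create(
        project_id=project_id,
        config_id=config_id,
        dataset_id=dataset_id,
        evaluator_ids=[evaluator_id],
        hl_generated=False,
    )
```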
After this step, you’ll see a new run in the Humanloop app, under the Evaluations tab of your project. It should have status running.
Iterate through the datapoints in your dataset and use the model config to generate logs from them
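A sketch of the generation loop. Here `generate_fn` is a placeholder for your own inference call (for example, a request to your self-hosted model); the datapoint fields (`inputs`, `id`), the `evaluations.log` method, and the shape of the returned log are all assumptions to verify against the SDK reference:

```python
def generate_logs(humanloop, evaluation, datapoints, generate_fn):
    """Run your own model on each datapoint and post the resulting logs."""
    logs = []
    for datapoint in datapoints:
        # `generate_fn` is your own inference call, e.g. your self-hosted model.
        output = generate_fn(datapoint["inputs"])
        # Assumption: logs are posted against the run via `evaluations.log`,
        # linking each log back to its source datapoint.
        log = humanloop.evaluations.log(
            evaluation_id=evaluation.id,
            log={
                "inputs": datapoint["inputs"],
                "output": output,
                "source_datapoint_id": datapoint["id"],
            },
        )
        logs.append(log)
    return logs
```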
Evaluate the logs using your own evaluation logic and post the results back to Humanloop
In this example, we use an extremely simple evaluation function for clarity.
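For instance, an evaluator that simply passes any non-empty output, with the results posted back per log. The `evaluate_output` function is the trivial evaluation logic; the `evaluations.result` call and its parameters are assumptions to check against the SDK reference:

```python
def evaluate_output(output: str) -> bool:
    """Extremely simple evaluator: pass if the output is non-empty."""
    return len(output.strip()) > 0


def post_results(humanloop, evaluation, evaluator_id, logs):
    """Run the evaluator over each log and post the result to Humanloop."""
    for log in logs:
        # Assumption: results are posted per log via `evaluations.result`.
        humanloop.evaluations.result(
            evaluation_id=evaluation.id,
            log_id=log.id,
            evaluator_id=evaluator_id,
            result=evaluate_output(log.output),
        )
```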
Mark the evaluation run as completed
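A final sketch, assuming the run's status is updated via an `evaluations.update_status` method (again, an assumption to confirm against the SDK reference):

```python
def complete_evaluation(humanloop, evaluation_id: str):
    """Mark the evaluation run as finished so the app stops showing `running`."""
    # Assumption: run status is updated via `evaluations.update_status`.
    humanloop.evaluations.update_status(id=evaluation_id, status="completed")
```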
Review the results
After running this script with the appropriate resource IDs (project, dataset, model config), you should see the results in the Humanloop app, right alongside any other evaluations you have performed using the Humanloop runtime.
![Evaluation results in the Humanloop app](https://fdr-prod-docs-files-public.s3.amazonaws.com/https://humanloop.docs.buildwithfern.com/docs/2024-07-19T19:46:40.702Z/assets/images/f5e8663-image.png)