
Evaluating externally generated logs

In this guide, we'll demonstrate an evaluation run workflow where logs are generated outside the Humanloop environment and posted via API.

If you are running your own infrastructure to generate logs, you can still leverage the Humanloop evaluations suite via our API. The workflow looks like this:

  1. Trigger creation of an evaluation run
  2. Loop through the datapoints in your dataset and perform generations on your side
  3. Post the generated logs to the evaluation run

This works with any evaluator. If you have configured a Humanloop-runtime evaluator, it will automatically run on each log you post to the evaluation run; alternatively, you can use self-hosted evaluators and post the results to the evaluation run yourself (see Self-hosted evaluations).


Prerequisites

  • You need to have access to evaluations
  • You also need to have a project created - if not, please first follow our project creation guides.
  • You need to have a dataset in your project. See our dataset creation guide if you don’t yet have one.
  • You need to have a model config that you’re trying to evaluate - create one in the Editor.

Setting up the script

Install the latest version of the Humanloop Python SDK

$ pip install humanloop

In a new Python script, import the Humanloop SDK and create an instance of the client

from humanloop import Humanloop

humanloop = Humanloop(
    api_key=YOUR_API_KEY,  # Replace with your Humanloop API key
)

Retrieve the ID of the Humanloop project you are working in

You can find this in the Humanloop app.

PROJECT_ID = ...  # Replace with the project ID

Retrieve the dataset you’re going to use for evaluation from the project

# Retrieve a dataset
DATASET_ID = ...  # Replace with the dataset ID you are using for evaluation.
                  # This must be a dataset in the project you are working in.
datapoints = humanloop.datasets.list_datapoints(dataset_id=DATASET_ID).records

Set up the model config you are evaluating

If you constructed this in Humanloop, retrieve by calling:

config = humanloop.model_configs.get(id=CONFIG_ID)

Alternatively, if your model config lives outside the Humanloop system, you can post it to Humanloop with the register model config endpoint.

Either way, you need the ID of the config.


In the Humanloop app, create an evaluator

For this guide, we’ll simply create a Valid JSON checker.

  1. Visit the Evaluations tab, and select Evaluators
  2. Click + New Evaluator and choose Code from the options.
  3. Select the Valid JSON preset on the left.
  4. Choose the mode Offline in the settings panel on the left.
  5. Click Create.
  6. Copy your new evaluator’s ID from the address bar. It starts with evfn_.
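Conceptually, the Valid JSON preset boils down to a check like the one below. This is a local sketch for intuition, not the exact preset code; the `valid_json` function and the shape of the `log` dict are illustrative assumptions.

```python
import json


def valid_json(log: dict) -> bool:
    """Return True if the log's output string parses as JSON (illustrative sketch)."""
    try:
        json.loads(log["output"])
        return True
    except (json.JSONDecodeError, TypeError):
        return False


# A plain string is not valid JSON, but a serialized object is:
valid_json({"output": "Hello World!"})        # → False
valid_json({"output": '{"greeting": "hi"}'})  # → True
```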

Create an evaluation run with hl_generated set to False

This tells the Humanloop runtime that it should not trigger evaluations itself, but wait for them to be posted via the API.

evaluation_run = humanloop.evaluations.create(
    project_id=PROJECT_ID,
    config_id=CONFIG_ID,
    dataset_id=DATASET_ID,
    evaluator_ids=[EVALUATOR_ID],
    hl_generated=False,
)

By default, the status of the evaluation after creation is pending. Before sending the generation logs, set the status to running.

humanloop.evaluations.update_status(id=evaluation_run.id, status="running")

Iterate through the datapoints in the dataset, produce a generation for each, and post it to the evaluation run

for datapoint in datapoints:
    # Use the datapoint to produce a log with the model config you are testing.
    # This will depend on whatever model-calling setup you are using on your side.
    # For simplicity, we log a hardcoded output here.
    log = {
        "project_id": PROJECT_ID,
        "config_id": CONFIG_ID,
        "messages": [*config.chat_template, *datapoint.messages],
        "output": "Hello World!",
    }
    print(f"Logging generation for datapoint {datapoint.id}")
    humanloop.evaluations.log(
        evaluation_id=evaluation_run.id,
        log=log,
    )

Run the full script above.

If everything goes well, you should now have posted a new evaluation run to Humanloop, and logged all the generations derived from the underlying datapoints.

The Humanloop evaluation runtime will now iterate through those logs and run the Valid JSON evaluator on each of them. To check progress:

Visit your project in the Humanloop app and go to the Evaluations tab.

You should see the run you recently created; click through to it and you’ll see rows in the table showing the generations.

In this case, all the evaluations returned False because the string “Hello World!” wasn’t valid JSON. Try logging something which is valid JSON to check that everything works as expected.
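One quick way to verify this locally before re-running: the hardcoded string fails to parse, while serializing a dict with `json.dumps` guarantees a valid JSON output. (This snippet is a local illustration; the payload contents are arbitrary.)

```python
import json

# The original hardcoded output fails the check...
try:
    json.loads("Hello World!")
    is_valid = True
except json.JSONDecodeError:
    is_valid = False
# is_valid is now False

# ...while a serialized payload passes.
output = json.dumps({"greeting": "Hello World!"})
json.loads(output)  # parses without raising
```

Swapping the `"output"` value in your log dict for a string like `output` above should flip the Valid JSON evaluator's results to True.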

Full Script

For reference, here’s the full script you can use to get started quickly.

from humanloop import Humanloop

API_KEY = ...  # Replace with your Humanloop API key
PROJECT_ID = ...  # Replace with the project ID
DATASET_ID = ...  # Replace with the dataset ID
CONFIG_ID = ...  # Replace with the model config ID
EVALUATOR_ID = ...  # Replace with the evaluator ID

humanloop = Humanloop(
    api_key=API_KEY,
)

# Retrieve the datapoints in the dataset.
datapoints = humanloop.datasets.list_datapoints(dataset_id=DATASET_ID).records

# Retrieve the model config.
config = humanloop.model_configs.get(id=CONFIG_ID)

# Create the evaluation run.
evaluation_run = humanloop.evaluations.create(
    project_id=PROJECT_ID,
    config_id=CONFIG_ID,
    dataset_id=DATASET_ID,
    evaluator_ids=[EVALUATOR_ID],
    hl_generated=False,
)
print(f"Started evaluation run {evaluation_run.id}")

# Set the status of the run to running.
humanloop.evaluations.update_status(id=evaluation_run.id, status="running")

# Iterate the datapoints and log a generation for each one.
for i, datapoint in enumerate(datapoints):
    # Produce the log somehow. This is up to you and your external setup!
    log = {
        "project_id": PROJECT_ID,
        "config_id": CONFIG_ID,
        "messages": [*config.chat_template, *datapoint.messages],
        "output": "Hello World!",  # Hardcoded example for demonstration.
    }
    print(f"Logging generation for datapoint {i + 1}/{len(datapoints)}")
    humanloop.evaluations.log(
        evaluation_id=evaluation_run.id,
        log=log,
    )

print(f"Completed evaluation run {evaluation_run.id}")

It’s also good practice to wrap the above code in a try/except block and to mark the evaluation run as failed (using update_status) if an exception causes something to fail.
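That pattern can be sketched as follows. To keep the example runnable offline, it uses a stand-in `StubEvaluations` object in place of `humanloop.evaluations`; the `"completed"` and `"failed"` status strings are assumptions about accepted status values, so check them against the API reference before relying on them.

```python
class StubEvaluations:
    """Stand-in for humanloop.evaluations so this sketch runs offline."""

    def __init__(self):
        self.statuses = []

    def update_status(self, id, status):
        # Record the status transition instead of calling the API.
        self.statuses.append((id, status))


def run_evaluation(evaluations, run_id, datapoints, log_fn):
    """Run generations for each datapoint, marking the run failed on error."""
    evaluations.update_status(id=run_id, status="running")
    try:
        for datapoint in datapoints:
            log_fn(datapoint)
        evaluations.update_status(id=run_id, status="completed")
    except Exception:
        # Mark the run as failed so it doesn't sit in "running" forever.
        evaluations.update_status(id=run_id, status="failed")
        raise
```

With the real SDK, you would pass `humanloop.evaluations` and your actual generation-and-log function in place of the stub and `log_fn`.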