Capture user feedback

Record end-user feedback using Humanloop; monitor how your model generations perform with your users.

Prerequisites

  • You already have a Prompt — if not, please follow our Prompt creation guide first.
  • You have created a Human Evaluator. For this guide, we will use the “rating” example Evaluator automatically created for your organization.

Configure feedback

To collect user feedback, connect a Human Evaluator to your Prompt. The Evaluator specifies the type of feedback you want to collect. See our guide on creating Human Evaluators for more information.

You can use the example “rating” Evaluator that is automatically created for you. This Evaluator allows users to apply a label of “good” or “bad”, and is automatically connected to all new Prompts. If you choose to use this Evaluator, you can skip to the “Log feedback” section.

1. Open the Prompt’s monitoring dialog

Go to your Prompt’s dashboard. Click Monitoring in the top right to open the monitoring dialog.

Prompt dashboard showing Monitoring dialog

2. Connect your Evaluator

Click Connect Evaluators and select the Human Evaluator you created.

Dialog connecting the "Tweet Issues" Evaluator as a Monitoring Evaluator

You should now see the selected Human Evaluator attached to the Prompt in the Monitoring dialog.

Monitoring dialog showing the "Tweet Issues" Evaluator attached to the Prompt

Log feedback

With the Human Evaluator attached to your Prompt, you can record feedback against the Prompt’s Logs.

1. Retrieve the Log ID

The ID of the Prompt Log can be found in the response of the humanloop.prompts.call(...) method.

```python
import os

from humanloop import Humanloop

# Instantiate the client (assumes HUMANLOOP_API_KEY is set in your environment)
humanloop = Humanloop(api_key=os.environ["HUMANLOOP_API_KEY"])

log = humanloop.prompts.call(
    version_id="prv_qNeXZp9P6T7kdnMIBHIOV",
    path="persona",
    messages=[{"role": "user", "content": "What really happened at Roswell?"}],
    inputs={"person": "Trump"},
)
log_id = log.id
```
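
In a real application, feedback usually arrives some time after the generation, so you need to keep the Log ID around until the user responds. Below is a minimal sketch of one way to do this; the in-memory `session_store`, `session_id`, and `generate_reply` are hypothetical names for illustration, not part of the Humanloop SDK.

```python
# Hypothetical in-memory store mapping your session IDs to Humanloop Log IDs
session_store: dict[str, str] = {}

def generate_reply(session_id: str, user_message: str):
    """Call the Prompt and remember which Log produced the reply."""
    log = humanloop.prompts.call(
        path="persona",
        messages=[{"role": "user", "content": user_message}],
    )
    # Store the Log ID so later feedback from this session can reference it
    session_store[session_id] = log.id
    return log
```

In production you would typically persist this mapping in your database, or return the Log ID to your frontend alongside the generation.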

2. Log the feedback

Call humanloop.evaluators.log(...), passing the Log ID from the previous step as parent_id, to record user feedback.

```python
feedback = humanloop.evaluators.log(
    # Pass the `log_id` from the previous step to indicate the Log to record feedback against
    parent_id=log_id,
    # Here, we're recording feedback against a "Tweet Issues" Human Evaluator,
    # which is of type `multi_select` and has multiple options to choose from.
    path="Feedback Demo/Tweet Issues",
    judgment=["Inappropriate", "Too many emojis"],
)
```
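
If you are using the example “rating” Evaluator instead, the judgment is a single label rather than a list, since “rating” is a select-type Evaluator with “good” and “bad” options. A minimal sketch, assuming the Evaluator is available at the path "rating" (check the actual path in your workspace):

```python
feedback = humanloop.evaluators.log(
    parent_id=log_id,
    path="rating",  # assumed path for the example Evaluator; verify in your workspace
    judgment="good",  # a single "good" or "bad" label
)
```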

View feedback

You can view the applied feedback in two main ways: through the Logs that the feedback was applied to, and through the Evaluator itself.

Feedback applied to Logs

The feedback recorded for each Log can be viewed in the Logs table of your Prompt.

Logs table showing feedback applied to Logs

Your internal users can also apply feedback to the Logs directly through the Humanloop app.

Log drawer showing feedback section

Feedback for an Evaluator

You can view all feedback recorded for a specific Human Evaluator in the Logs tab of the Evaluator. This displays all feedback recorded for the Evaluator across all Files.

Logs table for "Tweet Issues" Evaluator showing feedback
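
You can also retrieve Logs programmatically if you want to export feedback or build your own reporting. The sketch below assumes the SDK exposes a logs.list(...) endpoint with a file_id filter; "pr_1234567890" is a hypothetical Prompt file ID.

```python
# Fetch recent Logs for a Prompt; feedback recorded via Monitoring Evaluators
# is attached to each Log. `logs.list` and `file_id` are assumptions here.
for log in humanloop.logs.list(file_id="pr_1234567890", size=10):
    print(log.id)
```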

Next steps