Capture user feedback
Record end-user feedback using Humanloop; monitor how your model generations perform with your users.
Prerequisites
- You already have a Prompt — if not, please follow our Prompt creation guide first.
- You have created a Human Evaluator. For this guide, we will use the “rating” example Evaluator automatically created for your organization.
Configure feedback
To collect user feedback, connect a Human Evaluator to your Prompt. The Evaluator defines the type of feedback you want to collect. See our guide on creating Human Evaluators for more information.
You can use the example “rating” Evaluator that is automatically created for you. This Evaluator allows users to apply a label of “good” or “bad”, and is automatically connected to all new Prompts. If you choose to use this Evaluator, you can skip to the “Log feedback” section.
Otherwise, open your Prompt, open the Monitoring dialog, and select the Human Evaluator you want to attach. You should now see the selected Human Evaluator attached to the Prompt in the Monitoring dialog.
Log feedback
With the Human Evaluator attached to your Prompt, you can record feedback against the Prompt’s Logs.
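For example, with the Humanloop Python SDK you can record a user's judgment by logging against the Evaluator, passing the ID of the Log the feedback applies to. The sketch below assumes you already have a Prompt Log ID (returned when the generation was created) and that you are using the example “rating” Evaluator; the API key, Log ID, and Evaluator path are placeholders to replace with your own values.

```python
from humanloop import Humanloop

humanloop = Humanloop(api_key="YOUR_API_KEY")

# ID of the Prompt Log to attach feedback to. This is returned when the
# generation is created, e.g. by humanloop.prompts.call(...).
log_id = "your_prompt_log_id"

# Record the end user's judgment against that Log. `path` points at the
# Human Evaluator collecting the feedback; here we assume the example
# "rating" Evaluator lives at the path "rating" in your workspace.
humanloop.evaluators.log(
    parent_id=log_id,
    path="rating",
    judgment="good",
)
```

The judgment must match the options configured on the Evaluator; the example “rating” Evaluator accepts “good” or “bad”.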
View feedback
You can view the applied feedback in two main ways: through the Logs that the feedback was applied to, and through the Evaluator itself.
Feedback applied to Logs
The feedback recorded for each Log can be viewed in the Logs table of your Prompt.
Your internal users can also apply feedback to the Logs directly through the Humanloop app.
Feedback for an Evaluator
You can view all feedback recorded for a specific Human Evaluator in the Logs tab of the Evaluator. This displays all feedback recorded with that Evaluator across all of the Files it is used with.
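If you would rather pull this feedback programmatically, the minimal sketch below assumes the SDK exposes the List Logs endpoint as `logs.list` and that you have copied the Evaluator's file ID from the app; check the API reference for the exact method and parameters in your SDK version.

```python
from humanloop import Humanloop

humanloop = Humanloop(api_key="YOUR_API_KEY")

# List the judgment Logs recorded for the Human Evaluator. The file ID
# below is a placeholder; copy the real ID from the Evaluator in the app.
for log in humanloop.logs.list(file_id="your_evaluator_file_id", size=50):
    # Each Evaluator Log carries the recorded judgment and the parent Log
    # it was applied to (field names may vary by SDK version).
    print(log.id, getattr(log, "judgment", None))
```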
Next steps
- Create and customize your own Human Evaluators to capture the feedback you need.
- Human Evaluators can also be used in Evaluations, allowing you to collect judgments from your subject-matter experts.