
Capture user feedback

You can record feedback on generations from your users using the Humanloop Python SDK. This allows you to monitor how your generations perform with your users.

This guide shows how to use the Humanloop SDK to record user feedback on datapoints. This works equivalently for both the completion and chat APIs.


Prerequisites

  • You already have a Prompt. If not, please follow our Prompt creation guide first.
  • You have already integrated humanloop.chat() or humanloop.complete() to log generations with the Python or TypeScript SDKs. If not, follow our guide to integrating the SDK.

Record feedback with the datapoint ID

  1. Extract the data ID from the humanloop.complete_deployed() response.

     complete_response = humanloop.complete_deployed(
         project="<YOUR UNIQUE PROJECT NAME>",
         inputs={"question": "How should I think about competition for my startup?"},
     )
     data_id = complete_response.data[0].id
  2. Call humanloop.feedback() referencing the saved datapoint ID to record user feedback.
     You can also include the source of the feedback when recording it.

     # You can capture a single piece of feedback
     humanloop.feedback(data_id=data_id, type="rating", value="good")

     # And you can associate the feedback to a specific user
     humanloop.feedback(data_id=data_id, type="rating", value="good", user="user_123456")

The feedback recorded for each datapoint can be viewed in the Logs tab of your project.

Different use cases and user interfaces may require different kinds of feedback that need to be mapped to the appropriate end-user interaction. There are broadly three important kinds of feedback:

  1. Explicit feedback: these are purposeful actions to review the generations. For example, ‘thumbs up/down’ button presses.
  2. Implicit feedback: indirect actions taken by your users may signal whether the generation was good or bad, for example, whether the user ‘copied’ the generation, ‘saved it’ or ‘dismissed it’ (which is negative feedback).
  3. Free-form feedback: Corrections and explanations provided by the end-user on the generation.
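To illustrate how these interaction types can be mapped to feedback records, here is a minimal sketch. The helper and the event names are hypothetical (they are not part of the Humanloop SDK); only the data_id/type/value fields mirror the payloads used with humanloop.feedback() in this guide, and using an "action" type for implicit signals is an assumption.

```python
from typing import Optional

# Hypothetical helper: translate UI interactions into feedback payloads
# for a logged datapoint. Event names here are illustrative only.
def feedback_from_event(event: str, data_id: str, text: Optional[str] = None) -> dict:
    if event == "thumbs_up":    # explicit feedback
        return {"data_id": data_id, "type": "rating", "value": "good"}
    if event == "thumbs_down":  # explicit feedback
        return {"data_id": data_id, "type": "rating", "value": "bad"}
    if event == "copied":       # implicit positive signal ("action" type is an assumption)
        return {"data_id": data_id, "type": "action", "value": "copied"}
    if event == "dismissed":    # implicit negative signal
        return {"data_id": data_id, "type": "action", "value": "dismissed"}
    if event == "corrected":    # free-form feedback from the end user
        return {"data_id": data_id, "type": "correction", "value": text}
    raise ValueError(f"Unhandled event: {event}")
```

Each payload could then be forwarded individually, e.g. humanloop.feedback(**payload), or batched into an array of feedback for the datapoint.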

Recording corrections as feedback

It can also be useful to allow your users to correct the outputs of your model. This is a strong feedback signal and can also be considered ground truth data for fine-tuning later.

# You can capture text-based feedback to record corrections
humanloop.feedback(data_id=data_id, type="correction", value="A user provided completion...")

# And also include this as part of an array of feedback for a logged datapoint
humanloop.feedback(
    [
        {"data_id": data_id, "type": "rating", "value": "bad"},
        {"data_id": data_id, "type": "correction", "value": "A user provided summary..."},
    ]
)

This feedback will also show up within Humanloop, where your internal users can also provide feedback and corrections on logged data to help with evaluation.