Capture user feedback
Collect feedback from your users to improve your AI product.
In this tutorial, we’ll show how you can gather valuable insights from your users to evaluate and improve your AI product.
We’ll deploy a simple chat app that allows users to interact with an AI model. Later, we’ll modify the source code to capture user feedback and show how these insights are used to improve the AI product.
Prerequisites
You can grab the source code used in this tutorial here: hl-chatgpt-clone-typescript
Use the chat app
Open the chat app in your browser and start chatting with the AI model.
Every time the user presses the Send button, Humanloop receives the request and calls the AI model. The response from the model is then stored as a Log.
Let’s check the api/chat/route.ts file to see how it works.
- The path parameter is the path to the Prompt in the Humanloop workspace. If the Prompt doesn’t exist, it will be created.
- The prompt parameter is the configuration of the Prompt. In this case we manage our Prompt in code; if the configuration changes, a new version of the Prompt is automatically created on Humanloop. Prompts can alternatively be managed directly in the Humanloop app.
- The messages parameter is the list of all messages exchanged between the model and the user.
To learn more about calling Prompts with the Humanloop SDK, see the Prompt Call API reference.
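For orientation, here is a minimal sketch of what such a route handler might look like. It assumes the Humanloop TypeScript SDK and a HUMANLOOP_API_KEY environment variable; the model and system template are placeholders, not the repository’s exact configuration.

```typescript
// Sketch of a Next.js route handler calling a Prompt (not the repository's exact code).
import { HumanloopClient } from "humanloop";
import { NextResponse } from "next/server";

const humanloop = new HumanloopClient({ apiKey: process.env.HUMANLOOP_API_KEY! });

// Path of the Prompt in the Humanloop workspace.
const PROMPT_HUMANLOOP_PATH = "chatgpt-clone-tutorial/customer-support-agent";

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Calls the Prompt: creates it (or a new version of it) on Humanloop if
  // needed, invokes the model, and stores the response as a Log.
  const response = await humanloop.prompts.call({
    path: PROMPT_HUMANLOOP_PATH,
    prompt: {
      // Placeholder configuration; the tutorial's actual model and template
      // live in api/chat/route.ts.
      model: "gpt-4o",
      template: [{ role: "system", content: "You are a helpful customer support agent." }],
    },
    messages,
  });

  // The response includes the created Log, whose ID the client keeps so
  // user feedback can be attached to it later.
  return NextResponse.json(response);
}
```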
Review the logs in Humanloop
After chatting with the AI model, go to the Humanloop app and review the logs.
Click on the chatgpt-clone-tutorial/customer-support-agent Prompt, then click on the Logs tab at the top of the page. You’ll see that all interactions with the AI model are logged here.
The code will generate a new Prompt called chatgpt-clone-tutorial/customer-support-agent in the Humanloop app. To change the path, modify the PROMPT_HUMANLOOP_PATH variable in the api/chat/route.ts file.
Modify the code to capture user feedback
Now, let’s modify the code to start getting user feedback!
Go back to the code editor and uncomment lines 174-193 in the page.tsx file.
This snippet adds 👍 and 👎 buttons that users can press to give feedback on the model’s responses.
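For illustration, a button press might call the feedback route along these lines. The /api/feedback endpoint follows Next.js routing for api/feedback/route.ts, but the request body shape here is an assumption rather than the repository’s exact code.

```typescript
// Hypothetical click handler in page.tsx (the real code is in the
// uncommented snippet). parentId is the Log ID stored for the model response.
async function sendFeedback(parentId: string, judgment: "good" | "bad") {
  await fetch("/api/feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ parentId, judgment }), // assumed body shape
  });
}
```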
To understand how the feedback is captured and sent to Humanloop, let’s check the api/feedback/route.ts file.
We use the Humanloop TypeScript SDK to make calls to Humanloop. To attach user feedback, we only need three parameters:
- parentId is the ID of the Log to which we want to attach feedback. The page.tsx file stores the Log IDs of all model responses.
- path is the path to the Evaluator. In this example, we’re using an example ‘rating’ Evaluator.
- judgment is the user feedback.
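Putting those three parameters together, the route handler might look roughly like this. It assumes the SDK’s evaluators.log method and an illustrative Evaluator path; it is a sketch, not the repository’s exact code.

```typescript
// Sketch of app/api/feedback/route.ts attaching a user judgment to a Log.
import { HumanloopClient } from "humanloop";
import { NextResponse } from "next/server";

const humanloop = new HumanloopClient({ apiKey: process.env.HUMANLOOP_API_KEY! });

export async function POST(req: Request) {
  const { parentId, judgment } = await req.json();

  // Attach the user's judgment to the model-response Log.
  await humanloop.evaluators.log({
    parentId,        // ID of the Log the feedback refers to
    path: "rating",  // path to the 'rating' Evaluator (illustrative path)
    judgment,        // e.g. "good" or "bad"
  });

  return NextResponse.json({ success: true });
}
```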
Use the logs to improve your AI product
After you collect enough data, you can leverage the user feedback to improve your AI product.
Navigate back to the Logs view and filter all Logs that have a ‘bad’ rating to review the model’s responses that need improvement.
Click on a Log, then click the Editor -> button in the top right corner to open the Prompt Editor. In the Prompt Editor, you can change the instructions and the model’s parameters to improve the model’s performance.
Once you’re happy with the changes, deploy the new version of the Prompt.
When users start interacting with the new version, compare the “good” to “bad” ratio to see if the changes have improved your users’ experience.
Next steps
Now that you’ve successfully captured user feedback, you can explore more ways to improve your AI product:
- If you found that your Prompt doesn’t perform well, see our guide on Comparing and Debugging Prompts.
- Leverage Code, AI and Human Evaluators to continuously monitor and improve your AI product.