Create your first GPT-4 App

In this tutorial, you’ll use GPT-4 and Humanloop to quickly create a GPT-4 chat app that explains topics in the style of different experts.

At the end of this tutorial, you’ll have created your first GPT-4 app. You’ll also have learned how to:

  1. Create a Prompt
  2. Use the Humanloop SDK to call OpenAI’s GPT-4 and log your results
  3. Capture feedback from your end users to evaluate and improve your model

This tutorial picks up where the Quick Start left off. If you’ve already followed the Quick Start, you can skip to step 4 below.

Create the Prompt

Get Started

Create a Prompt File

When you first open Humanloop you’ll see your File navigation on the left. Click ‘+ New’ and create a Prompt.

In the sidebar, rename this file to “Comedian Bot” now or later.

Create the Prompt template in the Editor

The left hand side of the screen defines your Prompt – the parameters such as model, temperature and template. The right hand side is a single chat session with this Prompt.

Click the “+ Message” button within the chat template to add a system message.

Add the following templated message to the chat template.

You are a funny comedian. Write a joke about {{topic}}.

This message forms the chat template. It has an input slot called topic (surrounded by two curly brackets) for an input value that is provided each time you call this Prompt.
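Conceptually, the templating is simple substitution: each {{ variable }} slot is replaced with the value you supply at call time. Here is a minimal sketch in Python (illustrative only; Humanloop performs this substitution for you when you call the Prompt):

```python
import re

def render_template(template: str, inputs: dict) -> str:
    """Replace each {{ name }} slot with the corresponding input value."""
    def substitute(match: re.Match) -> str:
        name = match.group(1).strip()
        return str(inputs[name])
    return re.sub(r"\{\{(.*?)\}\}", substitute, template)

prompt = render_template(
    "You are a funny comedian. Write a joke about {{topic}}.",
    {"topic": "jogging"},
)
# prompt == "You are a funny comedian. Write a joke about jogging."
```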

On the right hand side of the page, you’ll now see a box in the Inputs section for topic.

  1. Add a value for topic, e.g. music, jogging, whatever
  2. Click Run in the bottom right of the page

This will call OpenAI’s model and return the assistant response. Feel free to try other values; the model is very funny.

You now have a first version of your prompt that you can use.

Commit your first version of this Prompt

  1. Click the Commit button
  2. Put “initial version” in the commit message field
  3. Click Commit

View the logs

Under the Prompt File, click ‘Logs’ to view all the generations from this Prompt.

Click on a row to see the details of what version of the prompt generated it. From here you can give feedback to that generation, see performance metrics, open up this example in the Editor, or add this log to a dataset.

Call the Prompt in an app

Now that you’ve found a good prompt and settings, you’re ready to build the “Learn anything from anyone” app! We’ve written some code to get you started — follow the instructions below to download the code and run the app.

When you run the app, this is what you should see.


If you don’t have Python 3 installed, install it from here. Then download the code by cloning this repository in your terminal:

Python Tutorial

git clone git@github.com:humanloop/humanloop-tutorial-python.git

If you prefer not to use git, you can alternatively download the code using this zip file.

In your terminal, navigate into the project directory and make a copy of the example environment variables file.

cd humanloop-tutorial-python
cp .example.env .env

Copy your Humanloop API key and set it as HUMANLOOP_API_KEY in your newly created .env file. Copy your OpenAI API key and set it as the OPENAI_API_KEY.
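A .env file is just KEY=VALUE lines that get loaded into the process environment before the app starts (many Flask projects use python-dotenv for this). The sketch below shows the idea with a throwaway file; the keys mirror the tutorial, but the parser is a simplified stand-in, not the library’s implementation:

```python
import os
import tempfile

def load_env(path: str) -> None:
    """Parse simple KEY=VALUE lines from a .env file into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and anything that isn't KEY=VALUE
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()

# Demonstrate with a throwaway file standing in for your real .env
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("HUMANLOOP_API_KEY=hl_example\nOPENAI_API_KEY=sk_example\n")

load_env(f.name)
```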

Run the app

Run the following commands in your terminal in the project directory to install the dependencies and run the app.

python -m venv venv
. venv/bin/activate
pip install -r requirements.txt
flask run

Open http://localhost:5000 in your browser and you should see the app. If you type in the name of an expert, e.g. “Aristotle”, and a topic that they’re famous for, e.g. “ethics”, the app will try to generate an explanation in their style.

Press the thumbs-up or thumbs-down buttons to register your feedback on whether the generation is any good.

Try a few more questions. Perhaps change the name of the expert and keep the topic fixed.

View the data on Humanloop

Now that you have a working app you can use Humanloop to measure and improve performance. Go back to the Humanloop app and go to your project named “learn-anything”.

On the Models dashboard you’ll be able to see how many data points have flowed through the app as well as how much feedback you’ve received. Click on your model in the table at the bottom of the page.

Click View data in the top right. Here you should be able to see each of your generations as well as the feedback that’s been logged against them. You can also add your own internal feedback by clicking on a datapoint in the table and using the feedback buttons.

Understand the code

Open up the file app.py in the “humanloop-tutorial-python” folder. There are a few key code snippets that will let you understand how the app works.

Between lines 30 and 41 you’ll see the following code.

expert = request.form["Expert"]
topic = request.form["Topic"]

# humanloop.complete_deployed automatically logs the data to your project.
complete_response = humanloop.complete_deployed(
    project="learn-anything",
    inputs={"expert": expert, "topic": topic},
    provider_api_keys={"openai": OPENAI_API_KEY},
)

data_id = complete_response.data[0].id
result = complete_response.data[0].output

On line 34 you can see the call to humanloop.complete_deployed, which takes the project name and project inputs as arguments. humanloop.complete_deployed calls GPT-4 and also automatically logs your data to the Humanloop app.

In addition to returning the result of your model on line 39, you also get back a data_id which can be used for recording feedback about your generations.

On line 51 of app.py, you can see an example of logging feedback to Humanloop.

# Send feedback to Humanloop
humanloop.feedback(type="rating", value="good", data_id=data_id)

The call to humanloop.feedback uses the data_id returned above to associate a piece of positive feedback with that generation.

In this app there are two feedback groups: rating (which can be good or bad) and actions (here, the copy button), which also indicates positive feedback from the user.
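To make the two groups concrete, here is a hypothetical helper (not part of the tutorial code) that maps each UI event to the feedback payload the app would send; the action event name and its "copied" value are assumptions for illustration:

```python
def feedback_payload(event: str, data_id: str) -> dict:
    """Map a UI event to a Humanloop feedback payload.

    'rating' can be good or bad; 'action' stands in for the copy
    button, which also counts as positive feedback. The action
    value 'copied' is an illustrative assumption.
    """
    payloads = {
        "thumbs_up": {"type": "rating", "value": "good"},
        "thumbs_down": {"type": "rating", "value": "bad"},
        "copy": {"type": "action", "value": "copied"},
    }
    payload = dict(payloads[event])
    payload["data_id"] = data_id
    return payload

# feedback_payload("thumbs_up", "abc123")
# → {"type": "rating", "value": "good", "data_id": "abc123"}
```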

Add a new model config

If you experiment a bit, you might find that the model isn’t initially that good. The answers are often too short or not in the style of the expert being asked. We can try to improve this by experimenting with other prompts.

  1. Click on your model on the model dashboard and then in the top right, click Editor

  2. Edit the prompt template to try and improve the prompt. Try changing the maximum number of tokens using the Max tokens slider, or the wording of the prompt.

Here are some prompt ideas to try out. Which ones work better?

Transcript from lecture

{{ expert }} recently gave a lecture on {{ topic }}. Here is a transcript of the most interesting section:

ELI10

If {{ expert }} explained {{ topic }} to a 10 year old, they would likely say:

Essay

Write an essay in the style of {{ expert }} on {{ topic }}
  3. Click Save to add the new model config to your project. Add it to the “learn-anything” project.

  4. Go to your project dashboard. At the top left of the page, click the menu of the “production” environment card, then click the Change deployment button and set a new model config as active; calls to humanloop.complete_deployed will now use this new model config. Now go back to the app and see the effect!


And that’s it! You should now have a full understanding of how to go from creating a Prompt in Humanloop to a deployed and functioning app. You’ve learned how to create prompt templates, capture user feedback, and deploy new model configs.

If you want to learn how to improve your model by running experiments or fine-tuning, check out our guides below.