A guide on how to call your Prompts that are managed on Humanloop.

This guide shows you how to call your Prompts through the API, generating responses from the large language model using the Prompt's versioned template and parameters. If you want to call an LLM with a prompt that you're defining in code, follow the guide on Calling an LLM through the Humanloop Proxy.

Call an existing Prompt

Prerequisites

Before you can use the new prompt.call() method, you need to have a Prompt. If you don’t have one, please follow our Prompt creation guide first.

1

Get the Prompt ID

In Humanloop, navigate to the Prompt and copy the Prompt ID by clicking on the ID in the top right corner of the screen.

2

Use the SDK to call your model

Now you can use the SDK to generate completions and log the results to your Prompt using the new prompt.call() method:
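Here is a minimal sketch of that call using the Humanloop Python SDK. The API key, the Prompt ID (`pr_...`), and the exact response shape are placeholders and assumptions; check the SDK reference for the current signature.

```python
def build_messages(user_text: str) -> list[dict]:
    # Chat-formatted payload: a list of role/content message dicts.
    return [{"role": "user", "content": user_text}]


def call_prompt(api_key: str, prompt_id: str, user_text: str):
    """Sketch: call an existing Prompt by its ID.

    The client surface (Humanloop(...).prompts.call) is based on the
    Humanloop SDK; parameter names may differ between SDK versions.
    """
    from humanloop import Humanloop  # requires `pip install humanloop`

    client = Humanloop(api_key=api_key)
    return client.prompts.call(
        id=prompt_id,  # the Prompt ID copied from the Humanloop UI
        messages=build_messages(user_text),
    )
```

Calling `call_prompt("YOUR_API_KEY", "pr_...", "Tell me a joke.")` would both generate a completion and record a Log against the Prompt, per the flow this guide describes.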

Call the LLM with a prompt that you’re defining in code

🎉 Now that you have chat messages flowing through your Prompt, you can start to log your end-user feedback to evaluate and improve your models.