A guide to calling large language model providers (OpenAI, Anthropic, Google, and others) through the Humanloop API

This guide walks you through how to call various models through the Humanloop API. This is the same as calling a Prompt, but instead of using a version of the Prompt defined in Humanloop, you set the template and parameters directly in code.

The benefits of using the Humanloop proxy are:

  • a consistent interface across AI providers: OpenAI, Anthropic, Google, and more – see the full list of supported models
  • automatic logging of all your requests
  • automatic versioning of your Prompts, so you can track performance over time
  • centralized API key management across multiple providers (you can also supply keys at runtime)

In this guide, we’ll cover how to call LLMs using the Humanloop proxy.

Call the LLM with a prompt that you’re defining in code

Prerequisites

1. Use the SDK to call your model

Now you can use the SDK to generate completions and log the results to your Prompt using the prompt.call() method:

🎉 Now that you have chat messages flowing through your Prompt, you can start logging your end users' feedback to evaluate and improve your models.