Logging from Vercel AI SDK
Instrument your Vercel AI SDK project with Humanloop.
The Vercel AI SDK is the TypeScript toolkit designed to help developers build AI-powered applications with React, Next.js, Vue, Svelte, Node.js, and more.
The AI SDK supports OpenTelemetry tracing, and you can collect your traces in Humanloop with a few simple steps.
This guide extends Vercel AI SDK’s Node.js example, adding Humanloop logging to a chat agent with tool calling.
Prerequisites
Account setup
Create a Humanloop Account
If you haven’t already, create an account or log in to Humanloop.
Add an OpenAI API Key
If you’re the first person in your organization, you’ll need to add an API key to a model provider.
- Go to OpenAI and grab an API key.
- In Humanloop Organization Settings, set up OpenAI as a model provider.
Using the Prompt Editor will use your OpenAI credits in the same way that the OpenAI playground does. Keep your API keys for Humanloop and the model providers private.
Install dependencies
To follow this guide, you’ll also need Node.js 18+ installed on your machine.
Install `humanloop`, `ai`, and `@ai-sdk/openai` (the AI SDK’s OpenAI provider), along with other necessary dependencies.
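A single install command along these lines should cover it; the packages beyond the three named above (`zod`, `dotenv`, and the OpenTelemetry packages) are assumptions reflecting what the sketches later in this guide use, so adjust them to your setup:

```bash
npm install humanloop ai @ai-sdk/openai zod dotenv \
  @opentelemetry/sdk-node @opentelemetry/exporter-trace-otlp-http
```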
Configure API keys
Add a `.env` file to your project with your Humanloop and OpenAI API keys.
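For example (the variable names below are the conventional ones; `OPENAI_API_KEY` is what the AI SDK’s OpenAI provider reads by default, and `HUMANLOOP_API_KEY` is a name we assume your code uses):

```
OPENAI_API_KEY=<YOUR_OPENAI_API_KEY>
HUMANLOOP_API_KEY=<YOUR_HUMANLOOP_API_KEY>
```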
Create the chat agent
We start with a simple chat agent that talks back to the user and uses a tool to get the weather in a given city. The full code is available at the bottom of this page.
The agent can call tools to get the weather in a given city, convert the temperature to Fahrenheit, and exit the conversation.
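For orientation, a stripped-down version of such an agent might look like the sketch below. It is written against AI SDK v4-style APIs (`maxSteps`, `parameters`; newer versions rename some of these), and the model choice, tool implementations, and exit handling are illustrative stand-ins for the full example at the bottom of this page.

```ts
import 'dotenv/config';
import * as readline from 'node:readline/promises';
import { openai } from '@ai-sdk/openai';
import { generateText, tool, type CoreMessage } from 'ai';
import { z } from 'zod';

const messages: CoreMessage[] = [];

async function main() {
  const terminal = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
  });

  while (true) {
    const userInput = await terminal.question('You: ');
    // The full example also exposes an exit tool to the model; here we simply
    // stop when the user types "exit".
    if (userInput.trim() === 'exit') break;

    messages.push({ role: 'user', content: userInput });

    const { text, response } = await generateText({
      model: openai('gpt-4o'), // illustrative model choice
      system: 'You are a friendly assistant that can report the weather.',
      messages,
      maxSteps: 5, // let the model call tools and then produce a final answer
      tools: {
        getWeather: tool({
          description: 'Get the current temperature in a city, in Celsius.',
          parameters: z.object({ city: z.string() }),
          // Stub implementation for the sketch; the real agent would call a weather API.
          execute: async ({ city }) => ({ city, temperatureCelsius: 21 }),
        }),
        celsiusToFahrenheit: tool({
          description: 'Convert a temperature from Celsius to Fahrenheit.',
          parameters: z.object({ celsius: z.number() }),
          execute: async ({ celsius }) => ({ fahrenheit: celsius * 1.8 + 32 }),
        }),
      },
    });

    // Keep the assistant/tool messages so the next turn has full context.
    messages.push(...response.messages);
    console.log(`Assistant: ${text}`);
  }

  terminal.close();
}

main().catch(console.error);
```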

Log to Humanloop
The agent works and is capable of tool calling. However, so far we can only reason about its behavior from its inputs and outputs. Humanloop logging lets you observe each step the agent takes, as we demonstrate below.
We’ll use Vercel AI SDK’s built-in OpenTelemetry tracing to log to Humanloop.
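In practice this involves two pieces: registering an OpenTelemetry tracer with an OTLP exporter in your Node.js process, and enabling `experimental_telemetry` on each AI SDK call. Below is a minimal sketch assuming the `@opentelemetry/sdk-node` and `@opentelemetry/exporter-trace-otlp-http` packages; the file name is illustrative.

```ts
// instrumentation.ts -- import this before the agent code runs.
import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPTraceExporter } from '@opentelemetry/exporter-trace-otlp-http';

// The exporter picks up its endpoint and headers from the
// OTEL_EXPORTER_OTLP_* environment variables configured in the next step.
const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter(),
});

sdk.start();

// Call sdk.shutdown() when your program finishes so buffered spans are flushed.
```

Then pass `experimental_telemetry: { isEnabled: true }` (optionally with `metadata`) to each `generateText` call so the AI SDK emits spans for the prompts and tool calls.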
Configure the OpenTelemetry OTLP Exporter options
Add the following lines to your `.env` file to configure the OpenTelemetry OTLP Exporter.
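The standard OTLP exporter environment variables cover this; the Humanloop ingestion endpoint and header name shown here are assumptions, so confirm them against the Humanloop documentation.

```
# Standard OpenTelemetry OTLP exporter settings
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel
OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=<YOUR_HUMANLOOP_API_KEY>"
```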
Run the code
Have a conversation with the agent, and try asking about the weather in a city (in Celsius or Fahrenheit). When you’re done, type `exit` to close the program.
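Assuming your entry point is `index.ts` and it imports the instrumentation file first (both filenames illustrative), you can run it with `tsx`:

```bash
npx tsx index.ts
```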
Explore your logs on Humanloop
Now you can explore your logs on the Humanloop platform, and see the steps taken by the agent during your conversation.
You can see below the full trace of prompts and tool calls that were made.

Next steps
Logging is the first step to observing your AI product. Read these guides to learn more about evals on Humanloop:
- Add monitoring Evaluators to evaluate Logs as they’re made against a File.
- See evals in action in our tutorial on evaluating an agent.