Add Humanloop observability to your Vercel AI SDK project.

Add Humanloop observability to a tool-calling chat agent built with the Vercel AI SDK. This guide builds on the AI SDK’s Node.js example.

Looking for Next.js? See the guide here.

Prerequisites

Create a Humanloop Account

  1. Create an account or log in to Humanloop

  2. Get a Humanloop API key from Organization Settings.

Add an OpenAI API Key

If you’re the first person in your organization to use Humanloop, you’ll need to add an API key for a model provider.

  1. Go to OpenAI and create an API key.
  2. In Humanloop Organization Settings, set up OpenAI as a model provider.

Using the Prompt Editor will use your OpenAI credits in the same way that the OpenAI playground does. Keep your API keys for Humanloop and the model providers private.

To follow this guide, you’ll also need Node.js 18+ installed on your machine.
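You can check your installed version with:

$node --version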

Install humanloop, ai, and @ai-sdk/openai, the AI SDK’s OpenAI provider, along with other necessary dependencies.

$npm install humanloop ai @ai-sdk/openai zod dotenv
>npm install -D @types/node tsx typescript

Add a .env file to your project with your Humanloop and OpenAI API keys.

$touch .env
.env
HUMANLOOP_API_KEY=<YOUR_HUMANLOOP_KEY>
OPENAI_API_KEY=<YOUR_OPENAI_KEY>

Full code

If you’d like to immediately try out the full example, you can copy and paste the code below and run the file.

$npm install humanloop ai @ai-sdk/openai zod dotenv
>npm install -D @types/node tsx typescript
>npm install dotenv @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node
.env
HUMANLOOP_API_KEY=<YOUR_HUMANLOOP_KEY>
OPENAI_API_KEY=<YOUR_OPENAI_KEY>
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel
OTEL_EXPORTER_OTLP_PROTOCOL=http/json
OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=<YOUR_HUMANLOOP_KEY>" # Humanloop API key
agent.ts
import { openai } from "@ai-sdk/openai";
import { CoreMessage, streamText, tool } from "ai";
import { z } from "zod";
import * as readline from "node:readline/promises";

import { NodeSDK } from "@opentelemetry/sdk-node";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
import dotenv from "dotenv";

dotenv.config();

const sdk = new NodeSDK({
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

async function exit() {
  console.log("Assistant: Shutting down...");
  await sdk.shutdown();
  process.exit(0);
}

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const messages: CoreMessage[] = [
  {
    role: "system",
    content:
      "You are a helpful assistant. If the user asks you to exit, you should exit the program.",
  },
];

async function main() {
  while (true) {
    const userInput = await terminal.question("You: ");

    if (userInput === "exit") {
      break;
    }

    messages.push({ role: "user", content: userInput });

    const result = streamText({
      model: openai("gpt-4o"),
      messages,
      maxSteps: 5,
      experimental_telemetry: {
        isEnabled: true,
        metadata: {
          humanloopFlowPath: "Vercel AI/Weather Agent",
          humanloopPromptPath: "Vercel AI/Call Assistant",
        },
      },
      tools: {
        weather: tool({
          description: "Get the weather in a location (in Celsius)",
          parameters: z.object({
            location: z
              .string()
              .describe("The location to get the weather for"),
          }),
          execute: async ({ location }) => ({
            location,
            temperature: Math.round((Math.random() * 30 + 5) * 10) / 10, // Random temp between 5°C and 35°C
          }),
        }),
      },
    });

    let fullResponse = "";
    process.stdout.write("\nAssistant: ");
    for await (const delta of result.textStream) {
      fullResponse += delta;
      process.stdout.write(delta);
    }
    process.stdout.write("\n\n");

    messages.push({ role: "assistant", content: fullResponse });
  }

  await exit();
}

main().catch(console.error);
$npx tsx agent.ts

Create the agent

We start with a simple chat agent capable of function calling.

agent.ts
import { openai } from "@ai-sdk/openai";
import { CoreMessage, streamText, tool } from "ai";
import { z } from "zod";
import * as readline from "node:readline/promises";

async function exit() {
  console.log("Assistant: Shutting down...");
  process.exit(0);
}

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const messages: CoreMessage[] = [
  {
    role: "system",
    content:
      "You are a helpful assistant. If the user asks you to exit, you should exit the program.",
  },
];

async function main() {
  while (true) {
    const userInput = await terminal.question("You: ");

    if (userInput === "exit") {
      break;
    }

    messages.push({ role: "user", content: userInput });

    const result = streamText({
      model: openai("gpt-4o"),
      messages,
      maxSteps: 5,
      tools: {
        weather: tool({
          description: "Get the weather in a location (in Celsius)",
          parameters: z.object({
            location: z
              .string()
              .describe("The location to get the weather for"),
          }),
          execute: async ({ location }) => ({
            location,
            temperature: Math.round((Math.random() * 30 + 5) * 10) / 10, // Random temp between 5°C and 35°C
          }),
        }),
      },
    });

    let fullResponse = "";
    process.stdout.write("\nAssistant: ");
    for await (const delta of result.textStream) {
      fullResponse += delta;
      process.stdout.write(delta);
    }
    process.stdout.write("\n\n");

    messages.push({ role: "assistant", content: fullResponse });
  }

  await exit();
}

main().catch(console.error);

This agent can provide weather updates for a user-provided location.

$npx tsx agent.ts
You: What's the weather like in London?
Assistant: The current temperature in London is 20°C.
You: exit
Assistant: Shutting down...

Log to Humanloop

The agent works and is capable of function calling. However, we can only reason about its behavior from its inputs and outputs. Humanloop logging lets you observe each step the agent takes, as we demonstrate below.

We’ll use Vercel AI SDK’s built-in OpenTelemetry tracing to log to Humanloop.

Step 1: Set up OpenTelemetry

Install dependencies.

$npm install dotenv @opentelemetry/sdk-node @opentelemetry/auto-instrumentations-node

Configure the OpenTelemetry exporter to forward logs to Humanloop.

.env
HUMANLOOP_API_KEY=<YOUR_HUMANLOOP_KEY>
OPENAI_API_KEY=<YOUR_OPENAI_KEY>
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel
OTEL_EXPORTER_OTLP_PROTOCOL=http/json
OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=<YOUR_HUMANLOOP_KEY>" # Humanloop API key
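If you prefer to configure the exporter in code rather than through environment variables, you can pass one to the NodeSDK directly. This is a minimal sketch, assuming the @opentelemetry/exporter-trace-otlp-http package (which speaks OTLP over HTTP/JSON, matching the http/json protocol above, and which you would install separately); the trace URL below is an assumption based on the OTLP convention of appending /v1/traces to the base endpoint.

import { NodeSDK } from "@opentelemetry/sdk-node";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

// Equivalent to the OTEL_EXPORTER_OTLP_* variables in .env
const sdk = new NodeSDK({
  traceExporter: new OTLPTraceExporter({
    // Base endpoint plus the /v1/traces path used for trace payloads (assumed)
    url: "https://api.humanloop.com/v5/import/otel/v1/traces",
    headers: { "X-API-KEY": process.env.HUMANLOOP_API_KEY! },
  }),
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();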
Step 2: Trace AI SDK calls

The Vercel AI SDK will now forward OpenTelemetry logs to Humanloop.

The telemetry metadata associates Logs with your Files on Humanloop. The humanloopPromptPath provides the path for a Prompt on Humanloop, and the humanloopFlowPath provides the path for a Flow to group the related Prompt Logs together.
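Concretely, this is the experimental_telemetry option passed to streamText in the code below:

const result = streamText({
  model: openai("gpt-4o"),
  messages,
  maxSteps: 5,
  experimental_telemetry: {
    isEnabled: true,
    metadata: {
      // Flow that groups this conversation's Logs into a single trace
      humanloopFlowPath: "Vercel AI/Weather Agent",
      // Prompt that each individual model call is logged against
      humanloopPromptPath: "Vercel AI/Call Assistant",
    },
  },
  // ...tools unchanged from the previous step
});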

agent.ts
import { openai } from "@ai-sdk/openai";
import { CoreMessage, streamText, tool } from "ai";
import { z } from "zod";
import * as readline from "node:readline/promises";

import { NodeSDK } from "@opentelemetry/sdk-node";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";
import dotenv from "dotenv";

dotenv.config();

const sdk = new NodeSDK({
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

async function exit() {
  console.log("Assistant: Shutting down...");
  await sdk.shutdown();
  process.exit(0);
}

const terminal = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

const messages: CoreMessage[] = [
  {
    role: "system",
    content:
      "You are a helpful assistant. If the user asks you to exit, you should exit the program.",
  },
];

async function main() {
  while (true) {
    const userInput = await terminal.question("You: ");

    if (userInput === "exit") {
      break;
    }

    messages.push({ role: "user", content: userInput });

    const result = streamText({
      model: openai("gpt-4o"),
      messages,
      maxSteps: 5,
      experimental_telemetry: {
        isEnabled: true,
        metadata: {
          humanloopFlowPath: "Vercel AI/Weather Agent",
          humanloopPromptPath: "Vercel AI/Call Assistant",
        },
      },
      tools: {
        weather: tool({
          description: "Get the weather in a location (in Celsius)",
          parameters: z.object({
            location: z
              .string()
              .describe("The location to get the weather for"),
          }),
          execute: async ({ location }) => ({
            location,
            temperature: Math.round((Math.random() * 30 + 5) * 10) / 10, // Random temp between 5°C and 35°C
          }),
        }),
      },
    });

    let fullResponse = "";
    process.stdout.write("\nAssistant: ");
    for await (const delta of result.textStream) {
      fullResponse += delta;
      process.stdout.write(delta);
    }
    process.stdout.write("\n\n");

    messages.push({ role: "assistant", content: fullResponse });
  }

  await exit();
}

main().catch(console.error);
Step 3: Run the agent

$npx tsx agent.ts

Have a conversation with the agent, and try asking about the weather in a city (in Celsius or Fahrenheit). When you’re done, type exit to close the program.

Step 4: Explore logs on Humanloop

Now you can explore your logs on the Humanloop platform, and see the steps taken by the agent during your conversation.

You can see the full trace of prompts and tool calls that were made.

The trace captures the agent calling its tools to answer the user's question.

Debugging

If you run into any issues, enable OpenTelemetry debug logging (before calling sdk.start()) to verify that your exporter is working correctly.

$npm install @opentelemetry/api
agent.ts
import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";

// Log OpenTelemetry internals to the console; set this before sdk.start()
diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);

Next steps

Logging is the first step to observing your AI product. Read the evals guides to learn more about evaluating your agent on Humanloop.
