Vercel AI SDK

How to integrate Humanloop with the Vercel AI SDK

Observability integration

The Vercel AI SDK supports tracing via OpenTelemetry. You can export these traces to Humanloop by enabling telemetry and configuring the OpenTelemetry Exporter.

The Vercel AI SDK tracing feature is experimental and subject to change. You must enable it with the experimental_telemetry parameter on each AI SDK function call that you want to trace.

Learn how to add tracing to your AI SDK application below.

Metadata parameters

Humanloop’s AI SDK OpenTelemetry Receiver will automatically extract the following metadata parameters from the experimental_telemetry metadata object:

  • humanloop.directoryPath: [Required] The path to the directory on Humanloop. Generation spans will create Logs for this Directory on Humanloop.
  • humanloop.traceId: [Optional] The ID of a Flow Log on Humanloop. Set this to group multiple calls to the AI SDK into a single Flow Log on Humanloop.
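
For illustration, here is a minimal sketch of a telemetry settings object carrying both parameters; the directory path and Flow Log ID are placeholders you would supply:

// Sketch: telemetry settings carrying both Humanloop metadata parameters.
// 'path/to/directory' and '<FLOW_LOG_ID>' are placeholder values.
const telemetry = {
  isEnabled: true,
  metadata: {
    "humanloop.directoryPath": "path/to/directory", // required
    "humanloop.traceId": "<FLOW_LOG_ID>", // optional
  },
};

You would pass this object as the experimental_telemetry argument of an AI SDK call, as shown in the examples below.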

Prerequisites

The following steps assume you’re already using the AI SDK in your application. If not, follow Vercel’s quickstarts to get started.

Next.js versions below 15 must set the experimental.instrumentationHook flag in next.config.js. Learn more here.
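
For those older versions, a minimal next.config.js sketch looks like this:

next.config.js
// Enable the instrumentation hook (only required on Next.js < 15).
module.exports = {
  experimental: {
    instrumentationHook: true,
  },
};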

You can find an example Next.js application that uses the AI SDK to stream chat responses here.

1. Set up OpenTelemetry

Install dependencies.

npm install @vercel/otel @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation

Create a file called instrumentation.ts in your project root (or /src) directory and add the following:

instrumentation.ts
import { registerOTel } from '@vercel/otel';

export function register() {
  registerOTel({
    serviceName: 'humanloop-vercel-ai-sdk'
  });
}
2. Configure OpenTelemetry

Configure the OpenTelemetry exporter to forward traces to Humanloop.

.env.local
HUMANLOOP_API_KEY=<YOUR_HUMANLOOP_KEY>
# Configure the OpenTelemetry OTLP Exporter
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel
OTEL_EXPORTER_OTLP_PROTOCOL=http/json
OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=<YOUR_HUMANLOOP_KEY>" # Humanloop API key
3. Trace AI SDK calls

Now add the experimental_telemetry parameter to your AI SDK function calls to trace them.

With a simple one-step generation, each call to streamText or generateText will be traced as a Prompt Log on Humanloop.

app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages, id } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    experimental_telemetry: {
      isEnabled: true,
      metadata: {
        "humanloop.directoryPath": "path/to/directory",
      },
    },
  });

  // Respond with the stream
  return result.toDataStreamResponse();
}
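
The same telemetry block applies to non-streaming calls. Here is a minimal sketch using generateText; the prompt and directory path are placeholders:

import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

// Sketch: a non-streaming generation, traced as a Prompt Log in the same way.
const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'What will the weather be like today?',
  experimental_telemetry: {
    isEnabled: true,
    metadata: {
      "humanloop.directoryPath": "path/to/directory",
    },
  },
});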

You can also group each step of a multi-step generation, or several separate AI SDK calls, into a single Flow Log by setting the humanloop.traceId metadata value described above (a sketch of grouping separate calls follows the example below).

app/api/chat/route.ts
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { z } from 'zod';

// Allow streaming responses up to 30 seconds
export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages, id } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'),
    messages,
    maxSteps: 3,
    toolCallStreaming: true,
    system: "You are a helpful assistant that answers questions about the weather in a given city.",
    experimental_telemetry: {
      isEnabled: true,
      metadata: {
        "humanloop.directoryPath": "path/to/directory",
        // Optionally set "humanloop.traceId" here to group this call with
        // other AI SDK calls in the same Flow Log.
      },
    },
    tools: {
      getWeatherInformation: {
        description: 'show the weather in a given city to the user',
        parameters: z.object({ city: z.string() }),
        execute: async ({}: { city: string }) => {
          const weatherOptions = ['sunny', 'cloudy', 'rainy', 'snowy', 'windy'];
          return {
            weather:
              weatherOptions[Math.floor(Math.random() * weatherOptions.length)],
            temperature: Math.floor(Math.random() * 50 - 10),
          };
        },
      },
    },
  });

  // Respond with the stream
  return result.toDataStreamResponse();
}
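
As a sketch of that grouping, the snippet below reuses one humanloop.traceId across two generateText calls so that both Logs land in the same Flow Log. The flowLogId value is a placeholder for the ID of an existing Flow Log on Humanloop:

import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

// Placeholder: the ID of an existing Flow Log on Humanloop.
const flowLogId = '<FLOW_LOG_ID>';

// Shared telemetry settings: both calls write to the same Directory
// and carry the same trace ID, so they are grouped into one Flow Log.
const telemetry = {
  isEnabled: true,
  metadata: {
    "humanloop.directoryPath": "path/to/directory",
    "humanloop.traceId": flowLogId,
  },
};

// First step of the flow.
const draft = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Draft a one-line weather report for Paris.',
  experimental_telemetry: telemetry,
});

// Second step, grouped into the same Flow Log via the shared trace ID.
const review = await generateText({
  model: openai('gpt-4o'),
  prompt: `Improve this report: ${draft.text}`,
  experimental_telemetry: telemetry,
});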

Learn more

To see the integration in action, check out our Vercel AI SDK guides.