You can trace Vercel AI SDK calls in Weave using OpenTelemetry (OTel). The Vercel AI SDK is a TypeScript toolkit for building AI-powered applications with framework support for Next.js, Nuxt, SvelteKit, and other frameworks. It has built-in OpenTelemetry support through its experimental_telemetry option. This guide shows you how to configure OTel to send traces from the Vercel AI SDK to Weave. You can use the AI SDK with Next.js or as a standalone Node.js application. For more information on OTel tracing in Weave, see Send OTel traces to Weave.

Prerequisites

The Next.js and Node.js examples in this guide require the same dependencies. To get started:
  1. Install the following Vercel and OTel libraries:
    npm install ai @ai-sdk/openai @opentelemetry/api @opentelemetry/sdk-trace-node @opentelemetry/sdk-trace-base @opentelemetry/exporter-trace-otlp-proto @opentelemetry/resources zod
    
  2. Set the following environment variables:
    export WANDB_API_KEY="your-wandb-api-key"
    export OPENAI_API_KEY="your-openai-api-key"
    
    You can get your W&B API key from User Settings.
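If either variable is unset, the examples fail in ways that can be hard to diagnose (rejected traces, failed OpenAI calls). A small startup check can surface this early. This is a minimal sketch; the `missingEnvVars` helper is ours, not part of any SDK:

```typescript
// Hypothetical helper: returns the names of environment variables that are not set.
function missingEnvVars(names: string[]): string[] {
  return names.filter((name) => !process.env[name]);
}

const missing = missingEnvVars(["WANDB_API_KEY", "OPENAI_API_KEY"]);
if (missing.length > 0) {
  console.error(`Missing environment variables: ${missing.join(", ")}`);
}
```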

Configure OTel tracing for Next.js

This section demonstrates how to configure Weave in a Next.js app. The example is not a full app; it shows only the instrumentation configuration and how to enable telemetry on a Vercel AI SDK function, in this case a simple call to OpenAI.

Configure instrumentation

Next.js applications use an instrumentation.ts file to set up OTel. This file runs once when the server starts and configures the tracer provider that the AI SDK uses. To integrate Weave with the AI SDK's OTel support, create an instrumentation.ts file in your project root and add the following code, replacing the placeholders in the resourceFromAttributes() call with your team and project names:
instrumentation.ts
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { resourceFromAttributes } from "@opentelemetry/resources";

export function register() {
  const WANDB_API_KEY = process.env.WANDB_API_KEY!;

  // Configure the OTel exporter to use the W&B Weave endpoint.
  const exporter = new OTLPTraceExporter({
    url: "https://trace.wandb.ai/otel/v1/traces",
    headers: { "wandb-api-key": WANDB_API_KEY },
  });

  // Configure your W&B credentials.
  const provider = new NodeTracerProvider({
    resource: resourceFromAttributes({
      "wandb.entity": "<your-team-name>",
      "wandb.project": "<your-project-name>",
    }),
    spanProcessors: [new BatchSpanProcessor(exporter)],
  });

  provider.register();
}
This creates an OTLP exporter configured to send trace data to Weave's OTel endpoint, authenticating with your W&B API key.
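Note that if WANDB_API_KEY is unset, the exporter still starts, but Weave rejects the unauthenticated traces. A guard like the following surfaces the problem at startup instead; the `wandbHeaders` helper is a hypothetical addition, not part of the OTel SDK:

```typescript
// Hypothetical guard: fail fast when the W&B API key is missing,
// rather than exporting traces that Weave will reject.
function wandbHeaders(apiKey: string | undefined): Record<string, string> {
  if (!apiKey) {
    throw new Error("WANDB_API_KEY is not set; traces cannot be authenticated.");
  }
  return { "wandb-api-key": apiKey };
}

// Usage in the exporter configuration:
// const exporter = new OTLPTraceExporter({
//   url: "https://trace.wandb.ai/otel/v1/traces",
//   headers: wandbHeaders(process.env.WANDB_API_KEY),
// });
```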

Configure telemetry on a function

Once you've added the instrumentation, use Vercel's experimental_telemetry option on any AI SDK function call to emit OTel spans:
route.ts
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

export async function POST(req: Request) {
  const { prompt } = await req.json();

  // Enable telemetry with the experimental_telemetry option.
  const result = await generateText({
    model: openai("gpt-4o-mini"),
    prompt,
    experimental_telemetry: { isEnabled: true },
  });

  return Response.json({ text: result.text });
}
All generateText calls with telemetry enabled produce OTel spans that are exported to Weave.
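Beyond isEnabled, the AI SDK's experimental_telemetry setting also accepts fields such as functionId and metadata, which are recorded on the emitted spans so you can tell calls apart in Weave. A sketch with illustrative values:

```typescript
// Telemetry settings sketch. The functionId and metadata values below
// are illustrative, not required names.
const telemetry = {
  isEnabled: true,
  functionId: "summarize-article",        // identifies this call site on the span
  metadata: { route: "/api/summarize" },  // recorded as span attributes
};

// Pass it as: generateText({ model, prompt, experimental_telemetry: telemetry })
```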

Configure OTel tracing for Node.js

For standalone Node.js applications (without Next.js), configure the tracer provider at the top of your entry file before any AI SDK calls. After meeting the prerequisites, you can run this example and generate spans without any additional configuration.
test-app.ts
import { NodeTracerProvider } from "@opentelemetry/sdk-trace-node";
import { BatchSpanProcessor } from "@opentelemetry/sdk-trace-base";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-proto";
import { resourceFromAttributes } from "@opentelemetry/resources";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

const WANDB_API_KEY = process.env.WANDB_API_KEY!;

// Configure OTel exporter to use W&B Weave endpoint.
const exporter = new OTLPTraceExporter({
  url: "https://trace.wandb.ai/otel/v1/traces",
  headers: { "wandb-api-key": WANDB_API_KEY },
});

// Configure your W&B credentials.
const provider = new NodeTracerProvider({
  resource: resourceFromAttributes({
    "wandb.entity": "<your-team-name>",
    "wandb.project": "<your-project-name>",
  }),
  spanProcessors: [new BatchSpanProcessor(exporter)],
});

provider.register();

// Enable telemetry with the experimental_telemetry option.
async function main() {
  const result = await generateText({
    model: openai("gpt-4o-mini"),
    prompt: "Explain OpenTelemetry in one sentence.",
    experimental_telemetry: { isEnabled: true },
  });

  console.log(result.text);

  await provider.shutdown();
}

main();
BatchSpanProcessor flushes spans asynchronously. In short-lived processes like standalone scripts, serverless functions, or CLI tools, call provider.shutdown() before the process exits to ensure all spans are sent to Weave. For long-running servers (like a Next.js dev server started through instrumentation.ts), this is not necessary.
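The flush-before-exit pattern is worth making robust to errors: if the traced call throws, the process can exit before the batch processor drains. One way to sketch this, independent of OTel, is a wrapper that flushes in a `finally` block; `withFlush` is our own helper, and `flush` stands in for `provider.shutdown()` (or `provider.forceFlush()` when you want to keep the provider alive):

```typescript
// Sketch: guarantee a flush before exit, even when the traced work throws.
// `flush` stands in for provider.shutdown() / provider.forceFlush().
async function withFlush<T>(
  work: () => Promise<T>,
  flush: () => Promise<void>,
): Promise<T> {
  try {
    return await work();
  } finally {
    await flush(); // runs on success and on failure
  }
}
```

For example, `withFlush(main, () => provider.shutdown())` would replace the explicit shutdown call in the Node.js example above.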