AI SDK
Capture token usage, tool calls, model info, and streaming metrics from the Vercel AI SDK into wide events. Wrap your model and get full AI observability with one line.

evlog/ai gives you full AI observability by wrapping your model with middleware. Token usage, tool calls, streaming performance, cache hits, reasoning tokens, and cost estimation — all captured into the wide event automatically.

Install

Add the AI SDK as a dependency:

pnpm add ai
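
evlog itself is assumed to be installed already; if it isn't, add it the same way:

pnpm add evlog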

Quick Start

Here's the timeline of a typical streamText call with one tool call, as evlog/ai sees it:

  1. streamText() is called: the wrapped model sends the request
  2. first chunk arrives (msToFirstChunk: 234)
  3. streaming: output tokens flowing
  4. tool call: getWeather("paris")
  5. tool result: 22°C, sunny
  6. streaming: the final answer is generated
  7. stream finishes (finishReason: stop, msToFinish: 4500)


Two lines to add, one param to change:

export default defineEventHandler(async (event) => {
  const log = useLogger(event)
  const ai = createAILogger(log)

  const result = streamText({
    model: ai.wrap('anthropic/claude-sonnet-4.6'),
    messages,
  })
  return result.toTextStreamResponse()
})

Your wide event now includes:

Wide Event
{
  "method": "POST",
  "path": "/api/chat",
  "status": 200,
  "duration": "4.5s",
  "ai": {
    "calls": 1,
    "model": "claude-sonnet-4.6",
    "provider": "anthropic",
    "inputTokens": 3312,
    "outputTokens": 814,
    "totalTokens": 4126,
    "reasoningTokens": 225,
    "finishReason": "stop",
    "msToFirstChunk": 234,
    "msToFinish": 4500,
    "tokensPerSecond": 180
  }
}
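
(The derived fields line up: 3312 + 814 = 4126 total tokens, and 814 output tokens over the 4.5 s request is roughly 180 tokens per second.)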

How It Works

createAILogger(log, options?) returns an AILogger with the following methods:

wrap(model)
  Wraps a language model with middleware. Accepts a model string (e.g. 'anthropic/claude-sonnet-4.6') or a LanguageModelV3 object. Works with generateText, streamText, and ToolLoopAgent.

captureEmbed(result)
  Manually captures token usage, model info, and dimensions from embed() or embedMany() results.

getMetadata()
  Returns a snapshot of the current execution metadata. See Access Metadata.

getEstimatedCost()
  Returns the current estimated cost in dollars when a cost map is configured.

onUpdate(callback)
  Subscribes to metadata updates. Fires on every step, embed, error, and integration finish.
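
A sketch of how these methods fit together inside a request handler (the shape of the metadata object and the totalTokens field name are assumptions based on the wide event above, not a documented type):

import { generateText } from 'ai'
import { useLogger } from 'evlog'
import { createAILogger } from 'evlog/ai'

const log = useLogger(event)
const ai = createAILogger(log)

// Subscribe before the call; fires on every step, embed, error, and finish.
ai.onUpdate((meta) => {
  console.log('tokens so far:', meta.totalTokens)
})

const { text } = await generateText({
  model: ai.wrap('anthropic/claude-sonnet-4.6'),
  prompt: 'Summarize the release notes.',
})

// Snapshot of everything captured so far: tokens, timings, finish reason.
const meta = ai.getMetadata()

// Only meaningful when a cost map is configured via createAILogger options.
const cost = ai.getEstimatedCost()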

The middleware intercepts calls at the provider level. It does not touch your callbacks, prompts, or responses. Captured data flows through the normal evlog pipeline (sampling, enrichers, drains) and lands in Axiom, Better Stack, or wherever you drain to.
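
Embedding calls aren't wrapped; their usage is captured manually from the result. A minimal sketch, assuming embeddingModel is any AI SDK embedding model instance:

import { embedMany } from 'ai'

const result = await embedMany({
  model: embeddingModel,
  values: ['first chunk', 'second chunk'],
})

// Records token usage, model info, and dimensions into the wide event.
ai.captureEmbed(result)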

Where to next

Usage Patterns

streamText, generateText, multi-step agents, RAG, multiple models — every common pattern, ready to copy.

Options

Capture tool inputs (with redaction and truncation), enable cost estimation, and handle errors.

Access Metadata

Read the captured ai data inside your handler — persist it, bill against it, or stream it to the client.

Deeper Telemetry

Add tool execution timing and total wall time with createEvlogIntegration. Compose with other middlewares.

Works With All Frameworks

evlog/ai works with any framework that evlog supports:

import { useLogger } from 'evlog'
import { createAILogger } from 'evlog/ai'

// `event` is your framework's request context (e.g. an H3 event in Nitro)
const log = useLogger(event)
const ai = createAILogger(log)