Learn how to observe your deployed AI capabilities in production using Axiom’s AI SDK to capture telemetry.
The Axiom AI SDK (`@axiomhq/ai`) is focused on providing deep integration with TypeScript applications, particularly those using Vercel's AI SDK to interact with frontier models. The package provides helper functions for instrumenting calls made through such libraries.
The `wrapAISDKModel` function takes an existing AI model object and returns an instrumented version that will automatically generate trace data for every call.
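As a minimal sketch (the provider, model name, and prompt are illustrative; this assumes the Vercel AI SDK's `openai` provider and `generateText` helper):

```typescript
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { wrapAISDKModel } from '@axiomhq/ai';

// Wrap the model once; every call made through it is traced automatically.
const model = wrapAISDKModel(openai('gpt-4o-mini'));

const { text } = await generateText({
  model,
  prompt: 'Summarize this support ticket in one sentence.',
});
console.log(text);
```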
## `withSpan`

While `wrapAISDKModel` handles the automatic instrumentation, the `withSpan` function allows you to add crucial business context to your traces. It creates a parent span around your LLM call and attaches metadata about the `capability` and `step` being executed.
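A sketch of the pattern (the capability and step names are illustrative, and the exact `withSpan` signature may vary between SDK versions; here it is assumed to take a metadata object followed by an async callback):

```typescript
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { withSpan, wrapAISDKModel } from '@axiomhq/ai';

const model = wrapAISDKModel(openai('gpt-4o-mini'));

// Assumed shape: withSpan(metadata, callback). The metadata names the
// capability and step; the callback performs the actual LLM call.
const result = await withSpan(
  { capability: 'ticket-triage', step: 'summarize' },
  () =>
    generateText({
      model,
      prompt: 'Summarize this support ticket in one sentence.',
    }),
);
console.log(result.text);
```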
## `wrapTool`

Use the `wrapTool` and `wrapTools` functions to automatically instrument your Vercel AI SDK tool definitions. The `wrapTool` helper takes your tool's name and its definition and returns an instrumented version. This wrapper creates a dedicated child span for every tool execution, capturing its arguments, output, and any errors.
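For example (the tool name, schema, and `execute` body are illustrative; this assumes the AI SDK's `tool()` helper with a v4-style Zod `parameters` schema):

```typescript
import { tool } from 'ai';
import { z } from 'zod';
import { wrapTool } from '@axiomhq/ai';

// wrapTool(name, definition) returns an instrumented tool that records a
// child span (arguments, output, errors) for every execution.
const getOrderStatus = wrapTool(
  'getOrderStatus',
  tool({
    description: 'Look up the status of an order by its ID',
    parameters: z.object({ orderId: z.string() }),
    execute: async ({ orderId }) => ({ orderId, status: 'shipped' }),
  }),
);
```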
## Full end-to-end code example

The sketch after this list combines all three helpers:

- `wrapAISDKModel`: Automatically captures telemetry for the LLM provider call.
- `wrapTool`: Instruments the tool execution with detailed spans.
- `withSpan`: Creates a parent span that ties everything together under a business capability.

Set the `AXIOM_TOKEN` and `AXIOM_DATASET` environment variables so the SDK can send telemetry to your Axiom dataset.
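A condensed sketch, reusing the assumptions above (model, tool, capability, and step names are illustrative, and `withSpan` is assumed to take a metadata object plus callback):

```typescript
import { openai } from '@ai-sdk/openai';
import { generateText, tool } from 'ai';
import { z } from 'zod';
import { withSpan, wrapAISDKModel, wrapTool } from '@axiomhq/ai';

// 1. Instrument the model so every LLM provider call is traced.
const model = wrapAISDKModel(openai('gpt-4o-mini'));

// 2. Instrument the tool so each execution gets a child span.
const getOrderStatus = wrapTool(
  'getOrderStatus',
  tool({
    description: 'Look up the status of an order by its ID',
    parameters: z.object({ orderId: z.string() }),
    execute: async ({ orderId }) => ({ orderId, status: 'shipped' }),
  }),
);

// 3. Tie everything together under a business capability and step.
const result = await withSpan(
  { capability: 'order-support', step: 'answer-question' },
  () =>
    generateText({
      model,
      tools: { getOrderStatus },
      prompt: 'Where is order 123?',
    }),
);
console.log(result.text);
```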
## Telemetry attributes

The spans captured by the SDK carry `gen_ai.*` attributes that make your AI interactions easy to query and analyze. Key attributes include:
- `gen_ai.capability.name`: The high-level capability name you defined in `withSpan`.
- `gen_ai.step.name`: The specific step within the capability.
- `gen_ai.request.model`: The model requested for the completion.
- `gen_ai.response.model`: The model that actually fulfilled the request.
- `gen_ai.usage.input_tokens`: The number of tokens in the prompt.
- `gen_ai.usage.output_tokens`: The number of tokens in the generated response.
- `gen_ai.prompt`: The full, rendered prompt or message history sent to the model (as a JSON string).
- `gen_ai.completion`: The full response from the model, including tool calls (as a JSON string).
- `gen_ai.response.finish_reasons`: The reason the model stopped generating tokens (e.g., `stop`, `tool-calls`).
- `gen_ai.tool.name`: The name of the executed tool.
- `gen_ai.tool.arguments`: The arguments passed to the tool (as a JSON string).
- `gen_ai.tool.message`: The result returned by the tool (as a JSON string).
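For illustration, a span from the end-to-end sketch above might carry attributes along these lines (all values are invented for the example):

```typescript
// Hypothetical attribute payload for a single traced call (illustrative only).
const exampleSpanAttributes = {
  'gen_ai.capability.name': 'order-support',
  'gen_ai.step.name': 'answer-question',
  'gen_ai.request.model': 'gpt-4o-mini',
  'gen_ai.usage.input_tokens': 412,
  'gen_ai.usage.output_tokens': 57,
  'gen_ai.response.finish_reasons': ['tool-calls'],
  'gen_ai.tool.name': 'getOrderStatus',
  'gen_ai.tool.arguments': '{"orderId":"123"}',
  'gen_ai.tool.message': '{"orderId":"123","status":"shipped"}',
};
```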