---
title: "AI Agent Monitoring"
description: "Monitor AI agents with token usage, latency, tool execution, and error tracking."
url: https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring/
---

# Set Up AI Agent Monitoring | Sentry for Node.js

With [Sentry AI Agent Monitoring](https://docs.sentry.io/ai/monitoring/agents/dashboards.md), you can monitor and debug your AI systems with full-stack context. You'll be able to track key insights like token usage, latency, tool usage, and error rates. AI Agent Monitoring data will be fully connected to your other Sentry data like logs, errors, and traces.

## [Prerequisites](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#prerequisites)

Before setting up AI Agent Monitoring, ensure you have [tracing enabled](https://docs.sentry.io/platforms/javascript/guides/node/tracing.md) in your Sentry configuration.

## [Automatic Instrumentation](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#automatic-instrumentation)

The JavaScript SDK supports automatic instrumentation for AI libraries. Add the integration for your AI library to your Sentry configuration:

* [Vercel AI SDK](https://docs.sentry.io/platforms/javascript/guides/node/configuration/integrations/vercelai.md)
* [OpenAI](https://docs.sentry.io/platforms/javascript/guides/node/configuration/integrations/openai.md)
* [Anthropic](https://docs.sentry.io/platforms/javascript/guides/node/configuration/integrations/anthropic.md)
* [Google Gen AI SDK](https://docs.sentry.io/platforms/javascript/guides/node/configuration/integrations/google-genai.md)
* [LangChain](https://docs.sentry.io/platforms/javascript/guides/node/configuration/integrations/langchain.md)
* [LangGraph](https://docs.sentry.io/platforms/javascript/guides/node/configuration/integrations/langgraph.md)

```javascript
import * as Sentry from "___SDK_PACKAGE___";
import { openAIIntegration } from "___SDK_PACKAGE___";

Sentry.init({
  dsn: "___PUBLIC_DSN___",
  tracesSampleRate: 1.0,
  integrations: [openAIIntegration()],
});
```

## [Options](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#options)

### [Privacy Controls](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#privacy-controls)

All AI integrations support `recordInputs` and `recordOutputs` options to control whether prompts and responses are captured. Both default to `true`.

Set these to `false` if your prompts or responses contain sensitive data you don't want sent to Sentry.

```javascript
import * as Sentry from "___SDK_PACKAGE___";
import { openAIIntegration } from "___SDK_PACKAGE___";

Sentry.init({
  dsn: "___PUBLIC_DSN___",
  tracesSampleRate: 1.0,
  integrations: [
    openAIIntegration({
      recordInputs: false, // Don't capture prompts
      recordOutputs: false, // Don't capture responses
    }),
  ],
});
```

## [Tracking Conversations](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#tracking-conversations)

Tracking Conversations is an **alpha** feature. Configuration options and behavior may change.

When building AI applications with multi-turn conversations, you can use `setConversationId()` to link all AI spans from the same conversation together. This allows you to analyze entire conversation flows in Sentry.

The conversation ID is automatically applied as the `gen_ai.conversation.id` attribute to all AI-related spans within the current scope. To unset the conversation ID, pass `null`.

```javascript
import * as Sentry from "___SDK_PACKAGE___";

// Set conversation ID at the start of a conversation
Sentry.setConversationId("conv_abc123");

// All subsequent AI calls will be linked to this conversation
await openai.chat.completions.create({
  model: "gpt-4",
  messages: [{ role: "user", content: "Hello" }],
});

// Later in the conversation
await openai.chat.completions.create({
  model: "gpt-4",
  messages: [
    { role: "user", content: "Hello" },
    { role: "assistant", content: "Hi there!" },
    { role: "user", content: "What's the weather?" },
  ],
});

// Both calls will have gen_ai.conversation.id: "conv_abc123"

// To unset it
Sentry.setConversationId(null);
```

## [Manual Instrumentation](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#manual-instrumentation)

If you're using a library that Sentry does not automatically instrument, you can manually instrument your code to capture spans. For your AI agent data to show up in the [AI Agents Dashboards](https://sentry.io/orgredirect/organizations/:orgslug/dashboards/?filter=onlyPrebuilt&query=agents&sort=mostPopular), spans must have well-defined names and data attributes.

### [Span Hierarchy](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#span-hierarchy)

When instrumenting an agent loop, spans nest like this:

```text
── invoke_agent My Agent          (gen_ai.invoke_agent)
   ├── chat gpt-4o                (gen_ai.chat)          ← 1st LLM call
   ├── execute_tool get_weather   (gen_ai.execute_tool)  ← tool run
   ├── chat gpt-4o                (gen_ai.chat)          ← 2nd LLM call
   └── ...
```

`gen_ai.invoke_agent` is the container. `gen_ai.chat` and `gen_ai.execute_tool` spans are its children (siblings of each other). A `gen_ai.chat` span can also appear without an agent parent for standalone LLM calls.
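To make the nesting concrete, here is a minimal sketch of that loop. The `tracer` parameter stands in for the Sentry SDK (`Sentry.startSpan(options, callback)` has this shape, and child spans nest automatically because they start while the parent's callback is running); the agent, model, and tool names are illustrative, and `callLLM` and `runTool` are placeholders for your own functions:

```javascript
// Sketch of the agent loop above. Pass the Sentry SDK as `tracer` in real
// code; `callLLM` and `runTool` are hypothetical placeholders.
async function runAgentLoop(tracer, callLLM, runTool) {
  return tracer.startSpan(
    { op: "gen_ai.invoke_agent", name: "invoke_agent My Agent" },
    async () => {
      // 1st LLM call: a child span, because it starts while invoke_agent is active
      const toolRequest = await tracer.startSpan(
        { op: "gen_ai.chat", name: "chat gpt-4o" },
        () => callLLM(),
      );

      // Tool run: a sibling of the first chat span
      const toolResult = await tracer.startSpan(
        { op: "gen_ai.execute_tool", name: "execute_tool get_weather" },
        () => runTool(toolRequest),
      );

      // 2nd LLM call, fed the tool result
      return tracer.startSpan(
        { op: "gen_ai.chat", name: "chat gpt-4o" },
        () => callLLM(toolResult),
      );
    },
  );
}
```

In real code you would also set the `gen_ai.*` attributes shown in the sections below on each span.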

### [AI Request Span](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#ai-request-span)

This span represents a request to an LLM model or service that generates a response based on the input prompt.

**Key attributes:**

* `gen_ai.operation.name` — Required. Set to `"chat"` for chat completions
* `gen_ai.request.model` — Required. The requested model name
* `gen_ai.input.messages` — The prompts sent to the LLM
* `gen_ai.output.messages` — The model's response
* `gen_ai.usage.input_tokens` / `output_tokens` — Token counts

```javascript
const messages = [
  { role: "user", parts: [{ type: "text", content: "Tell me a joke" }] },
];

await Sentry.startSpan(
  {
    op: "gen_ai.chat",
    name: "chat o3-mini",
    attributes: {
      "gen_ai.operation.name": "chat",
      "gen_ai.request.model": "o3-mini",
      "gen_ai.provider.name": "openai",
      "gen_ai.input.messages": JSON.stringify(messages),
    },
  },
  async (span) => {
    const result = await client.chat.completions.create({
      model: "o3-mini",
      messages,
    });

    span.setAttribute("gen_ai.response.model", result.model);
    span.setAttribute(
      "gen_ai.output.messages",
      JSON.stringify([
        {
          role: "assistant",
          parts: [
            { type: "text", content: result.choices[0].message.content },
          ],
        },
      ]),
    );
    span.setAttribute(
      "gen_ai.response.finish_reasons",
      JSON.stringify([result.choices[0].finish_reason]),
    );
    span.setAttribute(
      "gen_ai.usage.input_tokens",
      result.usage.prompt_tokens,
    );
    span.setAttribute(
      "gen_ai.usage.output_tokens",
      result.usage.completion_tokens,
    );
  },
);
```

**AI Request span attributes:**

* The span `op` MUST be `"gen_ai.{gen_ai.operation.name}"`. (e.g. `"gen_ai.chat"`)
* The span `name` SHOULD be `"{gen_ai.operation.name} {gen_ai.request.model}"`. (e.g. `"chat o3-mini"`)
* The `gen_ai.request.model` attribute MUST be the requested model. (e.g. `"o3-mini"`)
* The `gen_ai.response.model` attribute MUST be the concrete model that responded. (e.g. `"gpt-4o-2024-08-06"`)
* If the request originates from an agent, `gen_ai.agent.name` SHOULD be set to the agent's name. (e.g. `"Weather Agent"`)
* If relevant, `gen_ai.pipeline.name` SHOULD be set to the name of the AI workflow or pipeline. (e.g. `"weather-pipeline"`)
* All [Common Span Attributes](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#common-span-attributes) SHOULD be set (all `required` common attributes MUST be set).
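Since the `op` and `name` rules above follow mechanically from the operation and model, they can be derived with a small helper (`genAiSpanDescriptor` is a hypothetical function for illustration, not part of the SDK):

```javascript
// Builds the conventional span `op` and `name` for an AI request span.
// Hypothetical helper: the SDK provides no such function, but the naming
// rules are fixed by the conventions above.
function genAiSpanDescriptor(operationName, model) {
  return {
    op: `gen_ai.${operationName}`,     // MUST be "gen_ai.{gen_ai.operation.name}"
    name: `${operationName} ${model}`, // SHOULD be "{operation.name} {request.model}"
  };
}

// genAiSpanDescriptor("chat", "o3-mini")
// returns { op: "gen_ai.chat", name: "chat o3-mini" }
```

You can spread the result into `Sentry.startSpan({ ...genAiSpanDescriptor("chat", "o3-mini"), attributes: { /* ... */ } }, ...)`.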

### [Request Attributes](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#request-attributes)

| Data Attribute                     | Type   | Requirement Level | Description                                                                                                     | Example                                                               |
| ---------------------------------- | ------ | ----------------- | --------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------- |
| `gen_ai.input.messages`            | string | optional          | List of message objects sent to the LLM. **\[0]**, **\[1]**                                                     | `'[{"role": "user", "parts": [{"type": "text", "content": "..."}]}]'` |
| `gen_ai.tool.definitions`          | string | optional          | List of objects describing the available tools. **\[0]**                                                        | `'[{"name": "random_number", "description": "..."}]'`                 |
| `gen_ai.system_instructions`       | string | optional          | The system instructions passed to the model.                                                                    | `"You are a helpful assistant."`                                      |
| `gen_ai.request.frequency_penalty` | float  | optional          | Model configuration parameter.                                                                                  | `0.5`                                                                 |
| `gen_ai.request.max_tokens`        | int    | optional          | Model configuration parameter.                                                                                  | `500`                                                                 |
| `gen_ai.request.seed`              | string | optional          | Seed for reproducible outputs.                                                                                  | `"12345"`                                                             |
| `gen_ai.request.temperature`       | float  | optional          | Model configuration parameter.                                                                                  | `0.1`                                                                 |
| `gen_ai.request.top_k`             | int    | optional          | Limits model to K most likely next tokens.                                                                      | `40`                                                                  |
| `gen_ai.request.top_p`             | float  | optional          | Model configuration parameter.                                                                                  | `0.7`                                                                 |
| `gen_ai.request.presence_penalty`  | float  | optional          | Model configuration parameter.                                                                                  | `0.5`                                                                 |
| `gen_ai.request.messages`          | string | optional          | **Deprecated.** Use `gen_ai.input.messages` instead. List of message objects sent to the LLM. **\[0]**          | `'[{"role": "system", "content": "..."}]'`                            |
| `gen_ai.request.available_tools`   | string | optional          | **Deprecated.** Use `gen_ai.tool.definitions` instead. List of objects describing the available tools. **\[0]** | `'[{"name": "random_number", "description": "..."}]'`                 |

### [Response Attributes](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#response-attributes)

| Data Attribute                        | Type    | Requirement Level | Description                                                                                             | Example                                                                      |
| ------------------------------------- | ------- | ----------------- | ------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------- |
| `gen_ai.response.model`               | string  | required          | The concrete model that responded (may differ from `gen_ai.request.model`).                             | `"gpt-4o-2024-08-06"`                                                        |
| `gen_ai.output.messages`              | string  | optional          | Stringified array of message objects representing the model's output. **\[0]**, **\[1]**                | `'[{"role": "assistant", "parts": [{"type": "text", "content": "..."}]}]'`   |
| `gen_ai.response.finish_reasons`      | string  | optional          | Stringified array of reasons the model stopped generating. **\[0]**                                     | `'["stop"]'`                                                                 |
| `gen_ai.response.id`                  | string  | optional          | Unique identifier for the completion.                                                                   | `"chatcmpl-abc123"`                                                          |
| `gen_ai.response.streaming`           | boolean | optional          | Whether the response was streamed.                                                                      | `true`                                                                       |
| `gen_ai.response.time_to_first_token` | double  | optional          | Seconds until first response chunk in streaming.                                                        | `0.5`                                                                        |
| `gen_ai.response.tokens_per_second`   | double  | optional          | Output tokens per second throughput.                                                                    | `50.0`                                                                       |
| `gen_ai.response.text`                | string  | optional          | **Deprecated.** Use `gen_ai.output.messages` instead. The text representation of the model's responses. | `"The weather in Paris is rainy"`                                            |
| `gen_ai.response.tool_calls`          | string  | optional          | **Deprecated.** Use `gen_ai.output.messages` instead. The tool calls in the model's response. **\[0]**  | `'[{"name": "random_number", "type": "function_call", "arguments": "..."}]'` |

### [Token Usage](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#token-usage)

| Data Attribute                          | Type | Requirement Level | Description                                                                           | Example |
| --------------------------------------- | ---- | ----------------- | ------------------------------------------------------------------------------------- | ------- |
| `gen_ai.usage.input_tokens`             | int  | optional          | The number of tokens used in the AI input (prompt), including cached tokens. **\[2]** | `60`    |
| `gen_ai.usage.input_tokens.cached`      | int  | optional          | The number of cached tokens used in the AI input (prompt).                            | `50`    |
| `gen_ai.usage.input_tokens.cache_write` | int  | optional          | Tokens written to cache when processing input.                                        | `20`    |
| `gen_ai.usage.output_tokens`            | int  | optional          | The number of tokens used in the AI output, including reasoning tokens. **\[3]**      | `130`   |
| `gen_ai.usage.output_tokens.reasoning`  | int  | optional          | The number of tokens used for reasoning.                                              | `30`    |
| `gen_ai.usage.total_tokens`             | int  | optional          | The sum of `gen_ai.usage.input_tokens` and `gen_ai.usage.output_tokens`.              | `190`   |

### [Cost](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#cost)

| Data Attribute              | Type   | Requirement Level | Description                                       | Example |
| --------------------------- | ------ | ----------------- | ------------------------------------------------- | ------- |
| `gen_ai.cost.input_tokens`  | double | optional          | Cost of input tokens in USD (without cached).     | `0.005` |
| `gen_ai.cost.output_tokens` | double | optional          | Cost of output tokens in USD (without reasoning). | `0.015` |
| `gen_ai.cost.total_tokens`  | double | optional          | Total cost for tokens used.                       | `0.020` |

* **\[0]:** Span attributes only allow primitive data types. This means you need to use a stringified version of a list of dictionaries. Do NOT set `[{"foo": "bar"}]` but rather the string `'[{"foo": "bar"}]'` (must be parsable JSON).
* **\[1]:** Messages use the format `{role, parts}` where `parts` is an array of typed objects: `[{"role": "user", "parts": [{"type": "text", "content": "..."}]}]`. The `role` must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For backwards compatibility, the legacy format `{role, content}` is also accepted.
* **\[2]:** Cached tokens are a subset of input tokens; `gen_ai.usage.input_tokens` includes `gen_ai.usage.input_tokens.cached`.
* **\[3]:** Reasoning tokens are a subset of output tokens; `gen_ai.usage.output_tokens` includes `gen_ai.usage.output_tokens.reasoning`.
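The subset relationships in **\[2]** and **\[3]** are easy to get wrong when mapping a provider response by hand: cached and reasoning counts must not be added on top of the totals. Here is a sketch assuming an OpenAI-style `usage` object (the `prompt_tokens_details` and `completion_tokens_details` field names follow OpenAI's chat completions response; other providers shape their usage data differently):

```javascript
// Maps an OpenAI-style usage object onto gen_ai.usage.* attributes.
// Cached and reasoning counts are subsets of the input/output totals,
// so they are reported alongside the totals, never added to them.
function tokenUsageAttributes(usage) {
  return {
    "gen_ai.usage.input_tokens": usage.prompt_tokens,
    "gen_ai.usage.input_tokens.cached":
      usage.prompt_tokens_details?.cached_tokens ?? 0,
    "gen_ai.usage.output_tokens": usage.completion_tokens,
    "gen_ai.usage.output_tokens.reasoning":
      usage.completion_tokens_details?.reasoning_tokens ?? 0,
    "gen_ai.usage.total_tokens":
      usage.prompt_tokens + usage.completion_tokens,
  };
}
```

You can then apply the whole object at once with `span.setAttributes(tokenUsageAttributes(result.usage))`.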

### [Invoke Agent Span](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#invoke-agent-span)

For a complete guide on naming agents across all supported frameworks, see [Naming Your Agents](https://docs.sentry.io/ai/monitoring/agents/naming.md).

This span represents the execution of an AI agent, capturing the full lifecycle from receiving a task to producing a final response.

**Key attributes:**

* `gen_ai.operation.name` — Required. Set to `"invoke_agent"`
* `gen_ai.agent.name` — The agent's name (e.g., "Weather Agent")
* `gen_ai.request.model` — The underlying model used
* `gen_ai.output.messages` — The agent's final output
* `gen_ai.usage.input_tokens` / `output_tokens` — Total token counts

```javascript
await Sentry.startSpan(
  {
    op: "gen_ai.invoke_agent",
    name: "invoke_agent Weather Agent",
    attributes: {
      "gen_ai.operation.name": "invoke_agent",
      "gen_ai.request.model": "o3-mini",
      "gen_ai.agent.name": "Weather Agent",
    },
  },
  async (span) => {
    const result = await myAgent.run();

    span.setAttribute(
      "gen_ai.output.messages",
      JSON.stringify([
        {
          role: "assistant",
          parts: [{ type: "text", content: result.output }],
        },
      ]),
    );
    span.setAttribute(
      "gen_ai.usage.input_tokens",
      result.usage.inputTokens,
    );
    span.setAttribute(
      "gen_ai.usage.output_tokens",
      result.usage.outputTokens,
    );
  },
);
```

**Invoke Agent span attributes**

Describes an AI agent invocation:

* The span `op` MUST be `"gen_ai.invoke_agent"`.
* The span `name` SHOULD be `"invoke_agent {gen_ai.agent.name}"`.
* The `gen_ai.operation.name` attribute MUST be `"invoke_agent"`.
* The `gen_ai.agent.name` attribute SHOULD be set to the agent's name. (e.g. `"Weather Agent"`)
* If relevant, `gen_ai.pipeline.name` SHOULD be set to the name of the AI workflow or pipeline the agent belongs to.
* All [Common Span Attributes](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#common-span-attributes) SHOULD be set (all `required` common attributes MUST be set).

Additional attributes on the span:

### [Request Attributes](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#request-attributes)

| Data Attribute                   | Type   | Requirement Level | Description                                                                                                     | Example                                                               |
| -------------------------------- | ------ | ----------------- | --------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------- |
| `gen_ai.input.messages`          | string | optional          | List of message objects given to the agent. **\[0]**, **\[1]**                                                  | `'[{"role": "user", "parts": [{"type": "text", "content": "..."}]}]'` |
| `gen_ai.tool.definitions`        | string | optional          | List of objects describing the available tools. **\[0]**                                                        | `'[{"name": "random_number", "description": "..."}]'`                 |
| `gen_ai.system_instructions`     | string | optional          | The system instructions passed to the model.                                                                    | `"You are a helpful assistant."`                                      |
| `gen_ai.pipeline.name`           | string | optional          | The name of the AI workflow or pipeline the agent belongs to.                                                   | `"weather-pipeline"`                                                  |
| `gen_ai.request.messages`        | string | optional          | **Deprecated.** Use `gen_ai.input.messages` instead. List of message objects given to the agent. **\[0]**       | `'[{"role": "system", "content": "..."}]'`                            |
| `gen_ai.request.available_tools` | string | optional          | **Deprecated.** Use `gen_ai.tool.definitions` instead. List of objects describing the available tools. **\[0]** | `'[{"name": "random_number", "description": "..."}]'`                 |

### [Response Attributes](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#response-attributes)

| Data Attribute               | Type   | Requirement Level | Description                                                                                            | Example                                                                      |
| ---------------------------- | ------ | ----------------- | ------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------- |
| `gen_ai.output.messages`     | string | optional          | Stringified array of message objects representing the agent's output. **\[0]**, **\[1]**               | `'[{"role": "assistant", "parts": [{"type": "text", "content": "..."}]}]'`   |
| `gen_ai.response.text`       | string | optional          | **Deprecated.** Use `gen_ai.output.messages` instead. The text representation of the agent's response. | `"The weather in Paris is rainy"`                                            |
| `gen_ai.response.tool_calls` | string | optional          | **Deprecated.** Use `gen_ai.output.messages` instead. The tool calls in the model's response. **\[0]** | `'[{"name": "random_number", "type": "function_call", "arguments": "..."}]'` |

### [Token Usage](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#token-usage)

| Data Attribute                          | Type | Requirement Level | Description                                                                           | Example |
| --------------------------------------- | ---- | ----------------- | ------------------------------------------------------------------------------------- | ------- |
| `gen_ai.usage.input_tokens`             | int  | optional          | The number of tokens used in the AI input (prompt), including cached tokens. **\[2]** | `60`    |
| `gen_ai.usage.input_tokens.cached`      | int  | optional          | The number of cached tokens used in the AI input (prompt).                            | `50`    |
| `gen_ai.usage.input_tokens.cache_write` | int  | optional          | Tokens written to cache when processing input.                                        | `20`    |
| `gen_ai.usage.output_tokens`            | int  | optional          | The number of tokens used in the AI output, including reasoning tokens. **\[3]**      | `130`   |
| `gen_ai.usage.output_tokens.reasoning`  | int  | optional          | The number of tokens used for reasoning.                                              | `30`    |
| `gen_ai.usage.total_tokens`             | int  | optional          | The sum of `gen_ai.usage.input_tokens` and `gen_ai.usage.output_tokens`.              | `190`   |

### [Cost](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#cost)

| Data Attribute              | Type   | Requirement Level | Description                                       | Example |
| --------------------------- | ------ | ----------------- | ------------------------------------------------- | ------- |
| `gen_ai.cost.input_tokens`  | double | optional          | Cost of input tokens in USD (without cached).     | `0.005` |
| `gen_ai.cost.output_tokens` | double | optional          | Cost of output tokens in USD (without reasoning). | `0.015` |
| `gen_ai.cost.total_tokens`  | double | optional          | Total cost for tokens used.                       | `0.020` |

* **\[0]:** Span attributes only allow primitive data types. This means you need to use a stringified version of a list of dictionaries. Do NOT set `[{"foo": "bar"}]` but rather the string `'[{"foo": "bar"}]'` (must be parsable JSON).
* **\[1]:** Messages use the format `{role, parts}` where `parts` is an array of typed objects: `[{"role": "user", "parts": [{"type": "text", "content": "..."}]}]`. The `role` must be `"user"`, `"assistant"`, `"tool"`, or `"system"`. For backwards compatibility, the legacy format `{role, content}` is also accepted.
* **\[2]:** Cached tokens are a subset of input tokens; `gen_ai.usage.input_tokens` includes `gen_ai.usage.input_tokens.cached`.
* **\[3]:** Reasoning tokens are a subset of output tokens; `gen_ai.usage.output_tokens` includes `gen_ai.usage.output_tokens.reasoning`.

### [Execute Tool Span](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#execute-tool-span)

This span represents the execution of a tool or function that was requested by an AI model, including the input arguments and resulting output.

**Key attributes:**

* `gen_ai.operation.name` — Required. Set to `"execute_tool"`
* `gen_ai.tool.name` — The tool's name (e.g., "get_weather")
* `gen_ai.tool.call.arguments` — The arguments passed to the tool
* `gen_ai.tool.call.result` — The tool's return value

```javascript
await Sentry.startSpan(
  {
    op: "gen_ai.execute_tool",
    name: "execute_tool get_weather",
    attributes: {
      "gen_ai.operation.name": "execute_tool",
      "gen_ai.tool.name": "get_weather",
      "gen_ai.tool.call.arguments": JSON.stringify({ location: "Paris" }),
    },
  },
  async (span) => {
    const result = await getWeather({ location: "Paris" });

    span.setAttribute("gen_ai.tool.call.result", JSON.stringify(result));
  },
);
```

**Execute Tool span attributes**

Describes a tool execution:

* The span `op` MUST be `"gen_ai.execute_tool"`.
* The span `name` SHOULD be `"execute_tool {gen_ai.tool.name}"`. (e.g. `"execute_tool query_database"`)
* The `gen_ai.tool.name` attribute SHOULD be set to the name of the tool. (e.g. `"query_database"`)
* All [Common Span Attributes](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#common-span-attributes) SHOULD be set (all `required` common attributes MUST be set).

Additional attributes on the span:

| Data Attribute               | Type   | Requirement Level | Description                                                                                           | Example                                    |
| ---------------------------- | ------ | ----------------- | ----------------------------------------------------------------------------------------------------- | ------------------------------------------ |
| `gen_ai.tool.name`           | string | optional          | Name of the tool executed.                                                                            | `"random_number"`                          |
| `gen_ai.tool.call.arguments` | string | optional          | Arguments of the tool call (stringified JSON).                                                        | `"{\"max\":10}"`                           |
| `gen_ai.tool.call.result`    | string | optional          | Result of the tool call (stringified).                                                                | `"7"`                                      |
| `gen_ai.tool.description`    | string | optional          | Description of the tool executed.                                                                     | `"Tool returning a random number"`         |
| `gen_ai.tool.type`           | string | optional          | The type of the tool.                                                                                 | `"function"`; `"extension"`; `"datastore"` |
| `gen_ai.tool.input`          | string | optional          | **Deprecated.** Use `gen_ai.tool.call.arguments` instead. Input given to the executed tool as string. | `"{\"max\":10}"`                           |
| `gen_ai.tool.output`         | string | optional          | **Deprecated.** Use `gen_ai.tool.call.result` instead. The output from the tool.                      | `"7"`                                      |

### [Handoff Span](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#handoff-span)

This span marks the transition of control from one agent to another, typically when the current agent determines another agent is better suited to handle the task.

**Requirements:**

* `op` must be `"gen_ai.handoff"`
* `name` should follow the pattern `"handoff from {source} to {target}"`
* All [Common Span Attributes](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#common-span-attributes) should be set

The handoff span itself has no body — it just marks the transition point before the target agent starts.

```javascript
await Sentry.startSpan(
  {
    op: "gen_ai.handoff",
    name: "handoff from Weather Agent to Travel Agent",
  },
  () => {}, // Handoff span just marks the transition
);

await Sentry.startSpan(
  { op: "gen_ai.invoke_agent", name: "invoke_agent Travel Agent" },
  async () => {
    // Run the target agent here
  },
);
```

### [Streaming Responses](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#streaming-responses)

When the LLM returns a stream, the span must outlive the initial callback. Use `Sentry.startInactiveSpan` to create the span, then end it when the stream finishes:

```javascript
async function callLLMStreaming(model, messages) {
  const span = Sentry.startInactiveSpan({
    name: `chat ${model}`,
    op: "gen_ai.chat",
    attributes: {
      "gen_ai.operation.name": "chat",
      "gen_ai.request.model": model,
      "gen_ai.input.messages": JSON.stringify(messages),
    },
  });

  try {
    const stream = await Sentry.withActiveSpan(span, () =>
      yourLLMClient.stream({ model, messages }),
    );

    stream.on("end", (finalMessage) => {
      span.setAttribute(
        "gen_ai.output.messages",
        JSON.stringify([
          {
            role: "assistant",
            parts: [{ type: "text", content: finalMessage.text }],
          },
        ]),
      );
      span.setAttribute(
        "gen_ai.usage.input_tokens",
        finalMessage.usage.input,
      );
      span.setAttribute(
        "gen_ai.usage.output_tokens",
        finalMessage.usage.output,
      );
      span.setAttribute("gen_ai.response.model", finalMessage.model);
      span.setAttribute("gen_ai.response.streaming", true);
      span.end();
    });

    stream.on("error", (error) => {
      Sentry.captureException(error);
      span.end();
    });
    return stream;
  } catch (error) {
    span.end();
    throw error;
  }
}
```

`startInactiveSpan` creates a span without automatically ending it. `Sentry.withActiveSpan` propagates context so any child spans nest correctly. Call `span.end()` when the stream completes or errors.
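If you instrument several streaming calls, the `gen_ai.output.messages` payload built in the `end` handler can be factored into a helper (the function is illustrative, not an SDK API):

```javascript
// Serialize an assistant reply into the JSON shape expected by the
// gen_ai.output.messages attribute (illustrative helper, not an SDK API).
function toOutputMessages(text) {
  return JSON.stringify([
    { role: "assistant", parts: [{ type: "text", content: text }] },
  ]);
}
```

The stream handler could then call `span.setAttribute("gen_ai.output.messages", toOutputMessages(finalMessage.text))`.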

## [Common Span Attributes](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#common-span-attributes)

Some attributes are common to all AI agent spans:

| Data Attribute          | Type   | Requirement Level | Description                                                                      | Example    |
| ----------------------- | ------ | ----------------- | -------------------------------------------------------------------------------- | ---------- |
| `gen_ai.operation.name` | string | required          | The name of the operation being performed. **\[4]**                              | `"chat"`   |
| `gen_ai.provider.name`  | string | optional          | The Generative AI product as identified by the client or server instrumentation. | `"openai"` |

* **\[4]:** `gen_ai.operation.name` is what Sentry uses to classify spans in AI dashboards. Well-defined values include: `"chat"`, `"invoke_agent"`, `"execute_tool"`, `"embeddings"`, `"generate_content"`, `"text_completion"`, `"create_agent"`, `"handoff"`.

Well-defined values for `gen_ai.provider.name`: `"anthropic"`, `"aws.bedrock"`, `"azure.ai.inference"`, `"azure.ai.openai"`, `"cohere"`, `"deepseek"`, `"gcp.gemini"`, `"gcp.gen_ai"`, `"gcp.vertex_ai"`, `"groq"`, `"ibm.watsonx.ai"`, `"mistral_ai"`, `"openai"`, `"perplexity"`, `"x_ai"`.
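On a manual span, these common attributes go into the `attributes` option; a minimal sketch (the values are illustrative):

```javascript
// Common attributes for a manual gen_ai span (values are illustrative).
// Pass this object as the `attributes` option of Sentry.startSpan.
const commonAttributes = {
  "gen_ai.operation.name": "chat", // required; one of the well-defined operations
  "gen_ai.provider.name": "openai", // optional; one of the well-defined providers
};
```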

## [Token Usage and Cost Gotchas](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#token-usage-and-cost-gotchas)

When manually setting token attributes, be aware of how Sentry uses them to [calculate model costs](https://docs.sentry.io/ai/monitoring/agents/costs.md).

**Cached and reasoning tokens are subsets, not separate counts.** `gen_ai.usage.input_tokens` is the **total** input token count that already includes any cached tokens. Similarly, `gen_ai.usage.output_tokens` already includes reasoning tokens. Sentry subtracts the cached/reasoning counts from the totals to compute the "raw" portion, so reporting them incorrectly can produce wrong or negative costs.

For example, say your LLM call uses 100 input tokens total, 90 of which were served from cache. Using a standard rate of $0.01 per token and a cached rate of $0.001 per token:

**Correct** — `input_tokens` is the total (includes cached):

* `gen_ai.usage.input_tokens = 100`
* `gen_ai.usage.input_tokens.cached = 90`
* Sentry calculates: `(100 - 90) × $0.01 + 90 × $0.001` = `$0.10 + $0.09` = **$0.19** ✓

**Wrong** — `input_tokens` set to only the non-cached tokens, making cached larger than total:

* `gen_ai.usage.input_tokens = 10`
* `gen_ai.usage.input_tokens.cached = 90`
* Sentry calculates: `(10 - 90) × $0.01 + 90 × $0.001` = `−$0.80 + $0.09` = **−$0.71** ✗

Because `input_tokens.cached` (90) is larger than `input_tokens` (10), the subtraction goes negative, resulting in a negative total cost.

The same applies to `gen_ai.usage.output_tokens` and `gen_ai.usage.output_tokens.reasoning`.
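The arithmetic above can be sketched as a plain function (the rates are the illustrative values from the example, not real model pricing):

```javascript
// Compute input cost the way Sentry does: cached tokens are a subset of the
// total, so the non-cached portion is total minus cached. Rates are the
// illustrative values from the example above, not real pricing.
function inputCost(inputTokens, cachedTokens, rate = 0.01, cachedRate = 0.001) {
  const rawTokens = inputTokens - cachedTokens; // non-cached portion
  return rawTokens * rate + cachedTokens * cachedRate;
}

// Correct: input_tokens is the total, including the 90 cached tokens.
inputCost(100, 90); // ≈ $0.19

// Wrong: input_tokens excludes cached tokens, so the raw portion goes negative.
inputCost(10, 90); // ≈ −$0.71
```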

## [Framework Exporters](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#framework-exporters)

Some AI frameworks provide a dedicated Sentry exporter that sends their traces to Sentry:

* [Mastra Sentry Exporter](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring/mastra.md)

## [MCP Server Monitoring](https://docs.sentry.io/platforms/javascript/guides/node/ai-agent-monitoring.md#mcp-server-monitoring)

If you're building MCP (Model Context Protocol) servers, Sentry can also track tool executions, prompt retrievals, and resource access. See [Instrument MCP Servers](https://docs.sentry.io/platforms/javascript/guides/node/tracing/instrumentation/mcp-module.md) for setup instructions.

