Latest CI Pipeline Executions
3681c9e8 Create migration guide from AI SDK to TanStack (#179)
* docs: add migration guide from Vercel AI SDK to TanStack AI
Add comprehensive migration guide covering:
- Package installation differences
- Server-side API migration (streamText -> chat)
- Client-side useChat hook differences
- Isomorphic tool system migration
- Provider adapter changes (OpenAI, Anthropic, Gemini)
- Streaming response formats
- Multimodal content handling
- Type safety enhancements
- Complete before/after code examples
* docs: address review feedback on Vercel AI migration guide
- Installation: add @ai-sdk/react to v5+ deps and update quick-reference
table to show the v5 framework package names
- System prompts: use root-level systemPrompts: [...] instead of
prepending a system message to the messages array (verified against
packages/typescript/ai/src/types.ts)
- useChat API table: rewrite against current Vercel AI SDK v5+ API
(sendMessage, status, regenerate, DefaultChatTransport) so the
comparison is accurate rather than mixing v4/v5
- MessagePart: expand to full discriminated union with real field names
(arguments/input/approval on tool-call, content on tool-result) and
real ToolCallState values
- Fix nonexistent toStreamResponse references -> toServerSentEventsResponse
(and add toHttpResponse where appropriate)
- Fix AbortController section heading (h4 -> h3, resolves MD001)
- Update tool schema section to note parameters -> inputSchema rename
in AI SDK v5
- Tighten tool approval example with optional chaining and a note on
arguments vs parsed input
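The MessagePart expansion described above can be sketched as a discriminated union. This is an illustrative stand-in, not the library's actual declaration: the field names (arguments/input/approval on tool-call, content on tool-result) follow the commit text, but the ToolCallState values shown here are hypothetical.

```typescript
// Illustrative MessagePart-style discriminated union. Field names follow
// the commit text; the ToolCallState values are hypothetical placeholders.
type ToolCallState = 'awaiting-approval' | 'executing' | 'complete';

type MessagePart =
  | { type: 'text'; content: string }
  | {
      type: 'tool-call';
      toolCallId: string;
      toolName: string;
      arguments: string;             // raw JSON string from the model
      input?: unknown;               // parsed projection (client side)
      approval?: { approved: boolean };
      state: ToolCallState;
    }
  | { type: 'tool-result'; toolCallId: string; content: unknown };

// Narrowing works off the `type` discriminant alone.
function describePart(part: MessagePart): string {
  switch (part.type) {
    case 'text':
      return `text(${part.content.length} chars)`;
    case 'tool-call':
      return `tool-call(${part.toolName}, ${part.state})`;
    case 'tool-result':
      return `tool-result(${part.toolCallId})`;
  }
}

const parts: MessagePart[] = [
  { type: 'text', content: 'hello' },
  {
    type: 'tool-call',
    toolCallId: 'tc1',
    toolName: 'search',
    arguments: '{"q":"x"}',
    state: 'complete',
  },
  { type: 'tool-result', toolCallId: 'tc1', content: { hits: 3 } },
];
const summary = parts.map(describePart).join(', ');
```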
* docs: expand Vercel AI SDK migration guide with full option + v6 coverage
- Add exhaustive streamText -> chat() option mapping table covering
every AI SDK v6 parameter (tools, toolChoice, activeTools, stopWhen,
prepareStep, experimental_transform/context/telemetry/repairToolCall,
all sampling controls, abort, headers, providerOptions -> modelOptions)
- Add streamText result -> TanStack equivalent table (textStream,
fullStream, text, usage, finishReason, steps, toUIMessageStreamResponse,
pipeTextStreamToResponse, consumeStream, etc.)
- Expand Generation Options with topK/presence/frequency/seed/stop under
modelOptions, clarify flat typed modelOptions vs provider-keyed
providerOptions
- New section: Structured Output (generateObject / streamObject / v6
Output.object) -> outputSchema on chat(); notes on Standard Schema
libraries, provider strategies, and the current gap for partial
object streaming
- New section: Agent Loop Control — stopWhen / hasToolCall / stepCountIs
mapped to maxIterations / untilFinishReason / combineStrategies, and
prepareStep mapped to middleware onConfig/onIteration
- New section: Middleware — wrapLanguageModel + experimental_transform
mapped to a single ChatMiddleware array; full hook inventory;
toolCacheMiddleware usage; common-pattern mapping table
- New section: Observability — where to plug logging/metrics/tracing
- Update generateText coverage to chat({ stream: false }) returning a
real Promise<string> (not just streamToText)
- Update Tool Approval "Before" to show AI SDK v6's native needsApproval
+ sendAutomaticallyWhen flow; the two APIs are now symmetric
- Reframe "Removed Features" -> "Features Not Yet Covered" and scope
it to embeddings, partial-object streaming, built-in retries/timeouts
- Update frontmatter for the docs/migration/ location (order, description,
keywords); fix cross-links to the new directory layout
(../advanced/middleware, ../chat/structured-outputs, etc.)
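The agent-loop mapping above (maxIterations / untilFinishReason / combineStrategies) can be sketched as composable stop predicates. A minimal self-contained sketch; the names mirror the commit text, but the actual @tanstack/ai signatures may differ:

```typescript
// Hypothetical sketch of composable agent-loop stop strategies.
interface LoopState {
  iteration: number;
  finishReason?: string;
}
type AgentLoopStrategy = (state: LoopState) => boolean; // true = stop

const maxIterations =
  (n: number): AgentLoopStrategy =>
  (s) =>
    s.iteration >= n;

const untilFinishReason =
  (reason: string): AgentLoopStrategy =>
  (s) =>
    s.finishReason === reason;

// Stop as soon as ANY strategy says stop.
const combineStrategies =
  (...strategies: AgentLoopStrategy[]): AgentLoopStrategy =>
  (s) =>
    strategies.some((strategy) => strategy(s));

const stop = combineStrategies(maxIterations(5), untilFinishReason('stop'));

// Simulated loop: the model keeps requesting tools, then finishes with
// 'stop' on its third iteration.
let state: LoopState = { iteration: 0 };
while (!stop(state)) {
  state = {
    iteration: state.iteration + 1,
    finishReason: state.iteration + 1 >= 3 ? 'stop' : 'tool_calls',
  };
}
```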
* docs: round-1 CR fixes on Vercel AI migration guide
Factual corrections verified against source:
- Multimodal image source shape uses { type: 'url'|'data', value, mimeType }
not { url, base64, mediaType } (types.ts:142-183)
- toolCacheMiddleware is exported from @tanstack/ai/middlewares, not the
root (packages/typescript/ai/src/middlewares/index.ts)
- toolDefinition({ description }) is required; add it to the two doc
examples that were missing it (tool-definition.ts:31)
- stream() connection adapter factory is (messages, data?) with no
signal arg; rewrite custom-adapter example (connection-adapters.ts:441)
AI SDK v6 accuracy:
- addToolResult -> addToolOutput (v6 rename)
- experimental_output -> output (de-experimentalized)
- Soften "replaced" claim about generateObject/streamObject — they are
deprecated, not removed
- Vercel addToolApprovalResponse row: v6 has this; replace "N/A"
- First Basic Text Generation Before example now uses v5+ API
(convertToModelMessages + toUIMessageStreamResponse) with a v4
toDataStreamResponse callout
Consistency:
- Agent-loop tables reconciled: only one truly built-in strategy
(maxIterations / untilFinishReason / combineStrategies); hasToolCall
requires a custom AgentLoopStrategy. Both tables now agree.
- prepareStep Before/After actually demonstrates equivalent behavior:
Before shows step-level config tweak, After uses onConfig;
mid-loop model switching split into its own subsection with the
two-chat pattern the prose describes
- Message Structure section qualifies that ToolCallPart.input is the
ai-client projection (server-side reads arguments directly)
- toHttpStream/Response comment in client connection example clarified
- Complete Example clarifies why convertToModelMessages disappears in
the After (chat() accepts UI messages directly)
- clientTools() auto-execution comment expanded to state that no
onToolCall/addToolOutput call is needed
- Anchor slug for Structured Output simplified to #structured-output
Rot hygiene:
- "current releases" removed from v5/v6 note
- "Every option" softened to "Options accepted ... as of AI SDK v6"
- "now expose" / "AI SDK v6 offers" / "v6 consolidated" reworded to
avoid tense decay across future releases
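The corrected multimodal image-source shape ({ type: 'url' | 'data', value, mimeType }) can be sketched as follows. Types here are illustrative; the authoritative definitions live in packages/typescript/ai/src/types.ts:

```typescript
// Sketch of the image-source shape the correction describes. Illustrative
// only; see packages/typescript/ai/src/types.ts for the real types.
type ImageSource =
  | { type: 'url'; value: string; mimeType?: string }
  | { type: 'data'; value: string; mimeType: string }; // base64 payload

function toDataUrl(source: ImageSource): string {
  // `type` discriminates between a fetchable URL and inline base64 data.
  return source.type === 'url'
    ? source.value
    : `data:${source.mimeType};base64,${source.value}`;
}

const fromUrl = toDataUrl({ type: 'url', value: 'https://example.com/cat.png' });
const fromData = toDataUrl({ type: 'data', value: 'AAAA', mimeType: 'image/png' });
```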
* docs: use toServerSentEventsResponse/toHttpResponse init options
Both helpers accept ResponseInit & { abortController }, so custom headers,
status, and cancellation flow through the helpers directly. Drop the
hand-rolled `new Response(toServerSentEventsStream(...), { headers: {...} })`
example and keep the raw stream helpers only for the genuine "pipe elsewhere"
case.
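The `ResponseInit & { abortController }` pattern this commit adopts can be sketched with the standard Response/Headers/ReadableStream Web APIs. The helper below is a stand-in, not the real toServerSentEventsResponse from @tanstack/ai:

```typescript
// Stand-in sketch of a helper accepting ResponseInit plus an
// abortController, in the shape the commit describes.
type StreamResponseInit = ResponseInit & { abortController?: AbortController };

function toSseResponse(
  stream: ReadableStream<Uint8Array>,
  init: StreamResponseInit = {},
): Response {
  const { abortController, ...responseInit } = init;
  // Cancel the source stream if the caller aborts.
  abortController?.signal.addEventListener('abort', () => {
    void stream.cancel(abortController.signal.reason);
  });
  // Caller headers flow through; the SSE content type is a default.
  const headers = new Headers(responseInit.headers);
  if (!headers.has('content-type')) {
    headers.set('content-type', 'text/event-stream');
  }
  return new Response(stream, { ...responseInit, headers });
}

const body = new ReadableStream<Uint8Array>();
const res = toSseResponse(body, {
  status: 201,
  headers: { 'x-request-id': 'abc' },
});
```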
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Alem Tuzlak <t.zlak@hotmail.com>
Co-authored-by: Jack Herrington <jherr@pobox.com>

12d43e55 chore: add retroactive changesets for PR #411 (AG-UI core interop) (#474)
chore: add changesets for PR #411 (AG-UI core interop)
The original PR was merged without changesets. This adds retroactive
changesets so the release pipeline picks up the breaking event-shape
work across core, adapters, client, and event-client.
- @tanstack/ai: major (flat RunErrorEvent, REASONING_* events, threadId/runId,
strip-to-spec middleware, AG-UI core EventType re-export, JSON-patch
StateDelta)
- All provider adapters (openai, anthropic, gemini, ollama, openrouter, grok,
groq): patch — emit AG-UI-compliant shapes; no public API change
- @tanstack/ai-client: patch — consumer side of the new event shapes
- @tanstack/ai-event-client: patch — devtools middleware alignment

91ec2053 feat: AG-UI core interop - spec-compliant event types (#411)
* feat(ai): add @ag-ui/core as dependency for spec-compliant event types
* feat(ai): redefine AG-UI event types extending @ag-ui/core
Replace custom AG-UI event types with interfaces that extend @ag-ui/core
types for spec compliance. This is the foundational type change for the
AG-UI protocol alignment.
- Import all event types from @ag-ui/core with AGUI* aliases
- Replace BaseAGUIEvent to extend @ag-ui/core BaseEvent
- Replace each event interface to extend its @ag-ui/core equivalent
- Add TanStack-internal extension fields (model, deprecated aliases)
- Add new event types: ToolCallResultEvent, Reasoning* events
- Deprecate AGUIEventType in favor of EventType enum
- Re-export EventType enum from @ag-ui/core
- Add threadId/runId to TextOptions interface
- Update AGUIEvent union and StreamChunk type alias
* feat(ai): add stripToSpec middleware to strip non-spec fields from stream events
Creates a middleware that removes TanStack-internal extension fields
(model, rawEvent, deprecated aliases) from StreamChunk events so the
yielded stream is @ag-ui/core spec-compliant. Registered as the last
middleware in the chat activity chain so devtools and user middleware
still see the full extended events.
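The field-stripping transform at the heart of this middleware can be sketched as a plain function over event objects. A simplified stand-in (the field names come from the commit text; later commits in this log narrow the strip set):

```typescript
// Simplified sketch of a strip-to-spec style transform: remove
// TanStack-internal extension fields from an event before yielding it.
const INTERNAL_FIELDS = ['model', 'rawEvent', 'toolName', 'stepId'] as const;

type StreamEvent = { type: string } & Record<string, unknown>;

function stripToSpec(event: StreamEvent): StreamEvent {
  // Copy first so upstream consumers (devtools, user middleware) still
  // see the full extended event.
  const copy: StreamEvent = { ...event };
  for (const field of INTERNAL_FIELDS) delete copy[field];
  return copy;
}

const raw: StreamEvent = {
  type: 'TOOL_CALL_START',
  toolCallId: 'tc1',
  toolCallName: 'search', // spec field, kept
  toolName: 'search',     // deprecated alias, stripped
  model: 'gpt-4o',        // internal extension, stripped
};
const spec = stripToSpec(raw);
```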
* feat(ai): plumb threadId and runId through chat() to adapters
Add threadId/runId to TextActivityOptions interface and TextEngine class
so they flow from user-facing chat() options through to adapter.chatStream().
ThreadId is auto-generated if not provided. Adapters will consume these
in subsequent tasks to include them in RUN_STARTED/RUN_FINISHED events.
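The ID plumbing can be sketched as a small resolver: honor caller-provided IDs and auto-generate when absent, using the `??` fallback pattern a later commit in this log applies across all adapters. The interface and prefixes below are illustrative:

```typescript
// Sketch of threadId/runId resolution: caller-provided IDs win,
// otherwise fresh IDs are generated. Prefixes are illustrative.
import { randomUUID } from 'node:crypto';

interface TextActivityOptions {
  threadId?: string;
  runId?: string;
}

function resolveIds(options: TextActivityOptions): {
  threadId: string;
  runId: string;
} {
  return {
    threadId: options.threadId ?? `thread_${randomUUID()}`,
    runId: options.runId ?? `run_${randomUUID()}`,
  };
}

const explicit = resolveIds({ threadId: 't-123', runId: 'r-456' });
const generated = resolveIds({});
```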
* feat(ai-openai): update text adapter for AG-UI spec compliance
Add threadId to RUN_STARTED/RUN_FINISHED events, toolCallName to
TOOL_CALL_START/TOOL_CALL_END, stepName to STEP_STARTED/STEP_FINISHED,
flatten RUN_ERROR with top-level message/code fields, and emit
REASONING_START/MESSAGE_START/CONTENT/MESSAGE_END/END events alongside
legacy STEP events for reasoning content.
* fix: update tests and fix type errors for AG-UI spec compliance
Update test utilities and tests to use AG-UI spec field names:
- Add threadId to RUN_STARTED/RUN_FINISHED events
- Add toolCallName alongside deprecated toolName on tool events
- Add stepName alongside deprecated stepId on step events
- Use flat message field on RUN_ERROR (with deprecated error nested form)
Fix critical bugs discovered during testing:
- StreamProcessor: prefer chunk.message over chunk.error?.message for RUN_ERROR
- TextEngine: process original chunks for internal state before middleware strips fields
- Remove auto-applied stripToSpecMiddleware from chat() (breaks internal state since
it strips finishReason, delta, content needed by TextEngine and StreamProcessor)
- Fix type compatibility issues with @ag-ui/core EventType enum vs string literals
Also fix type errors in:
- stream-generation-result.ts: use EventType enum and add threadId
- generateVideo/index.ts: add StreamChunk casts and threadId
- tool-calls.ts: cast TOOL_CALL_END yield to ToolCallEndEvent
- devtools-middleware.ts: handle toolCallName fallback and RUN_ERROR message field
- processor.ts: handle developer role, Messages snapshot type cast, finishReason undefined
* fix(ai): re-add stripToSpec middleware, process raw chunks internally, fix test assertions
* fix(ai): pipe tool-phase events through middleware, strip toolCallName, fix type errors
- Fix EventType enum vs string literal type errors in test files by
relaxing chunk helper type params and adding cast helpers
- Pipe tool-phase events (TOOL_CALL_END, TOOL_CALL_RESULT, CUSTOM)
through the middleware pipeline so strip-to-spec and devtools
middleware observe all events, not just model-stream events
- Add toolCallName to TOOL_CALL_END strip set in strip-to-spec
middleware since AG-UI spec ToolCallEndEvent only has toolCallId
- Update test assertions to use TOOL_CALL_RESULT (spec event) instead
of checking stripped fields on TOOL_CALL_END
* style: format test files
* test(ai): add tests for REASONING events, TOOL_CALL_RESULT, threadId, and strip compliance
* fix: resolve eslint, type, and test failures across all packages
- Fix 5 ESLint errors in @tanstack/ai (array-type, no-unnecessary-condition,
no-unnecessary-type-assertion, sort-imports)
- Fix ESLint error in @tanstack/ai-event-client (no-unnecessary-condition)
- Fix string literal vs EventType enum type errors across all 7 adapter
packages by adding asChunk helper that casts event objects to StreamChunk
- Fix @tanstack/ai-client source type errors (chunk.error possibly undefined,
runId access on RUN_ERROR events, connection-adapters push calls)
- Fix @tanstack/ai-client and @tanstack/ai-openrouter test type errors
- Fix tool-call-manager tests to use toolCallName instead of deprecated toolName
* fix: resolve eslint and test failures from strip-to-spec middleware
Remove assertions for fields (content, finishReason, usage) that the
stripToSpec middleware now strips from emitted events. Fix unnecessary
nullish coalescing in ai-openai and add type casts in ai-vue tests.
* fix: honor caller runId, prevent duplicate thinking, add error IDs, fix reasoning ordering
- Honor caller-provided runId/threadId in all 7 adapters using ?? fallback
- Prevent duplicate thinking content from dual STEP_FINISHED/REASONING_MESSAGE_CONTENT events
- Assert exact threadId value in chat test instead of just toBeDefined
- Add runId/threadId to RUN_ERROR in generateVideo and stream-generation-result
- Move reasoning processing before content processing in OpenRouter adapter
* fix(smoke-tests): update harness to read spec-compliant event fields
TOOL_CALL_START now uses toolCallName (spec) instead of toolName (deprecated).
TOOL_CALL_END fields (toolName, input, result) are stripped by spec middleware;
harness now falls back to data captured during START/ARGS phases.
Added TOOL_CALL_RESULT handler for spec-compliant tool result delivery.
RUN_FINISHED finishReason/usage are optional extensions.
* ci: apply automated fixes
* fix(smoke-tests): remove unnecessary as-any casts, use proper type narrowing
* ci: apply automated fixes
* fix(ai-ollama): emit TOOL_CALL_ARGS before TOOL_CALL_END for spec compliance
Ollama doesn't stream tool args incrementally — it delivers them all at once
in TOOL_CALL_END.input. Since the strip middleware removes input from
TOOL_CALL_END, consumers had no way to get the args. Now emits a
TOOL_CALL_ARGS event with the full args as delta before TOOL_CALL_END.
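The fix can be sketched as an event synthesizer: when a provider delivers tool args all at once, emit a TOOL_CALL_ARGS event carrying the full args as a single delta before TOOL_CALL_END. Event shapes below are simplified stand-ins for the AG-UI types:

```typescript
// Sketch of the Ollama fix: synthesize TOOL_CALL_ARGS so consumers still
// see the args after the strip middleware removes `input` from
// TOOL_CALL_END. Event shapes are simplified stand-ins.
interface ToolEvent {
  type: string;
  toolCallId: string;
  delta?: string;
  input?: string;
}

function emitToolCallEvents(toolCallId: string, input: string): ToolEvent[] {
  return [
    { type: 'TOOL_CALL_START', toolCallId },
    { type: 'TOOL_CALL_ARGS', toolCallId, delta: input }, // full args as one delta
    { type: 'TOOL_CALL_END', toolCallId },                // no input field needed
  ];
}

const toolEvents = emitToolCallEvents('tc1', '{"city":"Oslo"}');
```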
* fix(ai): hide strip-to-spec middleware from devtools instrumentation
* fix(ai): handle TOOL_CALL_RESULT in StreamProcessor to create tool-result parts
Root cause: The strip middleware removes 'result' from TOOL_CALL_END events.
The StreamProcessor's TOOL_CALL_END handler only creates tool-result parts
when chunk.result is present. With it stripped, no tool-result parts were
created on the client side.
TOOL_CALL_RESULT events (spec-compliant tool result delivery) were received
but ignored (no-op). Without tool-result parts, areAllToolsComplete() behaved
incorrectly, and the client could not detect server tool completion.
Fix: Handle TOOL_CALL_RESULT by creating tool-result parts and updating
tool-call output, mirroring TOOL_CALL_END's result handling logic.
* fix(ai): stop stripping finishReason from RUN_FINISHED events
finishReason is essential for client-side continuation logic. Without it,
the chat-client cannot distinguish 'stop' (no continuation needed) from
'tool_calls' (client tools need execution), causing infinite request loops
when server-side tool results leave tool-call parts as the last message part.
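The continuation decision this fix protects can be sketched in a few lines. A minimal model of the client-side logic described, under the assumption that these are the two finish reasons that matter here:

```typescript
// Sketch of the client continuation decision: finishReason on
// RUN_FINISHED tells the client whether another request is needed.
type FinishReason = 'stop' | 'tool_calls' | undefined;

function needsContinuation(
  finishReason: FinishReason,
  hasPendingClientTools: boolean,
): boolean {
  // 'tool_calls' means client tools still need to execute and their
  // results be sent back; 'stop' means the run is complete. With
  // finishReason stripped (undefined), the client cannot tell the two
  // apart, which is the bug that caused infinite request loops.
  return finishReason === 'tool_calls' && hasPendingClientTools;
}

const afterStop = needsContinuation('stop', false);
const afterTools = needsContinuation('tool_calls', true);
const afterStripped = needsContinuation(undefined, true);
```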
* fix(ai): only strip deprecated aliases and rawEvent, keep all extras
@ag-ui/core BaseEventSchema uses .passthrough(), so extra fields are allowed
and won't break spec validation. Only strip:
- Deprecated aliases: toolName, stepId, state (nudge toward spec names)
- Deprecated nested error object on RUN_ERROR
- rawEvent (debug payload, potentially large)
Keep everything else: model, content, args, usage, finishReason, input,
result, index, providerMetadata, stepType, delta, etc.
* ci: apply automated fixes
* fix(ai): stop stripping fields — passthrough allows all extras
@ag-ui/core BaseEventSchema uses .passthrough() so extra fields are allowed.
Only strip the deprecated nested error object from RUN_ERROR (conflicts with
spec's flat message/code). Everything else passes through: model, content,
toolName, stepId, usage, finishReason, result, input, args, etc.
* fix: resolve type errors from @ag-ui/core Zod passthrough types
Zod passthrough adds `& { [k: string]: unknown }` to inferred types,
preventing TypeScript from narrowing the `type` discriminant in switch
statements. Add explicit casts where needed. Also fix toolCallName ->
toolName rename in realtime types to match consumer code.
* ci: apply automated fixes
* chore(ai): bump @ag-ui/core from 0.0.48 to 0.0.49
Removes rxjs from the transitive dependency tree. All exported types
and EventType enum values are identical between versions.
* fix: CR fixes for AG-UI core interop
- Use this.threadId in createSyntheticFinishedEvent instead of
regenerating a new ID on each call
- Add defensive delta guard in handleReasoningMessageContentEvent
matching sibling handler patterns
- Prefer spec chunk.message over deprecated chunk.error in devtools
middleware, generation client, and video generation client
- Add flat message field to synthesized RUN_ERROR in connection adapters
- Fix processChunk JSDoc listing RUN_STARTED as ignored (it has a handler)
- Fix comment referencing toolName when code uses toolCallName
- Document RUN_ERROR in stream-generation-result @returns
- Add meaningful assertions to TOOL_CALL_RESULT processor test
- Clarify threadId test describes adapter passthrough behavior
* ci: apply automated fixes
* fix(ai-client): use cast for RUN_ERROR message to satisfy eslint
chunk.message is typed as required string by @ag-ui/core but may be
absent at runtime from events constructed via as-unknown casts.
Cast to string|undefined to allow the || fallback chain while keeping
the no-unnecessary-condition rule happy.
* fix(ai): add StreamChunk casts for TOOL_CALL_START/ARGS in continuation re-executions
The merge from main brought in #372 which emits these events but used
string literals instead of the AGUIEvent enum, breaking the build.
* fix(ai-openrouter): prevent duplicate TEXT_MESSAGE_END and RUN_FINISHED events
OpenAI-compatible APIs often send two chunks with finishReason — one for
the finish signal and a separate trailing chunk carrying usage data. The
adapter had no guard against this, causing TEXT_MESSAGE_END and
RUN_FINISHED to be emitted twice per run.
Root cause: processChoice emitted finish events on every finishReason
occurrence without tracking whether they had already been sent.
Fix:
- Add hasEmittedRunFinished / hasEmittedTextMessageEnd guards to AGUIState
- Accumulate usage from any finishReason chunk into deferredUsage
- Move RUN_FINISHED emission to after the stream loop so it always
carries the latest usage data (even when it arrives on a later chunk)
Adds tests for duplicate-finish-chunk scenarios, usage preservation, and
event ordering.
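The guard-plus-deferred-usage pattern this fix describes can be sketched as a small stream reducer. Chunk and event shapes below are simplified stand-ins:

```typescript
// Sketch of the duplicate-finish guard: OpenAI-compatible streams may
// carry finishReason twice (finish signal, then a trailing usage chunk).
// Emit TEXT_MESSAGE_END once, accumulate usage, and emit RUN_FINISHED
// after the loop so it carries the latest usage.
interface Chunk {
  finishReason?: string;
  usage?: { totalTokens: number };
}
interface OutEvent {
  type: string;
  usage?: { totalTokens: number };
}

function processStream(chunks: Chunk[]): OutEvent[] {
  const out: OutEvent[] = [];
  let hasEmittedTextMessageEnd = false;
  let deferredUsage: { totalTokens: number } | undefined;

  for (const chunk of chunks) {
    if (chunk.finishReason !== undefined) {
      if (chunk.usage) deferredUsage = chunk.usage; // keep the latest usage
      if (!hasEmittedTextMessageEnd) {
        out.push({ type: 'TEXT_MESSAGE_END' });
        hasEmittedTextMessageEnd = true;
      }
    }
  }
  // RUN_FINISHED is emitted exactly once, after the stream loop.
  out.push({ type: 'RUN_FINISHED', usage: deferredUsage });
  return out;
}

const finishEvents = processStream([
  { finishReason: 'stop' },                             // finish signal
  { finishReason: 'stop', usage: { totalTokens: 42 } }, // trailing usage chunk
]);
```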
* fix(ai-openrouter, ai): emit single STEP_FINISHED per reasoning block, remove [DONE] sentinel
STEP_FINISHED was emitted on every reasoning delta (N events for N
deltas) but only one STEP_STARTED was emitted, causing verifiers to
report orphan STEP_FINISHED events. Move the single STEP_FINISHED to
the point where reasoning closes (before text starts or at stream end)
so every STEP_STARTED has exactly one matching STEP_FINISHED.
Remove the `data: [DONE]\n\n` sentinel from toServerSentEventsStream.
The AG-UI protocol already uses RUN_FINISHED as the terminal event, so
the [DONE] marker is redundant and forces every client to special-case
non-JSON data in the SSE stream. Client-side parsers still tolerate
[DONE] for backward compatibility with external servers.
* fix(ai-client): warn when receiving deprecated [DONE] sentinel
Old servers still emit `data: [DONE]\n\n` after the stream. The client
already skips it, but now logs a deprecation warning so users know to
upgrade their @tanstack/ai server package.
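The tolerant parsing described above can be sketched as a small SSE data-line handler: skip the legacy `data: [DONE]` sentinel (warning once) while parsing JSON events, since RUN_FINISHED is the real terminal event under the AG-UI protocol. A simplified stand-in for the client's parser:

```typescript
// Sketch of client-side [DONE] tolerance: skip the deprecated sentinel
// with a one-time warning; all other data lines are JSON events.
function parseSseData(lines: string[]): Array<Record<string, unknown>> {
  const events: Array<Record<string, unknown>> = [];
  let warned = false;
  for (const line of lines) {
    if (!line.startsWith('data: ')) continue;
    const payload = line.slice('data: '.length);
    if (payload === '[DONE]') {
      if (!warned) {
        console.warn('[DONE] sentinel is deprecated; upgrade the server package');
        warned = true;
      }
      continue; // tolerated for backward compatibility, not an event
    }
    events.push(JSON.parse(payload) as Record<string, unknown>);
  }
  return events;
}

const parsed = parseSseData([
  'data: {"type":"TEXT_MESSAGE_CONTENT","delta":"hi"}',
  'data: {"type":"RUN_FINISHED"}',
  'data: [DONE]',
]);
```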
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com> 91ec2053 feat: AG-UI core interop - spec-compliant event types (#411)
* feat(ai): add @ag-ui/core as dependency for spec-compliant event types
* feat(ai): redefine AG-UI event types extending @ag-ui/core
Replace custom AG-UI event types with interfaces that extend @ag-ui/core
types for spec compliance. This is the foundational type change for the
AG-UI protocol alignment.
- Import all event types from @ag-ui/core with AGUI* aliases
- Replace BaseAGUIEvent to extend @ag-ui/core BaseEvent
- Replace each event interface to extend its @ag-ui/core equivalent
- Add TanStack-internal extension fields (model, deprecated aliases)
- Add new event types: ToolCallResultEvent, Reasoning* events
- Deprecate AGUIEventType in favor of EventType enum
- Re-export EventType enum from @ag-ui/core
- Add threadId/runId to TextOptions interface
- Update AGUIEvent union and StreamChunk type alias
* feat(ai): add stripToSpec middleware to strip non-spec fields from stream events
Creates a middleware that removes TanStack-internal extension fields
(model, rawEvent, deprecated aliases) from StreamChunk events so the
yielded stream is @ag-ui/core spec-compliant. Registered as the last
middleware in the chat activity chain so devtools and user middleware
still see the full extended events.
* feat(ai): plumb threadId and runId through chat() to adapters
Add threadId/runId to TextActivityOptions interface and TextEngine class
so they flow from user-facing chat() options through to adapter.chatStream().
ThreadId is auto-generated if not provided. Adapters will consume these
in subsequent tasks to include them in RUN_STARTED/RUN_FINISHED events.
* feat(ai-openai): update text adapter for AG-UI spec compliance
Add threadId to RUN_STARTED/RUN_FINISHED events, toolCallName to
TOOL_CALL_START/TOOL_CALL_END, stepName to STEP_STARTED/STEP_FINISHED,
flatten RUN_ERROR with top-level message/code fields, and emit
REASONING_START/MESSAGE_START/CONTENT/MESSAGE_END/END events alongside
legacy STEP events for reasoning content.
* fix: update tests and fix type errors for AG-UI spec compliance
Update test utilities and tests to use AG-UI spec field names:
- Add threadId to RUN_STARTED/RUN_FINISHED events
- Add toolCallName alongside deprecated toolName on tool events
- Add stepName alongside deprecated stepId on step events
- Use flat message field on RUN_ERROR (with deprecated error nested form)
Fix critical bugs discovered during testing:
- StreamProcessor: prefer chunk.message over chunk.error?.message for RUN_ERROR
- TextEngine: process original chunks for internal state before middleware strips fields
- Remove auto-applied stripToSpecMiddleware from chat() (breaks internal state since
it strips finishReason, delta, content needed by TextEngine and StreamProcessor)
- Fix type compatibility issues with @ag-ui/core EventType enum vs string literals
Also fix type errors in:
- stream-generation-result.ts: use EventType enum and add threadId
- generateVideo/index.ts: add StreamChunk casts and threadId
- tool-calls.ts: cast TOOL_CALL_END yield to ToolCallEndEvent
- devtools-middleware.ts: handle toolCallName fallback and RUN_ERROR message field
- processor.ts: handle developer role, Messages snapshot type cast, finishReason undefined
* fix(ai): re-add stripToSpec middleware, process raw chunks internally, fix test assertions
* fix(ai): pipe tool-phase events through middleware, strip toolCallName, fix type errors
- Fix EventType enum vs string literal type errors in test files by
relaxing chunk helper type params and adding cast helpers
- Pipe tool-phase events (TOOL_CALL_END, TOOL_CALL_RESULT, CUSTOM)
through the middleware pipeline so strip-to-spec and devtools
middleware observe all events, not just model-stream events
- Add toolCallName to TOOL_CALL_END strip set in strip-to-spec
middleware since AG-UI spec ToolCallEndEvent only has toolCallId
- Update test assertions to use TOOL_CALL_RESULT (spec event) instead
of checking stripped fields on TOOL_CALL_END
* style: format test files
* test(ai): add tests for REASONING events, TOOL_CALL_RESULT, threadId, and strip compliance
* fix: resolve eslint, type, and test failures across all packages
- Fix 5 ESLint errors in @tanstack/ai (array-type, no-unnecessary-condition,
no-unnecessary-type-assertion, sort-imports)
- Fix ESLint error in @tanstack/ai-event-client (no-unnecessary-condition)
- Fix string literal vs EventType enum type errors across all 7 adapter
packages by adding asChunk helper that casts event objects to StreamChunk
- Fix @tanstack/ai-client source type errors (chunk.error possibly undefined,
runId access on RUN_ERROR events, connection-adapters push calls)
- Fix @tanstack/ai-client and @tanstack/ai-openrouter test type errors
- Fix tool-call-manager tests to use toolCallName instead of deprecated toolName
* fix: resolve eslint and test failures from strip-to-spec middleware
Remove assertions for fields (content, finishReason, usage) that the
stripToSpec middleware now strips from emitted events. Fix unnecessary
nullish coalescing in ai-openai and add type casts in ai-vue tests.
* fix: honor caller runId, prevent duplicate thinking, add error IDs, fix reasoning ordering
- Honor caller-provided runId/threadId in all 7 adapters using ?? fallback
- Prevent duplicate thinking content from dual STEP_FINISHED/REASONING_MESSAGE_CONTENT events
- Assert exact threadId value in chat test instead of just toBeDefined
- Add runId/threadId to RUN_ERROR in generateVideo and stream-generation-result
- Move reasoning processing before content processing in OpenRouter adapter
* fix(smoke-tests): update harness to read spec-compliant event fields
TOOL_CALL_START now uses toolCallName (spec) instead of toolName (deprecated).
TOOL_CALL_END fields (toolName, input, result) are stripped by spec middleware;
harness now falls back to data captured during START/ARGS phases.
Added TOOL_CALL_RESULT handler for spec-compliant tool result delivery.
RUN_FINISHED finishReason/usage are optional extensions.
* ci: apply automated fixes
* fix(smoke-tests): remove unnecessary as-any casts, use proper type narrowing
* ci: apply automated fixes
* fix(ai-ollama): emit TOOL_CALL_ARGS before TOOL_CALL_END for spec compliance
Ollama doesn't stream tool args incrementally — it delivers them all at once
in TOOL_CALL_END.input. Since the strip middleware removes input from
TOOL_CALL_END, consumers had no way to get the args. The adapter now emits a
TOOL_CALL_ARGS event with the full args as a single delta before TOOL_CALL_END.
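An illustrative sketch of the ordering fix, not the adapter's actual code (the `EmittedEvent` shape and `emitOllamaToolCall` name are invented for the example):

```typescript
interface EmittedEvent {
  type: 'TOOL_CALL_ARGS' | 'TOOL_CALL_END'
  toolCallId: string
  delta?: string
}

function emitOllamaToolCall(
  toolCallId: string,
  args: Record<string, unknown>,
): EmittedEvent[] {
  return [
    // Ollama hands over the complete arguments in one shot, so they are
    // forwarded as a single TOOL_CALL_ARGS delta that survives the strip
    // middleware (which removes `input` from TOOL_CALL_END).
    { type: 'TOOL_CALL_ARGS', toolCallId, delta: JSON.stringify(args) },
    { type: 'TOOL_CALL_END', toolCallId },
  ]
}
```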
* fix(ai): hide strip-to-spec middleware from devtools instrumentation
* fix(ai): handle TOOL_CALL_RESULT in StreamProcessor to create tool-result parts
Root cause: The strip middleware removes 'result' from TOOL_CALL_END events.
The StreamProcessor's TOOL_CALL_END handler only creates tool-result parts
when chunk.result is present. With it stripped, no tool-result parts were
created on the client side.
TOOL_CALL_RESULT events (spec-compliant tool result delivery) were received
but ignored (no-op). Without tool-result parts, areAllToolsComplete() behaved
incorrectly, and the client could not detect server tool completion.
Fix: Handle TOOL_CALL_RESULT by creating tool-result parts and updating
tool-call output, mirroring TOOL_CALL_END's result handling logic.
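A simplified model of the client-side fix; the real StreamProcessor tracks full messages and richer part shapes, so the class and part types below are assumptions for illustration:

```typescript
interface ToolResultPart {
  type: 'tool-result'
  toolCallId: string
  output: unknown
}

class MiniProcessor {
  parts: ToolResultPart[] = []

  process(chunk: { type: string; toolCallId?: string; result?: unknown }): void {
    // Previously TOOL_CALL_RESULT was a no-op; now it creates the
    // tool-result part, mirroring the old TOOL_CALL_END result handling.
    if (chunk.type === 'TOOL_CALL_RESULT' && chunk.toolCallId) {
      this.parts.push({
        type: 'tool-result',
        toolCallId: chunk.toolCallId,
        output: chunk.result,
      })
    }
  }

  areAllToolsComplete(expectedToolCallIds: string[]): boolean {
    return expectedToolCallIds.every((id) =>
      this.parts.some((p) => p.toolCallId === id),
    )
  }
}
```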
* fix(ai): stop stripping finishReason from RUN_FINISHED events
finishReason is essential for client-side continuation logic. Without it,
the chat-client cannot distinguish 'stop' (no continuation needed) from
'tool_calls' (client tools need execution), causing infinite request loops
when server-side tool results leave tool-call parts as the last message part.
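A minimal sketch of the continuation decision this restores; the function name and part shape are assumptions, not the chat-client's actual API:

```typescript
interface ToolCallPart { type: 'tool-call' }
interface TextPart { type: 'text' }
type Part = ToolCallPart | TextPart

function needsContinuation(
  finishReason: string | undefined,
  lastPart: Part | undefined,
): boolean {
  // 'tool_calls' signals the client must execute tools and send results
  // back; 'stop' ends the run even when a tool-call part happens to be
  // last (e.g. because a server-side tool already resolved it).
  return finishReason === 'tool_calls' && lastPart?.type === 'tool-call'
}
```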
* fix(ai): only strip deprecated aliases and rawEvent, keep all extras
@ag-ui/core BaseEventSchema uses .passthrough(), so extra fields are allowed
and won't break spec validation. Only strip:
- Deprecated aliases: toolName, stepId, state (nudge toward spec names)
- Deprecated nested error object on RUN_ERROR
- rawEvent (debug payload, potentially large)
Keep everything else: model, content, args, usage, finishReason, input,
result, index, providerMetadata, stepType, delta, etc.
* ci: apply automated fixes
* fix(ai): stop stripping fields — passthrough allows all extras
@ag-ui/core BaseEventSchema uses .passthrough() so extra fields are allowed.
Only strip the deprecated nested error object from RUN_ERROR (conflicts with
spec's flat message/code). Everything else passes through: model, content,
toolName, stepId, usage, finishReason, result, input, args, etc.
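A hypothetical condensation of the final strip behavior: since passthrough makes extra fields spec-legal, only RUN_ERROR's deprecated nested `error` object (which conflicts with the spec's flat message/code) is removed.

```typescript
function stripToSpec(
  event: { type: string } & Record<string, unknown>,
): Record<string, unknown> {
  if (event.type === 'RUN_ERROR' && 'error' in event) {
    // Drop only the deprecated nested error object; the flat
    // message/code fields on the event itself are kept.
    const { error: _deprecated, ...rest } = event
    return rest
  }
  // Every other event passes through untouched, extras included.
  return event
}
```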
* fix: resolve type errors from @ag-ui/core Zod passthrough types
Zod passthrough adds `& { [k: string]: unknown }` to inferred types,
preventing TypeScript from narrowing the `type` discriminant in switch
statements. Add explicit casts where needed. Also fix toolCallName ->
toolName rename in realtime types to match consumer code.
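A self-contained illustration of the narrowing problem (no Zod import; the intersection below stands in for what `.passthrough()` inference produces, and the event types are invented for the example):

```typescript
type RunStarted = { type: 'RUN_STARTED'; runId: string }
type RunError = { type: 'RUN_ERROR'; message: string }
// Mimics a Zod passthrough-inferred type: the index-signature
// intersection can defeat discriminant narrowing in a switch.
type PassthroughEvent = (RunStarted | RunError) & { [k: string]: unknown }

function describeEvent(event: PassthroughEvent): string {
  // Cast back to the plain union so the `type` discriminant narrows.
  const e = event as RunStarted | RunError
  switch (e.type) {
    case 'RUN_STARTED':
      return `started ${e.runId}`
    case 'RUN_ERROR':
      return `error: ${e.message}`
  }
}
```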
* ci: apply automated fixes
* chore(ai): bump @ag-ui/core from 0.0.48 to 0.0.49
Removes rxjs from the transitive dependency tree. All exported types
and EventType enum values are identical between versions.
* fix: CR fixes for AG-UI core interop
- Use this.threadId in createSyntheticFinishedEvent instead of
regenerating a new ID on each call
- Add defensive delta guard in handleReasoningMessageContentEvent
matching sibling handler patterns
- Prefer spec chunk.message over deprecated chunk.error in devtools
middleware, generation client, and video generation client
- Add flat message field to synthesized RUN_ERROR in connection adapters
- Fix processChunk JSDoc listing RUN_STARTED as ignored (it has a handler)
- Fix comment referencing toolName when code uses toolCallName
- Document RUN_ERROR in stream-generation-result @returns
- Add meaningful assertions to TOOL_CALL_RESULT processor test
- Clarify threadId test describes adapter passthrough behavior
* ci: apply automated fixes
* fix(ai-client): use cast for RUN_ERROR message to satisfy eslint
chunk.message is typed as required string by @ag-ui/core but may be
absent at runtime from events constructed via as-unknown casts.
Cast to string|undefined to allow the || fallback chain while keeping
the no-unnecessary-condition rule happy.
* fix(ai): add StreamChunk casts for TOOL_CALL_START/ARGS in continuation re-executions
The merge from main brought in #372 which emits these events but used
string literals instead of the AGUIEvent enum, breaking the build.
* fix(ai-openrouter): prevent duplicate TEXT_MESSAGE_END and RUN_FINISHED events
OpenAI-compatible APIs often send two chunks with finishReason — one for
the finish signal and a separate trailing chunk carrying usage data. The
adapter had no guard against this, causing TEXT_MESSAGE_END and
RUN_FINISHED to be emitted twice per run.
Root cause: processChoice emitted finish events on every finishReason
occurrence without tracking whether they had already been sent.
Fix:
- Add hasEmittedRunFinished / hasEmittedTextMessageEnd guards to AGUIState
- Accumulate usage from any finishReason chunk into deferredUsage
- Move RUN_FINISHED emission to after the stream loop so it always
carries the latest usage data (even when it arrives on a later chunk)
Adds tests for duplicate-finish-chunk scenarios, usage preservation, and
event ordering.
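A miniature model of the guard logic; the names mirror the description above, but the code is illustrative rather than the adapter source:

```typescript
interface Usage { totalTokens: number }
interface AGUIState {
  hasEmittedTextMessageEnd: boolean
  deferredUsage?: Usage
}
type Emit = (event: { type: string; usage?: Usage }) => void

function processFinishChunk(state: AGUIState, usage: Usage | undefined, emit: Emit): void {
  // Accumulate usage from any finish chunk; some OpenAI-compatible APIs
  // send it on a trailing chunk separate from the finish signal.
  if (usage) state.deferredUsage = usage
  // Emit TEXT_MESSAGE_END at most once per run.
  if (!state.hasEmittedTextMessageEnd) {
    state.hasEmittedTextMessageEnd = true
    emit({ type: 'TEXT_MESSAGE_END' })
  }
}

// Called after the stream loop: RUN_FINISHED goes out exactly once,
// carrying the latest usage seen.
function finishRun(state: AGUIState, emit: Emit): void {
  emit({ type: 'RUN_FINISHED', usage: state.deferredUsage })
}
```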
* fix(ai-openrouter, ai): emit single STEP_FINISHED per reasoning block, remove [DONE] sentinel
STEP_FINISHED was emitted on every reasoning delta (N events for N
deltas) but only one STEP_STARTED was emitted, causing verifiers to
report orphan STEP_FINISHED events. Move the single STEP_FINISHED to
the point where reasoning closes (before text starts or at stream end)
so every STEP_STARTED has exactly one matching STEP_FINISHED.
Remove the `data: [DONE]\n\n` sentinel from toServerSentEventsStream.
The AG-UI protocol already uses RUN_FINISHED as the terminal event, so
the [DONE] marker is redundant and forces every client to special-case
non-JSON data in the SSE stream. Client-side parsers still tolerate
[DONE] for backward compatibility with external servers.
* fix(ai-client): warn when receiving deprecated [DONE] sentinel
Old servers still emit `data: [DONE]\n\n` after the stream. The client
already skips it, but now logs a deprecation warning so users know to
upgrade their @tanstack/ai server package.
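A sketch of the client-side handling the two commits above describe: skip the legacy sentinel but warn once. The function name and warn-once flag are invented for the example.

```typescript
let warnedAboutDone = false

function parseSSEData(line: string): unknown {
  if (!line.startsWith('data: ')) return null
  const payload = line.slice('data: '.length)
  if (payload.trim() === '[DONE]') {
    // Tolerated for older/external servers; RUN_FINISHED is the real
    // terminal event in the AG-UI protocol.
    if (!warnedAboutDone) {
      warnedAboutDone = true
      console.warn(
        '[ai-client] received deprecated [DONE] sentinel; upgrade your @tanstack/ai server',
      )
    }
    return null
  }
  return JSON.parse(payload)
}
```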
---------
Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>