Latest CI Pipeline Executions
3681c9e8 Create migration guide from AI SDK to TanStack (#179)
* docs: add migration guide from Vercel AI SDK to TanStack AI
Add comprehensive migration guide covering:
- Package installation differences
- Server-side API migration (streamText -> chat)
- Client-side useChat hook differences
- Isomorphic tool system migration
- Provider adapter changes (OpenAI, Anthropic, Gemini)
- Streaming response formats
- Multimodal content handling
- Type safety enhancements
- Complete before/after code examples
* docs: address review feedback on Vercel AI migration guide
- Installation: add @ai-sdk/react to v5+ deps and update quick-reference
table to show the v5 framework package names
- System prompts: use root-level systemPrompts: [...] instead of
prepending a system message to the messages array (verified against
packages/typescript/ai/src/types.ts)
- useChat API table: rewrite against current Vercel AI SDK v5+ API
(sendMessage, status, regenerate, DefaultChatTransport) so the
comparison is accurate rather than mixing v4/v5
- MessagePart: expand to full discriminated union with real field names
(arguments/input/approval on tool-call, content on tool-result) and
real ToolCallState values
- Fix nonexistent toStreamResponse references -> toServerSentEventsResponse
(and add toHttpResponse where appropriate)
- Fix AbortController section heading (h4 -> h3, resolves MD001)
- Update tool schema section to note parameters -> inputSchema rename
in AI SDK v5
- Tighten tool approval example with optional chaining and a note on
arguments vs parsed input
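The MessagePart expansion described above can be sketched as a discriminated union. This is a hedged illustration: the field names (arguments/input/approval on tool-call, content on tool-result) follow the commit notes, but the exact @tanstack/ai types and the concrete ToolCallState values are not reproduced here.

```typescript
// Hedged sketch of the MessagePart discriminated union described above.
// Field names follow the commit notes; the real @tanstack/ai types differ
// in detail (e.g. the full ToolCallState union is omitted).
type MessagePart =
  | { type: 'text'; content: string }
  | {
      type: 'tool-call'
      id: string
      name: string
      arguments: string // raw JSON string from the model
      input?: unknown // parsed client-side projection
      approval?: { approved: boolean }
    }
  | { type: 'tool-result'; toolCallId: string; content: unknown }

// The `type` discriminant lets TypeScript narrow each branch:
function summarize(part: MessagePart): string {
  switch (part.type) {
    case 'text':
      return part.content
    case 'tool-call':
      return `call ${part.name}(${part.arguments})`
    case 'tool-result':
      return `result for ${part.toolCallId}`
  }
}
```

Exhaustive switch narrowing is what makes the full discriminated union worth documenting: consumers get compile-time coverage of every part kind.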
* docs: expand Vercel AI SDK migration guide with full option + v6 coverage
- Add exhaustive streamText -> chat() option mapping table covering
every AI SDK v6 parameter (tools, toolChoice, activeTools, stopWhen,
prepareStep, experimental_transform/context/telemetry/repairToolCall,
all sampling controls, abort, headers, providerOptions -> modelOptions)
- Add streamText result -> TanStack equivalent table (textStream,
fullStream, text, usage, finishReason, steps, toUIMessageStreamResponse,
pipeTextStreamToResponse, consumeStream, etc.)
- Expand Generation Options with topK/presence/frequency/seed/stop under
modelOptions, clarify flat typed modelOptions vs provider-keyed
providerOptions
- New section: Structured Output (generateObject / streamObject / v6
Output.object) -> outputSchema on chat(); notes on Standard Schema
libraries, provider strategies, and the current gap for partial
object streaming
- New section: Agent Loop Control — stopWhen / hasToolCall / stepCountIs
mapped to maxIterations / untilFinishReason / combineStrategies, and
prepareStep mapped to middleware onConfig/onIteration
- New section: Middleware — wrapLanguageModel + experimental_transform
mapped to a single ChatMiddleware array; full hook inventory;
toolCacheMiddleware usage; common-pattern mapping table
- New section: Observability — where to plug logging/metrics/tracing
- Update generateText coverage to chat({ stream: false }) returning a
real Promise<string> (not just streamToText)
- Update Tool Approval "Before" to show AI SDK v6's native needsApproval
+ sendAutomaticallyWhen flow; the two APIs are now symmetric
- Reframe "Removed Features" -> "Features Not Yet Covered" and scope
it to embeddings, partial-object streaming, built-in retries/timeouts
- Update frontmatter for the docs/migration/ location (order, description,
keywords); fix cross-links to the new directory layout
(../advanced/middleware, ../chat/structured-outputs, etc.)
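The agent-loop mapping above (stopWhen-style predicates onto maxIterations / untilFinishReason / combineStrategies) can be illustrated with a minimal sketch. The strategy shape used here, a predicate over loop state that returns true when the loop should stop, is an assumption for illustration, not the real @tanstack/ai signature.

```typescript
// Minimal sketch of composable stop strategies. Names mirror the guide
// (maxIterations, untilFinishReason, combineStrategies); the actual
// AgentLoopStrategy interface in @tanstack/ai may differ.
interface LoopState {
  iteration: number
  finishReason?: string
}

type AgentLoopStrategy = (state: LoopState) => boolean

const maxIterations =
  (n: number): AgentLoopStrategy =>
  (state) =>
    state.iteration >= n

const untilFinishReason =
  (reason: string): AgentLoopStrategy =>
  (state) =>
    state.finishReason === reason

// Stop as soon as any member strategy says stop:
const combineStrategies =
  (...strategies: AgentLoopStrategy[]): AgentLoopStrategy =>
  (state) =>
    strategies.some((s) => s(state))

const shouldStop = combineStrategies(maxIterations(5), untilFinishReason('stop'))
```

Under this shape, a stepCountIs- or hasToolCall-style condition becomes just another custom predicate passed to combineStrategies.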
* docs: round-1 CR fixes on Vercel AI migration guide
Factual corrections verified against source:
- Multimodal image source shape uses { type: 'url'|'data', value, mimeType }
not { url, base64, mediaType } (types.ts:142-183)
- toolCacheMiddleware is exported from @tanstack/ai/middlewares, not the
root (packages/typescript/ai/src/middlewares/index.ts)
- toolDefinition({ description }) is required; add it to the two doc
examples that were missing it (tool-definition.ts:31)
- stream() connection adapter factory is (messages, data?) with no
signal arg; rewrite custom-adapter example (connection-adapters.ts:441)
AI SDK v6 accuracy:
- addToolResult -> addToolOutput (v6 rename)
- experimental_output -> output (de-experimentalized)
- Soften "replaced" claim about generateObject/streamObject — they are
deprecated, not removed
- Vercel addToolApprovalResponse row: v6 has this; replace "N/A"
- First Basic Text Generation Before example now uses v5+ API
(convertToModelMessages + toUIMessageStreamResponse) with a v4
toDataStreamResponse callout
Consistency:
- Agent-loop tables reconciled: only one truly built-in strategy
(maxIterations / untilFinishReason / combineStrategies); hasToolCall
requires a custom AgentLoopStrategy. Both tables now agree.
- prepareStep Before/After actually demonstrates equivalent behavior:
Before shows step-level config tweak, After uses onConfig;
mid-loop model switching split into its own subsection with the
two-chat pattern the prose describes
- Message Structure section qualifies that ToolCallPart.input is the
ai-client projection (server-side reads arguments directly)
- toHttpStream/Response comment in client connection example clarified
- Complete Example clarifies why convertToModelMessages disappears in
the After (chat() accepts UI messages directly)
- clientTools() auto-execution comment expanded to state that no
onToolCall/addToolOutput call is needed
- Anchor slug for Structured Output simplified to #structured-output
Rot hygiene:
- "current releases" removed from v5/v6 note
- "Every option" softened to "Options accepted ... as of AI SDK v6"
- "now expose" / "AI SDK v6 offers" / "v6 consolidated" reworded to
avoid tense decay across future releases
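The corrected image-source shape from the first bullet above ({ type: 'url' | 'data', value, mimeType }) can be sketched as follows; the two helper constructors are illustrative conveniences, not library exports.

```typescript
// Sketch of the corrected image-source shape per the commit above
// ({ type: 'url' | 'data', value, mimeType }, replacing the old
// { url, base64, mediaType }). Helper constructors are hypothetical.
type ImageSource =
  | { type: 'url'; value: string; mimeType?: string }
  | { type: 'data'; value: string; mimeType: string } // base64 payload

const imageFromUrl = (url: string): ImageSource => ({
  type: 'url',
  value: url,
})

const imageFromBase64 = (data: string, mimeType: string): ImageSource => ({
  type: 'data',
  value: data,
  mimeType,
})
```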
* docs: use toServerSentEventsResponse/toHttpResponse init options
Both helpers accept ResponseInit & { abortController }, so custom headers,
status, and cancellation flow through the helpers directly. Drop the
hand-rolled `new Response(toServerSentEventsStream(...), { headers: {...} })`
example and keep the raw stream helpers only for the genuine "pipe elsewhere"
case.
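The init-options pattern this commit adopts can be sketched as a standalone approximation: a helper accepting ResponseInit & { abortController } so headers, status, and cancellation flow through one call. This is not the real @tanstack/ai export, just an illustration of the described signature using the Web Response/ReadableStream globals (Node 18+).

```typescript
// Standalone approximation of a toServerSentEventsResponse-style helper
// accepting ResponseInit & { abortController }, per the commit above.
// The real @tanstack/ai implementation differs.
type SseResponseInit = ResponseInit & { abortController?: AbortController }

function toServerSentEventsResponse(
  stream: ReadableStream<Uint8Array>,
  init: SseResponseInit = {},
): Response {
  const { abortController, ...responseInit } = init
  // Cancel the underlying stream when the caller aborts.
  abortController?.signal.addEventListener('abort', () => {
    void stream.cancel()
  })
  const headers = new Headers(responseInit.headers)
  if (!headers.has('Content-Type')) {
    headers.set('Content-Type', 'text/event-stream')
  }
  return new Response(stream, { ...responseInit, headers })
}
```

Callers then pass `{ status, headers, abortController }` directly instead of hand-rolling `new Response(...)` around a raw stream helper.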
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Alem Tuzlak <t.zlak@hotmail.com>
Co-authored-by: Jack Herrington <jherr@pobox.com>

6b69e96e docs: expand Vercel AI SDK migration guide with full option + v6 coverage
- Add exhaustive streamText -> chat() option mapping table covering
every AI SDK v6 parameter (tools, toolChoice, activeTools, stopWhen,
prepareStep, experimental_transform/context/telemetry/repairToolCall,
all sampling controls, abort, headers, providerOptions -> modelOptions)
- Add streamText result -> TanStack equivalent table (textStream,
fullStream, text, usage, finishReason, steps, toUIMessageStreamResponse,
pipeTextStreamToResponse, consumeStream, etc.)
- Expand Generation Options with topK/presence/frequency/seed/stop under
modelOptions, clarify flat typed modelOptions vs provider-keyed
providerOptions
- New section: Structured Output (generateObject / streamObject / v6
Output.object) -> outputSchema on chat(); notes on Standard Schema
libraries, provider strategies, and the current gap for partial
object streaming
- New section: Agent Loop Control — stopWhen / hasToolCall / stepCountIs
mapped to maxIterations / untilFinishReason / combineStrategies, and
prepareStep mapped to middleware onConfig/onIteration
- New section: Middleware — wrapLanguageModel + experimental_transform
mapped to a single ChatMiddleware array; full hook inventory;
toolCacheMiddleware usage; common-pattern mapping table
- New section: Observability — where to plug logging/metrics/tracing
- Update generateText coverage to chat({ stream: false }) returning a
real Promise<string> (not just streamToText)
- Update Tool Approval "Before" to show AI SDK v6's native needsApproval
+ sendAutomaticallyWhen flow; the two APIs are now symmetric
- Reframe "Removed Features" -> "Features Not Yet Covered" and scope
it to embeddings, partial-object streaming, built-in retries/timeouts
- Update frontmatter for the docs/migration/ location (order, description,
keywords); fix cross-links to the new directory layout
(../advanced/middleware, ../chat/structured-outputs, etc.)