fix(ai-sdk): support AI SDK v6 native tool approval flow #15345
TheIsrael1 wants to merge 4 commits into main
Conversation
- `convertMastraChunkToAISDKv6` now emits `tool-approval-request` (v6 native) alongside `data-tool-call-approval` (backwards compat) for `tool-call-approval` chunks
- `handleChatStream` auto-detects AI SDK v6 `approve()` submissions and routes to `resumeStream` instead of `stream`
- `approvalId` is encoded as `"${runId}::${toolCallId}"` so the server recovers the `runId` without a DB lookup
- Uses `lastIndexOf` for safe separator parsing
Fixes #14818
Fixes #15268
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
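As a rough illustration of the composite `approvalId` scheme described above (the helper names here are hypothetical, not actual `@mastra/ai-sdk` exports), encoding and `lastIndexOf`-based parsing might look like:

```typescript
// Hypothetical helpers sketching the "${runId}::${toolCallId}" scheme.
const SEPARATOR = "::";

function encodeApprovalId(runId: string, toolCallId: string): string {
  return `${runId}${SEPARATOR}${toolCallId}`;
}

function decodeApprovalId(approvalId: string): { runId: string; toolCallId: string } {
  // lastIndexOf keeps parsing safe even if the runId itself contains "::".
  const idx = approvalId.lastIndexOf(SEPARATOR);
  if (idx === -1) {
    throw new Error(`Malformed approvalId: ${approvalId}`);
  }
  return {
    runId: approvalId.slice(0, idx),
    toolCallId: approvalId.slice(idx + SEPARATOR.length),
  };
}
```

Using `lastIndexOf` rather than `split("::")` means a `runId` containing the separator still round-trips correctly, as long as the `toolCallId` does not contain it.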
🦋 Changeset detected. Latest commit: fd384fd. The changes in this PR will be included in the next version bump. This PR includes changesets to release 4 packages.
Walkthrough: Emits v6 native …
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ❌ 1 failed (1 warning) | ✅ 4 passed
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.changeset/seven-suns-dress.md:
- Line 5: The changeset text references backwards compatibility with
`@mastra/react` which is not in the frontmatter; update the entry so it only
describes effects for `@mastra/ai-sdk` (e.g., “handleChatStream now routes to
resumeStream when client.approve() is used, and the v6 stream emits native
tool-approval-request parts in addition to data-tool-call-approval for
`@mastra/ai-sdk` consumers”) or create a separate changeset describing the
`@mastra/react` change; ensure references to handleChatStream, resumeStream,
approve(), tool-approval-request and data-tool-call-approval remain accurate but
do not mention packages not listed in the frontmatter.
In `@client-sdks/ai-sdk/src/chat-route.ts`:
- Around line 33-58: extractV6NativeApproval currently scans the entire
conversation and can pick up old "approval-responded" parts; change it to only
inspect the trailing assistant turn that approve() re-submits: find the last
message with message.role === 'assistant' (from the end), then iterate that
single message.parts from the end and return the resumeData/runId when
encountering a part where isToolUIPart(part) && part.state ===
'approval-responded'; this prevents handleChatStream/resumeStream from resuming
on historical approvals.
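The trailing-turn scan this comment asks for can be sketched roughly as follows. The message and part shapes are simplified stand-ins for the AI SDK v6 UI message types, and the `startsWith("tool-")` check stands in for `isToolUIPart`; this is not the actual `@mastra/ai-sdk` implementation.

```typescript
// Simplified stand-ins for the AI SDK v6 UIMessage/part shapes.
type UIPart = { type: string; state?: string; approvalId?: string };
type UIMessage = { role: "user" | "assistant" | "system"; parts: UIPart[] };

function extractTrailingApproval(messages: UIMessage[]): { runId: string; toolCallId: string } | undefined {
  // Walk back to the *last* assistant message only; older turns are never
  // inspected, so historical "approval-responded" parts cannot re-trigger a resume.
  for (let i = messages.length - 1; i >= 0; i--) {
    const msg = messages[i];
    if (msg.role !== "assistant") continue;
    for (let j = msg.parts.length - 1; j >= 0; j--) {
      const part = msg.parts[j];
      if (part.type.startsWith("tool-") && part.state === "approval-responded" && part.approvalId) {
        // approvalId is the composite "${runId}::${toolCallId}".
        const idx = part.approvalId.lastIndexOf("::");
        if (idx === -1) return undefined;
        return { runId: part.approvalId.slice(0, idx), toolCallId: part.approvalId.slice(idx + 2) };
      }
    }
    return undefined; // only the trailing assistant turn is considered
  }
  return undefined;
}
```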
In `@client-sdks/ai-sdk/src/transformers.ts`:
- Around line 281-327: transformWorkflow currently assumes
convertMastraChunkToAISDK returns a single part and thus drops additional parts
from v6 chunks; update the workflow transformer to accept and handle an array of
parts from convertMastraChunkToAISDK (or flatten its result) so each returned
part is converted/enqueued (e.g., in the workflow-step-output handling inside
transformWorkflow, iterate over the array returned by convertMastraChunkToAISDK
and call the same conversion/enqueue logic for each element), ensuring
tool-call-approval v6 chunks produce both tool-approval-request and the legacy
data chunk; reference transformWorkflow and convertMastraChunkToAISDK in your
changes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: f27d3a41-28fd-42e6-9269-733f94925b60
📒 Files selected for processing (6)
- .changeset/seven-suns-dress.md
- .changeset/tired-comics-cover.md
- client-sdks/ai-sdk/src/__tests__/tool-call-approval.test.ts
- client-sdks/ai-sdk/src/chat-route.ts
- client-sdks/ai-sdk/src/helpers.ts
- client-sdks/ai-sdk/src/transformers.ts
- extractV6NativeApproval now only inspects the last trailing assistant message so approval-responded parts from earlier turns are never re-processed
- transformWorkflow workflow-step-output case now handles array returns from convertMastraChunkToAISDK (triggered by tool-call-approval v6 chunks); update both callers to enqueue each element of the array
- changeset: drop reference to @mastra/react which is not in frontmatter

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@client-sdks/ai-sdk/src/transformers.ts`:
- Around line 740-754: The branch handling Array.isArray(part) can return nested
arrays because convertFullStreamChunkToUIMessageStream may expand a single part
into multiple UI chunks; change the map(...).filter(Boolean) pipeline to produce
a flattened array (e.g., use flatMap to call
convertFullStreamChunkToUIMessageStream for each p and flatten the results, then
filter falsy values) so the returned value contains only UI chunks (no inner
arrays). Keep the same arguments and onError callback (safeParseErrorObject)
when calling convertFullStreamChunkToUIMessageStream and ensure the final result
type matches the surrounding convertMastraChunkToAISDK / transformers.ts
expectations.
- Around line 287-328: The code assumes
convertFullStreamChunkToUIMessageStream(...) and transformNetwork(...) return
single chunks, but they may return arrays; update enqueueTransformedPart to
flatten any array-returning helpers: if transformedChunk is an array, iterate
over each element and apply the same branching logic (check .type and call
transformAgent, transformWorkflow, transformNetwork or controller.enqueue for
each), and similarly if transformNetwork(...) returns an array iterate and
enqueue each item rather than enqueuing the array itself; ensure you reuse the
existing branches for 'tool-agent', 'tool-workflow', 'tool-network' so behavior
stays consistent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 84e426bd-e5af-4c8a-8861-cb2eaeebd758
📒 Files selected for processing (3)
- .changeset/seven-suns-dress.md
- client-sdks/ai-sdk/src/chat-route.ts
- client-sdks/ai-sdk/src/transformers.ts
✅ Files skipped from review due to trivial changes (1)
- .changeset/seven-suns-dress.md
🚧 Files skipped from review as they are similar to previous changes (1)
- client-sdks/ai-sdk/src/chat-route.ts
…sformedPart: transformNetwork returns TransformNetworkResult[] in the network-execution-event-step-finish case; previously the array was passed directly to controller.enqueue. Now treated consistently with the workflow branch.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
♻️ Duplicate comments (2)
client-sdks/ai-sdk/src/transformers.ts (2)
287-336: ⚠️ Potential issue | 🟠 Major

Flatten `convertFullStreamChunkToUIMessageStream` results before branching on `.type`.

`enqueueTransformedPart` still assumes a single transformed chunk. If the converter returns an array, it gets enqueued as one invalid chunk payload instead of fan-out processing.

Suggested fix:

```diff
 const enqueueTransformedPart = (p: any) => {
-  const transformedChunk = convertFullStreamChunkToUIMessageStream<any>({
+  const transformedChunk = convertFullStreamChunkToUIMessageStream<any>({
     part: p as any,
     sendReasoning,
     sendSources,
     messageMetadataValue: p ? messageMetadata?.({ part: p as TextStreamPart<ToolSet> }) : undefined,
     sendStart,
@@
-  if (transformedChunk) {
-    if (transformedChunk.type === 'tool-agent') {
-      const payload = transformedChunk.payload;
+  const transformedChunks = Array.isArray(transformedChunk)
+    ? transformedChunk
+    : transformedChunk
+      ? [transformedChunk]
+      : [];
+
+  for (const chunk of transformedChunks) {
+    if (chunk.type === 'tool-agent') {
+      const payload = chunk.payload;
       const agentTransformed = transformAgent<OUTPUT>(payload, bufferedSteps);
       if (agentTransformed) controller.enqueue(agentTransformed);
-    } else if (transformedChunk.type === 'tool-workflow') {
-      const payload = transformedChunk.payload;
+    } else if (chunk.type === 'tool-workflow') {
+      const payload = chunk.payload;
       const workflowChunk = transformWorkflow(
         payload,
         bufferedSteps,
@@
-    } else if (transformedChunk.type === 'tool-network') {
-      const payload = transformedChunk.payload;
+    } else if (chunk.type === 'tool-network') {
+      const payload = chunk.payload;
       const networkChunk = transformNetwork(payload, bufferedSteps, true);
       if (Array.isArray(networkChunk)) {
         for (const c of networkChunk) {
           if (c) controller.enqueue(c);
         }
       } else if (networkChunk) {
         controller.enqueue(networkChunk);
       }
     } else {
-      controller.enqueue(transformedChunk as any);
+      controller.enqueue(chunk as any);
     }
   }
-  }
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client-sdks/ai-sdk/src/transformers.ts` around lines 287 - 336, enqueueTransformedPart assumes convertFullStreamChunkToUIMessageStream returns a single object but it may return an array; change the logic to first normalize/flatten the result of convertFullStreamChunkToUIMessageStream (when called in enqueueTransformedPart) into an array (e.g., wrap non-array into a single-element array), then iterate each transformed item and perform the existing branching on item.type (handling 'tool-agent' via transformAgent, 'tool-workflow' via transformWorkflow, 'tool-network' via transformNetwork, and the default case) enqueuing each resulting message via controller.enqueue; ensure onError handling and existing parameters (sendReasoning, sendSources, messageMetadataValue, sendStart, sendFinish, responseMessageId) are preserved when creating/processing each item.
746-760: ⚠️ Potential issue | 🟠 Major

`map(...).filter(Boolean)` still returns nested arrays in the array-part branch. When `convertFullStreamChunkToUIMessageStream(...)` returns arrays, this code returns `Array<(chunk | chunk[])>` and inner arrays leak to downstream enqueue paths.

Suggested fix:

```diff
 if (Array.isArray(part)) {
-  return part
-    .map(p =>
-      convertFullStreamChunkToUIMessageStream({
-        part: p as any,
-        sendReasoning: streamOptions?.sendReasoning,
-        sendSources: streamOptions?.sendSources,
-        onError(error) {
-          return safeParseErrorObject(error);
-        },
-      }),
-    )
-    .filter(Boolean);
+  return part.flatMap(p => {
+    const transformed = convertFullStreamChunkToUIMessageStream({
+      part: p as any,
+      sendReasoning: streamOptions?.sendReasoning,
+      sendSources: streamOptions?.sendSources,
+      onError(error) {
+        return safeParseErrorObject(error);
+      },
+    });
+
+    return Array.isArray(transformed) ? transformed.filter(Boolean) : transformed ? [transformed] : [];
+  });
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@client-sdks/ai-sdk/src/transformers.ts` around lines 746 - 760, The branch handling Array.isArray(part) can produce nested arrays because convertFullStreamChunkToUIMessageStream may return arrays; update this branch to flatten the results (e.g., use flatMap or map + flat) so it returns a single-level Array of chunks before filtering. Locate the mapping that calls convertFullStreamChunkToUIMessageStream (the block that passes part: p, sendReasoning/sendSources, onError) and replace the map(...).filter(Boolean) pattern with a flattening approach and then filter(Boolean) to ensure downstream enqueue paths receive only non-nested chunk items.
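The normalization both comments ask for can be factored as a small standalone helper. This is a hedged sketch under assumed types (the name `flattenConverted` and the `UIChunk` shape are illustrative; in `transformers.ts` the real fix lives inline, with `convertChunk` standing in for `convertFullStreamChunkToUIMessageStream`):

```typescript
// Minimal UI chunk stand-in; the real type carries a payload as well.
type UIChunk = { type: string };

// Normalize a converter that may return a single chunk, an array of chunks,
// or nothing into one flat list, so downstream enqueue paths never see
// nested arrays.
function flattenConverted<T>(
  parts: T[],
  convertChunk: (part: T) => UIChunk | UIChunk[] | undefined,
): UIChunk[] {
  return parts.flatMap(part => {
    const converted = convertChunk(part);
    if (Array.isArray(converted)) return converted.filter(Boolean);
    return converted ? [converted] : [];
  });
}
```

`flatMap` does the single-level flattening that `map(...).filter(Boolean)` misses: each callback returns an array, and the results are concatenated rather than nested.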
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@client-sdks/ai-sdk/src/transformers.ts`:
- Around line 287-336: enqueueTransformedPart assumes
convertFullStreamChunkToUIMessageStream returns a single object but it may
return an array; change the logic to first normalize/flatten the result of
convertFullStreamChunkToUIMessageStream (when called in enqueueTransformedPart)
into an array (e.g., wrap non-array into a single-element array), then iterate
each transformed item and perform the existing branching on item.type (handling
'tool-agent' via transformAgent, 'tool-workflow' via transformWorkflow,
'tool-network' via transformNetwork, and the default case) enqueuing each
resulting message via controller.enqueue; ensure onError handling and existing
parameters (sendReasoning, sendSources, messageMetadataValue, sendStart,
sendFinish, responseMessageId) are preserved when creating/processing each item.
- Around line 746-760: The branch handling Array.isArray(part) can produce
nested arrays because convertFullStreamChunkToUIMessageStream may return arrays;
update this branch to flatten the results (e.g., use flatMap or map + flat) so
it returns a single-level Array of chunks before filtering. Locate the mapping
that calls convertFullStreamChunkToUIMessageStream (the block that passes part:
p, sendReasoning/sendSources, onError) and replace the map(...).filter(Boolean)
pattern with a flattening approach and then filter(Boolean) to ensure downstream
enqueue paths receive only non-nested chunk items.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: e0c19ae5-d52c-45ae-86da-5fc1cb3178fc
📒 Files selected for processing (1)
client-sdks/ai-sdk/src/transformers.ts
Fixes #14818 and #15268.
Two related problems with tool call approvals in AI SDK v6:
1. handleChatStream didn't detect native approve() calls

When the client uses AI SDK v6's `approve()` method, the SDK re-submits the conversation with an `approval-responded` part in the messages. `handleChatStream` was treating this as a normal chat turn and calling `stream()` instead of `resumeStream()`.

Now `handleChatStream` scans the incoming messages for `approval-responded` parts, extracts the `runId` from the composite `approvalId` (`"${runId}::${toolCallId}"`), and routes to `resumeStream` automatically, with no extra wiring needed in user code.

2. v6 stream wasn't emitting native tool-approval-request parts

`convertMastraChunkToAISDKv6` was routing `tool-call-approval` through the base converter, which only produced `data-tool-call-approval`. AI SDK v6's `useChat` needs a `tool-approval-request` part to wire up `approve()` on the client side.

Now for v6, both are emitted: `tool-approval-request` (for native `approve()` support) and `data-tool-call-approval` (backwards compat with `@mastra/react` hooks that read `runId` from the data chunk).

If you're on v6 and using `useChat`, tool approvals now just work without needing to manually post to `/approve-tool-call`.

ELI5

This PR fixes the approval flow so that when a user clicks the built-in "approve" button in AI SDK v6 chat, the conversation correctly resumes and the SDK emits the right approval messages, with no extra server wiring required.
Overview

Fixes tool-call approval handling in AI SDK v6 by:

Key Changes
- Native v6 approval detection
- Dual emission of approval parts (v6 + legacy)
- Stream conversion & transformer updates

Tests

Impact