fix(backend): auto-migrate webhook presets to new agent version on publish #12753
abderbejaoui wants to merge 2 commits into Significant-Gravitas:dev
Conversation
When publishing a new agent version, webhook trigger URLs remained pinned
to the old version, forcing users to manually reconfigure external
integrations (e.g. Telegram bots) on every update.
Root cause: AgentPreset.agentGraphVersion was set at creation time and
never updated when a new graph version was activated.
This adds migrate_webhook_presets_to_new_version() which automatically
updates all webhook-attached presets to point to the new version when:
- A new graph version is published (PUT /graphs/{id})
- The active version is changed (PUT /graphs/{id}/versions/active)
- A graph is updated via the library code path
The migration only runs when the new version still has a webhook trigger
node, and only affects non-deleted, webhook-attached presets owned by the
publishing user.
Closes Significant-Gravitas#11679
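To make the described filter concrete, here is a minimal in-memory sketch of the migration logic. Field names follow the PR description and the AgentPreset schema it references; the real helper issues a single Prisma bulk update rather than looping, so treat this only as an illustration of the filter semantics:

```python
def migrate_webhook_presets_to_new_version(presets, user_id, graph_id, new_version):
    """In-memory sketch of the described filter; the real helper issues a
    single Prisma bulk update against AgentPreset instead of looping."""
    migrated = 0
    for p in presets:
        if (
            p["userId"] == user_id                      # owned by publishing user
            and p["agentGraphId"] == graph_id           # same graph
            and p["agentGraphVersion"] != new_version   # not already on new version
            and p["webhookId"] is not None              # webhook-attached
            and not p["isDeleted"]                      # non-deleted
        ):
            p["agentGraphVersion"] = new_version
            migrated += 1
    return migrated


presets = [
    {"userId": "u1", "agentGraphId": "g1", "agentGraphVersion": 3,
     "webhookId": "wh1", "isDeleted": False},  # migrates
    {"userId": "u1", "agentGraphId": "g1", "agentGraphVersion": 3,
     "webhookId": None, "isDeleted": False},   # skipped: not webhook-attached
    {"userId": "u2", "agentGraphId": "g1", "agentGraphVersion": 3,
     "webhookId": "wh2", "isDeleted": False},  # skipped: different owner
]
print(migrate_webhook_presets_to_new_version(presets, "u1", "g1", 5))  # 1
```

Only the first preset matches all five conditions, so one row is updated and the helper reports a migrated count of 1.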
No actionable comments were generated in the recent review. 🎉
Walkthrough
When a new agent graph version is activated and the graph contains a webhook input node, the code bulk-updates the existing webhook-attached AgentPreset records to point at the new version.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant API as "v1 API"
    participant LibDB as "library/db"
    participant DB as "AgentPreset DB"
    User->>API: Request graph publish/update
    API->>LibDB: update_graph_in_library(...) / set_graph_active_version(...)
    LibDB->>DB: Create/activate new graph version
    LibDB-->>API: Return new graph version
    API->>API: Check new_active_graph.webhook_input_node
    alt webhook_input_node present
        API->>LibDB: migrate_webhook_presets_to_new_version(userId, graphId, version)
        LibDB->>DB: UPDATE AgentPreset WHERE userId=X AND agentGraphId=Y AND agentGraphVersion≠version AND webhookId IS NOT NULL AND isDeleted=False SET agentGraphVersion=version
        DB-->>LibDB: return count
        LibDB-->>API: migrated count
    end
    API-->>User: respond with updated version (and migration result)
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
🧹 Nitpick comments (1)
autogpt_platform/backend/backend/api/features/v1.py (1)
922-929: Add route-level tests for migration invocation conditions.

Nice wiring, but please add API/service-level tests to assert migration is called when `webhook_input_node` exists and skipped when absent in both activation paths. Current coverage appears limited to the helper itself.

Also applies to: 983-990
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@autogpt_platform/backend/backend/api/features/v1.py` around lines 922 - 929, Add API/service-level tests that exercise both activation flows to assert migrate_webhook_presets_to_new_version is invoked only when new_graph_version.webhook_input_node is truthy: write two tests per activation path (one where new_graph_version.webhook_input_node is present and one where it is absent) and mock/stub library_db.migrate_webhook_presets_to_new_version to verify it was called with user_id, graph_id and new_version when present and not called when absent; cover both places where this logic appears (the block referencing new_graph_version.webhook_input_node and the second occurrence around lines 983-990) so the test suite validates route-level behavior rather than only the helper.
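A hedged sketch of what such a route-level check could look like. The route handler is stood in by a plain async function that mirrors the guard described in the prompt; names are illustrative, not the actual v1.py code:

```python
import asyncio
from unittest.mock import AsyncMock

# Stand-in for library_db.migrate_webhook_presets_to_new_version
migrate = AsyncMock(return_value=2)

async def activate_version(graph, user_id="u1", graph_id="g1"):
    # Mirrors the guard described in the review: migrate only when the
    # newly activated graph still has a webhook input node.
    if graph["webhook_input_node"] is not None:
        await migrate(user_id=user_id, graph_id=graph_id,
                      new_version=graph["version"])

# Present: migration must be awaited with the expected arguments.
asyncio.run(activate_version({"webhook_input_node": {"id": "n1"}, "version": 5}))
assert migrate.await_count == 1
migrate.assert_awaited_with(user_id="u1", graph_id="g1", new_version=5)

# Absent: migration must not be called at all.
migrate.reset_mock()
asyncio.run(activate_version({"webhook_input_node": None, "version": 6}))
assert migrate.await_count == 0
```

In the real suite, the same pair of assertions would be exercised through the FastAPI test client against both activation endpoints, with the library_db symbol mocked at its usage site.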
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7d0b4b76-be8a-4d3c-8aa0-96481c096c8c
📒 Files selected for processing (3)
autogpt_platform/backend/backend/api/features/library/db.py
autogpt_platform/backend/backend/api/features/v1.py
autogpt_platform/backend/test/test_migrate_webhook_presets.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
- GitHub Check: check API types
- GitHub Check: Seer Code Review
- GitHub Check: test (3.11)
- GitHub Check: test (3.13)
- GitHub Check: type-check (3.13)
- GitHub Check: test (3.12)
- GitHub Check: type-check (3.12)
- GitHub Check: Check PR Status
- GitHub Check: conflicts
- GitHub Check: end-to-end tests
- GitHub Check: Analyze (python)
🧰 Additional context used
📓 Path-based instructions (6)
autogpt_platform/backend/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
autogpt_platform/backend/**/*.py: Use Python 3.11 (required; managed by Poetry via pyproject.toml) for backend development
Always run 'poetry run format' (Black + isort) before linting in backend development
Always run 'poetry run lint' (ruff) after formatting in backend development
autogpt_platform/backend/**/*.py: Use `poetry run ...` command for executing Python package dependencies
Use top-level imports only — avoid local/inner imports except for lazy imports of heavy optional dependencies like `openpyxl`
Use absolute imports with `from backend.module import ...` for cross-package imports; single-dot relative imports are acceptable for sibling modules within the same package; avoid double-dot relative imports
Do not use duck typing — avoid `hasattr`/`getattr`/`isinstance` for type dispatch; use typed interfaces/unions/protocols instead
Use Pydantic models over dataclass/namedtuple/dict for structured data
Do not use linter suppressors — no `# type: ignore`, `# noqa`, `# pyright: ignore`; fix the type/code instead
Prefer list comprehensions over manual loop-and-append patterns
Use early return with guard clauses first to avoid deep nesting
Use `%s` for deferred interpolation in `debug` log statements for efficiency; use f-strings elsewhere for readability (e.g., `logger.debug("Processing %s items", count)` vs `logger.info(f"Processing {count} items")`)
Sanitize error paths by using `os.path.basename()` in error messages to avoid leaking directory structure
Be aware of TOCTOU (Time-Of-Check-Time-Of-Use) issues — avoid check-then-act patterns for file access and credit charging
Use `transaction=True` for Redis pipelines to ensure atomicity on multi-step operations
Use `max(0, value)` guards for computed values that should never be negative
Keep files under ~300 lines; if a file grows beyond this, split by responsibility (extract helpers, models, or a sub-module into a new file)
Keep functions under ~40 lines; extract named helpers when a function grows longer
...
Files:
autogpt_platform/backend/backend/api/features/v1.py
autogpt_platform/backend/test/test_migrate_webhook_presets.py
autogpt_platform/backend/backend/api/features/library/db.py
autogpt_platform/backend/backend/api/features/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Update routes in '/backend/backend/api/features/' and add/update Pydantic models in the same directory for API development
Files:
autogpt_platform/backend/backend/api/features/v1.py
autogpt_platform/backend/backend/api/features/library/db.py
autogpt_platform/{backend,autogpt_libs}/**/*.py
📄 CodeRabbit inference engine (AGENTS.md)
Format Python code with `poetry run format`
Files:
autogpt_platform/backend/backend/api/features/v1.py
autogpt_platform/backend/test/test_migrate_webhook_presets.py
autogpt_platform/backend/backend/api/features/library/db.py
autogpt_platform/backend/**/api/**/*.py
📄 CodeRabbit inference engine (autogpt_platform/backend/AGENTS.md)
autogpt_platform/backend/**/api/**/*.py: Use `Security()` instead of `Depends()` for authentication dependencies to get proper OpenAPI security specification
Follow SSE (Server-Sent Events) protocol: use `data:` lines for frontend-parsed events (must match Zod schema) and `: comment` lines for heartbeats/status
Files:
autogpt_platform/backend/backend/api/features/v1.py
autogpt_platform/backend/backend/api/features/library/db.py
autogpt_platform/backend/**/test/**/*.py
📄 CodeRabbit inference engine (.github/copilot-instructions.md)
Use snapshot testing with '--snapshot-update' flag in backend tests when output changes; always review with 'git diff'
Files:
autogpt_platform/backend/test/test_migrate_webhook_presets.py
autogpt_platform/backend/**/test_*.py
📄 CodeRabbit inference engine (autogpt_platform/AGENTS.md)
Create a failing test first using the `@pytest.mark.xfail` decorator (backend) when fixing a bug or adding a feature, then implement the fix and remove the xfail marker
Files:
autogpt_platform/backend/test/test_migrate_webhook_presets.py
🧠 Learnings (15)
📓 Common learnings
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12356
File: autogpt_platform/backend/backend/copilot/constants.py:9-12
Timestamp: 2026-03-10T08:39:22.025Z
Learning: In Significant-Gravitas/AutoGPT PR `#12356`, the `COPILOT_SYNTHETIC_ID_PREFIX = "copilot-"` check in `create_auto_approval_record` (human_review.py) is intentional and safe. The `graph_exec_id` passed to this function comes from server-side `PendingHumanReview` DB records (not from user input); the API only accepts `node_exec_id` from users. Synthetic `copilot-*` IDs are only ever created server-side in `run_block.py`. The prefix skip avoids a DB lookup for a `AgentGraphExecution` record that legitimately does not exist for CoPilot sessions, while `user_id` scoping is enforced at the auth layer and on the resulting auto-approval record.
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12566
File: autogpt_platform/frontend/src/lib/autogpt-server-api/types.ts:968-974
Timestamp: 2026-03-26T00:32:06.673Z
Learning: In Significant-Gravitas/AutoGPT, the admin-facing methods in `autogpt_platform/frontend/src/lib/autogpt-server-api/client.ts` (e.g., `addUserCredits`, `getUsersHistory`, `getUserRateLimit`, `resetUserRateLimit`) intentionally follow the legacy `BackendAPI` pattern with manually defined types in `autogpt_platform/frontend/src/lib/autogpt-server-api/types.ts`. Migrating these admin endpoints to the generated OpenAPI hooks (`@/app/api/__generated__/endpoints/`) is a planned separate effort covering all admin endpoints together, not done piecemeal per PR. Do not flag individual admin type additions in `types.ts` as blocking issues.
📚 Learning: 2026-03-31T14:22:29.127Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12622
File: autogpt_platform/backend/backend/copilot/tools/agent_search.py:223-236
Timestamp: 2026-03-31T14:22:29.127Z
Learning: When reviewing code under autogpt_platform/backend/backend/copilot/tools/, the `AgentInfo.graph` field (in agent_search.py / models.py) uses `Graph | None` (the typed `backend.data.graph.Graph` Pydantic model), NOT `dict[str, Any]`. The enrichment function `_enrich_agents_with_graph` calls `graph_db().get_graph(graph_id, version=None, user_id=user_id)` directly rather than going through `get_agent_as_json()` / `graph_to_json()`. This was updated in PR `#12622` (commit 22d05bc).
Applied to files:
autogpt_platform/backend/backend/api/features/v1.py
autogpt_platform/backend/backend/api/features/library/db.py
📚 Learning: 2026-03-23T06:36:25.447Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12440
File: autogpt_platform/frontend/src/app/(platform)/library/components/LibraryImportWorkflowDialog/useLibraryImportWorkflowDialog.ts:0-0
Timestamp: 2026-03-23T06:36:25.447Z
Learning: In Significant-Gravitas/AutoGPT PR `#12440`, the `LibraryImportWorkflowDialog` (previously `LibraryImportCompetitorDialog`) and its associated generated API hook (`usePostV2ImportACompetitorWorkflowN8nMakeComZapier` / `usePostV2ImportAWorkflowFromAnotherToolN8nMakeComZapier`) were removed in a subsequent refactor. Workflow import from external platforms (n8n, Make.com, Zapier) now uses a server action `fetchWorkflowFromUrl` instead of direct API calls or generated orval hooks. Do not expect or flag missing generated hook usage for workflow import in `autogpt_platform/frontend/src/app/(platform)/library/components/LibraryImportWorkflowDialog/`.
Applied to files:
autogpt_platform/backend/backend/api/features/v1.py
📚 Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/backend/backend/api/features/**/*.py : Update routes in '/backend/backend/api/features/' and add/update Pydantic models in the same directory for API development
Applied to files:
autogpt_platform/backend/backend/api/features/v1.py
📚 Learning: 2026-02-26T17:02:22.448Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12211
File: .pre-commit-config.yaml:160-179
Timestamp: 2026-02-26T17:02:22.448Z
Learning: Keep the pre-commit hook pattern broad for autogpt_platform/backend to ensure OpenAPI schema changes are captured. Do not narrow to backend/api/ alone, since the generated schema depends on Pydantic models across multiple directories (backend/data/, backend/blocks/, backend/copilot/, backend/integrations/, backend/util/). Narrowing could miss schema changes and cause frontend type desynchronization.
Applied to files:
autogpt_platform/backend/backend/api/features/v1.py
autogpt_platform/backend/test/test_migrate_webhook_presets.py
autogpt_platform/backend/backend/api/features/library/db.py
📚 Learning: 2026-03-05T15:42:08.207Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12297
File: .claude/skills/backend-check/SKILL.md:14-16
Timestamp: 2026-03-05T15:42:08.207Z
Learning: In Python files under autogpt_platform/backend (recursively), rely on poetry run format to perform formatting (Black + isort) and linting (ruff). Do not run poetry run lint as a separate step after poetry run format, since format already includes linting checks.
Applied to files:
autogpt_platform/backend/backend/api/features/v1.py
autogpt_platform/backend/test/test_migrate_webhook_presets.py
autogpt_platform/backend/backend/api/features/library/db.py
📚 Learning: 2026-03-16T16:35:40.236Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12440
File: autogpt_platform/backend/backend/api/features/workflow_import.py:54-63
Timestamp: 2026-03-16T16:35:40.236Z
Learning: Avoid using the word 'competitor' in public-facing identifiers and text. Use neutral naming for API paths, model names, function names, and UI text. Examples: rename 'CompetitorFormat' to 'SourcePlatform', 'convert_competitor_workflow' to 'convert_workflow', '/competitor-workflow' to '/workflow'. Apply this guideline to files under autogpt_platform/backend and autogpt_platform/frontend.
Applied to files:
autogpt_platform/backend/backend/api/features/v1.py
autogpt_platform/backend/test/test_migrate_webhook_presets.py
autogpt_platform/backend/backend/api/features/library/db.py
📚 Learning: 2026-03-31T15:37:38.626Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12623
File: autogpt_platform/backend/backend/copilot/tools/agent_generator/fixer.py:37-47
Timestamp: 2026-03-31T15:37:38.626Z
Learning: When validating/constructing Anthropic API model IDs in Significant-Gravitas/AutoGPT, allow the hyphen-separated Claude Opus 4.6 model ID `claude-opus-4-6` (it corresponds to `LlmModel.CLAUDE_4_6_OPUS` in `autogpt_platform/backend/backend/blocks/llm.py`). Do NOT require the dot-separated form in Anthropic contexts. Only OpenRouter routing variants should use the dot separator (e.g., `anthropic/claude-opus-4.6`); `claude-opus-4-6` should be treated as correct when passed to Anthropic, and flagged only if it’s used in the OpenRouter path where the dot form is expected.
Applied to files:
autogpt_platform/backend/backend/api/features/v1.py
autogpt_platform/backend/backend/api/features/library/db.py
📚 Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/backend/**/test/**/*.py : Use snapshot testing with '--snapshot-update' flag in backend tests when output changes; always review with 'git diff'
Applied to files:
autogpt_platform/backend/test/test_migrate_webhook_presets.py
📚 Learning: 2026-04-08T17:28:23.422Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/backend/AGENTS.md:0-0
Timestamp: 2026-04-08T17:28:23.422Z
Learning: Applies to autogpt_platform/backend/**/*_test.py : When creating snapshots in tests, use `poetry run pytest path/to/test.py --snapshot-update`; always review snapshot changes with `git diff` before committing
Applied to files:
autogpt_platform/backend/test/test_migrate_webhook_presets.py
📚 Learning: 2026-04-08T17:28:23.422Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/backend/AGENTS.md:0-0
Timestamp: 2026-04-08T17:28:23.422Z
Learning: Applies to autogpt_platform/backend/**/*_test.py : Mock at boundaries — mock where the symbol is **used**, not where it's **defined**; after refactoring, update mock targets to match new module paths
Applied to files:
autogpt_platform/backend/test/test_migrate_webhook_presets.py
📚 Learning: 2026-04-08T17:28:23.422Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/backend/AGENTS.md:0-0
Timestamp: 2026-04-08T17:28:23.422Z
Learning: Applies to autogpt_platform/backend/**/*_test.py : Use `AsyncMock` from `unittest.mock` for async functions in tests
Applied to files:
autogpt_platform/backend/test/test_migrate_webhook_presets.py
📚 Learning: 2026-04-08T17:28:23.422Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/backend/AGENTS.md:0-0
Timestamp: 2026-04-08T17:28:23.422Z
Learning: Applies to autogpt_platform/backend/**/*_test.py : Use pytest with snapshot testing for API responses
Applied to files:
autogpt_platform/backend/test/test_migrate_webhook_presets.py
📚 Learning: 2026-03-19T15:10:50.676Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12483
File: autogpt_platform/backend/backend/copilot/tools/test_dry_run.py:298-303
Timestamp: 2026-03-19T15:10:50.676Z
Learning: When using Python’s `unittest.mock.patch` in tests, choose the patch target based on how the imported name is resolved:
- If the code under test uses an **eager/module-level import** (e.g., `from foo.bar import baz` at module top), patch **the module where the name is looked up** (i.e., where it is used in the SUT), e.g. `patch("mymodule.baz")`.
- If the code under test uses a **lazy import** executed later (e.g., `from foo.bar import baz` inside a function/branch), patch **the source module** (e.g., `patch("foo.bar.baz")`) because the late `from ... import` will read the (potentially patched) name from the source module at call time.
For a concrete example: if `simulate_block` is imported inside an `if dry_run:` block in the SUT, then the correct test patch target is the source module path for `simulate_block` as it exists at call time (e.g., `patch("backend.executor.simulator.simulate_block")`), not the test file’s import location.
Applied to files:
autogpt_platform/backend/test/test_migrate_webhook_presets.py
📚 Learning: 2026-03-05T00:13:36.338Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12284
File: autogpt_platform/backend/backend/api/features/store/db.py:1206-1221
Timestamp: 2026-03-05T00:13:36.338Z
Learning: In `autogpt_platform/backend/backend/api/features/store/db.py`, the `_approve_sub_agent` helper intentionally does NOT set `ActiveVersion` on the `StoreListing` when auto-approving sub-agents. Sub-agents are created with `isAvailable=False` (see `_create_sub_agent_version_data`), so they do not appear in public store views and do not need an active version connected. Do not flag the absence of `ActiveVersion` assignment in this function as a bug.
Applied to files:
autogpt_platform/backend/backend/api/features/library/db.py
🔇 Additional comments (3)
autogpt_platform/backend/backend/api/features/library/db.py (2)
634-641: Library update path migration hook looks correct. This keeps behavior aligned with the API activation flows and prevents library-based version updates from leaving webhook presets pinned.
1837-1879: Migration helper filtering is well-scoped. The bulk update correctly limits changes to owned, non-deleted, webhook-attached presets on the target graph and only when version differs.
autogpt_platform/backend/test/test_migrate_webhook_presets.py (1)
11-70: Targeted unit coverage for migration behavior is solid. These tests clearly verify both the update filter contract and migrated-count behavior across match/no-match scenarios.
Codecov Report

❌ Patch coverage is

Additional details and impacted files

@@ Coverage Diff @@
## dev #12753 +/- ##
==========================================
- Coverage 63.14% 63.08% -0.06%
==========================================
Files 1811 1811
Lines 130463 130471 +8
Branches 14260 14264 +4
==========================================
- Hits 82376 82313 -63
- Misses 45495 45567 +72
+ Partials 2592 2591 -1
Flags with carried forward coverage won't be shown.
- Are for the given graph but pinned to an older version

Args:
    user_id: The owner of the presets.
🤖 🟠 Should Fix: Migration can run on presets that are already at the new version or at a newer version.
The filter "agentGraphVersion": {"not": new_version} correctly skips presets already at new_version, but it will also update presets pinned to a higher version (e.g., version 7 when publishing version 5 via a non-active path). In practice, a user can have manually pinned a preset to a future version or to a specific version for testing purposes.
A safer filter would be:

    "agentGraphVersion": {"lt": new_version}

This ensures only presets pinned to older versions are promoted, never newer ones.
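The difference is easy to see with a small in-memory sketch (hypothetical preset rows; the real filter is a Prisma where-clause, not Python):

```python
presets = [
    {"id": "a", "agentGraphVersion": 3},
    {"id": "b", "agentGraphVersion": 5},
    {"id": "c", "agentGraphVersion": 7},  # manually pinned ahead of the publish
]
new_version = 5

# {"not": new_version}: also catches the v7 preset and would downgrade it
matched_not = [p["id"] for p in presets if p["agentGraphVersion"] != new_version]

# {"lt": new_version}: only promotes presets pinned to older versions
matched_lt = [p["id"] for p in presets if p["agentGraphVersion"] < new_version]

print(matched_not)  # ['a', 'c']
print(matched_lt)   # ['a']
```

Preset `c` matching under the `not` filter is exactly the downgrade scenario the comment warns about.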
await library_db.migrate_webhook_presets_to_new_version(
    user_id=user_id,
    graph_id=graph_id,
    new_version=new_graph_version.version,
🤖 🟡 Nice to Have: Migration call in update_graph is only reachable when new_graph_version.is_active is true.
Looking at the update_graph handler in v1.py, the webhook migration is placed inside the if new_graph_version.is_active: block. This means publishing a new version that is not yet set as active does not trigger migration. That is probably the correct intent (you only want to migrate when the new version becomes the live one), but the docstring for migrate_webhook_presets_to_new_version says it handles the "publish" flow in general.
Please verify: if a user saves a draft version that has a webhook trigger, should existing webhook presets be migrated? If no, update the docstring to clarify. If yes, the call needs to be unconditional.
@pytest.fixture
def mock_prisma():
    with patch("prisma.models.AgentPreset.prisma") as mock:
        mock_client = AsyncMock()
🤖 🟡 Nice to Have: Mock target patches prisma.models.AgentPreset.prisma (definition site) rather than the usage site.
Per repo convention (see copilot-instructions.md), tests should patch where the symbol is used, not where it is defined. The migrate_webhook_presets_to_new_version function in library/db.py imports prisma at the module level. The patch target should be:

    patch("backend.api.features.library.db.prisma.models.AgentPreset.prisma")

The current target ("prisma.models.AgentPreset.prisma") may work by coincidence in this case, but it is fragile — if the prisma module is re-imported or aliased, the mock will not intercept the call.
majdyz left a comment
🤖 REQUEST CHANGES — PR #12753 (fix(backend): auto-migrate webhook presets to new agent version on publish)
Good contribution and a real pain point worth fixing. The core approach (bulk-update webhook-attached presets via Prisma on publish/activate) is sound and the user-scoping filter (userId, agentGraphId, isDeleted: false) is correct. Tests are present and cover the happy path.
Blocker:
Version filter is too broad. The `"agentGraphVersion": {"not": new_version}` condition will update presets that are already pinned to a newer version than the one being published (e.g., a preset at v7 would be downgraded to v5 if someone republishes v5). The docstring even says "pinned to an older version" but the filter does not enforce that. Change to:

    "agentGraphVersion": {"lt": new_version}

Should fix:
- The test's mock patches `"prisma.models.AgentPreset.prisma"` (definition site). Per repo convention, patch at the usage site: `"backend.api.features.library.db.prisma.models.AgentPreset.prisma"`. This is currently fragile.
Nits / clarifications:
- The `update_graph` call site is inside `if new_graph_version.is_active:` — migration only fires when the new version is published as active. This is probably correct behaviour, but the docstring says "new graph version is published" without that qualifier. Align the docstring with the actual guard.
- The PR description's Why/What/How section was left as template comments. Please fill it in — the template placeholder text is still visible.
Overall quality is good — once the version filter is corrected this should be ready to merge.
Why / What / How
Changes 🏗️
Checklist 📋
For code changes:
Example test plan
For configuration changes:
- .env.default is updated or already compatible with my changes
- docker-compose.yml is updated or already compatible with my changes

Examples of configuration changes