fix(backend): auto-migrate webhook presets to new agent version on pu…#12753

Open
abderbejaoui wants to merge 2 commits into Significant-Gravitas:dev from abderbejaoui:fix/webhook-trigger-version-migration
Conversation


@abderbejaoui abderbejaoui commented Apr 12, 2026

…blish

When publishing a new agent version, webhook trigger URLs remained pinned to the old version, forcing users to manually reconfigure external integrations (e.g. Telegram bots) on every update.

Root cause: AgentPreset.agentGraphVersion was set at creation time and never updated when a new graph version was activated.

This adds migrate_webhook_presets_to_new_version(), which automatically updates all webhook-attached presets to point to the new version when:

  • A new graph version is published (PUT /graphs/{id})
  • The active version is changed (PUT /graphs/{id}/versions/active)
  • A graph is updated via the library code path

The migration only runs when the new version still has a webhook trigger node, and only affects non-deleted, webhook-attached presets owned by the publishing user.

Closes #11679
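The conditions above suggest the helper boils down to a single bulk update. The sketch below is a hypothetical reconstruction, not the PR's actual code: the filter keys come from the CodeRabbit walkthrough, but the `update_many` call shape and the stand-in client are assumptions, exercised here against a mocked Prisma-style client.

```python
import asyncio
from unittest.mock import AsyncMock


def build_migration_filter(user_id: str, graph_id: str, new_version: int) -> dict:
    # Only non-deleted, webhook-attached presets owned by the publishing
    # user, and only those not already on the new version.
    return {
        "userId": user_id,
        "agentGraphId": graph_id,
        "agentGraphVersion": {"not": new_version},
        "webhookId": {"not": None},
        "isDeleted": False,
    }


async def migrate_webhook_presets_to_new_version(
    preset_client, user_id: str, graph_id: str, new_version: int
) -> int:
    # Bulk-repoint matching presets at the new graph version and return
    # the number of rows updated.
    return await preset_client.update_many(
        where=build_migration_filter(user_id, graph_id, new_version),
        data={"agentGraphVersion": new_version},
    )


# Demo against a mocked client — no database required.
client = AsyncMock()
client.update_many.return_value = 3
migrated = asyncio.run(
    migrate_webhook_presets_to_new_version(client, "user-1", "graph-1", 5)
)
print(migrated)  # 3
```

Because the filter excludes presets already on `new_version`, re-running the migration is a no-op, which keeps the publish and active-version endpoints idempotent with respect to presets.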

Why / What / How

Changes 🏗️

Checklist 📋

For code changes:

  • I have clearly listed my changes in the PR description
  • I have made a test plan
  • I have tested my changes according to the test plan:
    • ...
Example test plan
  • Create from scratch and execute an agent with at least 3 blocks
  • Import an agent from file upload, and confirm it executes correctly
  • Upload agent to marketplace
  • Import an agent from marketplace and confirm it executes correctly
  • Edit an agent from monitor, and confirm it executes correctly

For configuration changes:

  • .env.default is updated or already compatible with my changes
  • docker-compose.yml is updated or already compatible with my changes
  • I have included a list of my configuration changes in the PR description (under Changes)
Examples of configuration changes
  • Changing ports
  • Adding new services that need to communicate with each other
  • Secrets or environment variable changes
  • New or infrastructure changes such as databases

@abderbejaoui abderbejaoui requested a review from a team as a code owner April 12, 2026 15:05
@abderbejaoui abderbejaoui requested review from Swiftyos and majdyz and removed request for a team April 12, 2026 15:05
@github-project-automation github-project-automation bot moved this to 🆕 Needs initial review in AutoGPT development kanban Apr 12, 2026

CLAassistant commented Apr 12, 2026

CLA assistant check
All committers have signed the CLA.

@github-actions

This PR targets the master branch but does not come from dev or a hotfix/* branch.

Automatically setting the base branch to dev.

@github-actions github-actions bot changed the base branch from master to dev April 12, 2026 15:06
@github-actions github-actions bot added platform/backend AutoGPT Platform - Back end size/l labels Apr 12, 2026

coderabbitai bot commented Apr 12, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: b437c663-9043-403c-9ffb-1a66caa4b1f0

📥 Commits

Reviewing files that changed from the base of the PR and between 737d310 and 097dd21.

📒 Files selected for processing (1)
  • autogpt_platform/backend/backend/api/features/v1.py
📜 Recent review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
  • GitHub Check: check API types
  • GitHub Check: end-to-end tests
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.12)
  • GitHub Check: type-check (3.13)
  • GitHub Check: test (3.13)
  • GitHub Check: type-check (3.11)
  • GitHub Check: type-check (3.12)
  • GitHub Check: Seer Code Review
  • GitHub Check: Analyze (python)
  • GitHub Check: Check PR Status
🧰 Additional context used
📓 Path-based instructions (4)
autogpt_platform/backend/**/*.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/backend/**/*.py: Use Python 3.11 (required; managed by Poetry via pyproject.toml) for backend development
Always run 'poetry run format' (Black + isort) before linting in backend development
Always run 'poetry run lint' (ruff) after formatting in backend development

autogpt_platform/backend/**/*.py: Use poetry run ... command for executing Python package dependencies
Use top-level imports only — avoid local/inner imports except for lazy imports of heavy optional dependencies like openpyxl
Use absolute imports with from backend.module import ... for cross-package imports; single-dot relative imports are acceptable for sibling modules within the same package; avoid double-dot relative imports
Do not use duck typing — avoid hasattr/getattr/isinstance for type dispatch; use typed interfaces/unions/protocols instead
Use Pydantic models over dataclass/namedtuple/dict for structured data
Do not use linter suppressors — no # type: ignore, # noqa, # pyright: ignore; fix the type/code instead
Prefer list comprehensions over manual loop-and-append patterns
Use early return with guard clauses first to avoid deep nesting
Use %s for deferred interpolation in debug log statements for efficiency; use f-strings elsewhere for readability (e.g., logger.debug("Processing %s items", count) vs logger.info(f"Processing {count} items"))
Sanitize error paths by using os.path.basename() in error messages to avoid leaking directory structure
Be aware of TOCTOU (Time-Of-Check-Time-Of-Use) issues — avoid check-then-act patterns for file access and credit charging
Use transaction=True for Redis pipelines to ensure atomicity on multi-step operations
Use max(0, value) guards for computed values that should never be negative
Keep files under ~300 lines; if a file grows beyond this, split by responsibility (extract helpers, models, or a sub-module into a new file)
Keep functions under ~40 lines; extract named helpers when a function grows longer
...

Files:

  • autogpt_platform/backend/backend/api/features/v1.py
autogpt_platform/backend/backend/api/features/**/*.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Update routes in '/backend/backend/api/features/' and add/update Pydantic models in the same directory for API development

Files:

  • autogpt_platform/backend/backend/api/features/v1.py
autogpt_platform/{backend,autogpt_libs}/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Format Python code with poetry run format

Files:

  • autogpt_platform/backend/backend/api/features/v1.py
autogpt_platform/backend/**/api/**/*.py

📄 CodeRabbit inference engine (autogpt_platform/backend/AGENTS.md)

autogpt_platform/backend/**/api/**/*.py: Use Security() instead of Depends() for authentication dependencies to get proper OpenAPI security specification
Follow SSE (Server-Sent Events) protocol: use data: lines for frontend-parsed events (must match Zod schema) and : comment lines for heartbeats/status

Files:

  • autogpt_platform/backend/backend/api/features/v1.py
🧠 Learnings (7)
📓 Common learnings
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12284
File: autogpt_platform/frontend/src/app/api/openapi.json:11897-11900
Timestamp: 2026-03-04T23:58:18.476Z
Learning: Repo: Significant-Gravitas/AutoGPT — PR `#12284`
Backend/frontend OpenAPI codegen convention: In backend/api/features/store/model.py, the StoreSubmission and StoreSubmissionAdminView models define submitted_at: datetime | None, changes_summary: str | None, and instructions: str | None with no default. This is intentional to produce “required but nullable” fields in OpenAPI (properties appear in required[] and use anyOf [type, null]). This matches Prisma’s submittedAt DateTime? and changesSummary String?. Do not flag this as a required/nullable mismatch.
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12536
File: autogpt_platform/frontend/src/app/api/openapi.json:5770-5790
Timestamp: 2026-03-24T21:25:15.983Z
Learning: Repo: Significant-Gravitas/AutoGPT — PR `#12536`
File: autogpt_platform/frontend/src/app/api/openapi.json
Learning: The OpenAPI spec file is auto-generated; per established convention, endpoints generally declare only 200/201, 401, and 422 responses. Do not suggest adding explicit 403/404 response entries for single operations unless planning a repo-wide spec update. Prefer clarifying such behaviors in endpoint descriptions/docstrings instead of altering response maps.
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12356
File: autogpt_platform/backend/backend/copilot/constants.py:9-12
Timestamp: 2026-03-10T08:39:22.025Z
Learning: In Significant-Gravitas/AutoGPT PR `#12356`, the `COPILOT_SYNTHETIC_ID_PREFIX = "copilot-"` check in `create_auto_approval_record` (human_review.py) is intentional and safe. The `graph_exec_id` passed to this function comes from server-side `PendingHumanReview` DB records (not from user input); the API only accepts `node_exec_id` from users. Synthetic `copilot-*` IDs are only ever created server-side in `run_block.py`. The prefix skip avoids a DB lookup for a `AgentGraphExecution` record that legitimately does not exist for CoPilot sessions, while `user_id` scoping is enforced at the auth layer and on the resulting auto-approval record.
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12622
File: autogpt_platform/backend/backend/copilot/tools/agent_search.py:223-236
Timestamp: 2026-03-31T14:22:29.127Z
Learning: When reviewing code under autogpt_platform/backend/backend/copilot/tools/, the `AgentInfo.graph` field (in agent_search.py / models.py) uses `Graph | None` (the typed `backend.data.graph.Graph` Pydantic model), NOT `dict[str, Any]`. The enrichment function `_enrich_agents_with_graph` calls `graph_db().get_graph(graph_id, version=None, user_id=user_id)` directly rather than going through `get_agent_as_json()` / `graph_to_json()`. This was updated in PR `#12622` (commit 22d05bc).
📚 Learning: 2026-03-31T14:22:29.127Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12622
File: autogpt_platform/backend/backend/copilot/tools/agent_search.py:223-236
Timestamp: 2026-03-31T14:22:29.127Z
Learning: When reviewing code under autogpt_platform/backend/backend/copilot/tools/, the `AgentInfo.graph` field (in agent_search.py / models.py) uses `Graph | None` (the typed `backend.data.graph.Graph` Pydantic model), NOT `dict[str, Any]`. The enrichment function `_enrich_agents_with_graph` calls `graph_db().get_graph(graph_id, version=None, user_id=user_id)` directly rather than going through `get_agent_as_json()` / `graph_to_json()`. This was updated in PR `#12622` (commit 22d05bc).

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
📚 Learning: 2026-03-05T00:13:36.338Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12284
File: autogpt_platform/backend/backend/api/features/store/db.py:1206-1221
Timestamp: 2026-03-05T00:13:36.338Z
Learning: In `autogpt_platform/backend/backend/api/features/store/db.py`, the `_approve_sub_agent` helper intentionally does NOT set `ActiveVersion` on the `StoreListing` when auto-approving sub-agents. Sub-agents are created with `isAvailable=False` (see `_create_sub_agent_version_data`), so they do not appear in public store views and do not need an active version connected. Do not flag the absence of `ActiveVersion` assignment in this function as a bug.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
📚 Learning: 2026-02-26T17:02:22.448Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12211
File: .pre-commit-config.yaml:160-179
Timestamp: 2026-02-26T17:02:22.448Z
Learning: Keep the pre-commit hook pattern broad for autogpt_platform/backend to ensure OpenAPI schema changes are captured. Do not narrow to backend/api/ alone, since the generated schema depends on Pydantic models across multiple directories (backend/data/, backend/blocks/, backend/copilot/, backend/integrations/, backend/util/). Narrowing could miss schema changes and cause frontend type desynchronization.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
📚 Learning: 2026-03-05T15:42:08.207Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12297
File: .claude/skills/backend-check/SKILL.md:14-16
Timestamp: 2026-03-05T15:42:08.207Z
Learning: In Python files under autogpt_platform/backend (recursively), rely on poetry run format to perform formatting (Black + isort) and linting (ruff). Do not run poetry run lint as a separate step after poetry run format, since format already includes linting checks.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
📚 Learning: 2026-03-16T16:35:40.236Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12440
File: autogpt_platform/backend/backend/api/features/workflow_import.py:54-63
Timestamp: 2026-03-16T16:35:40.236Z
Learning: Avoid using the word 'competitor' in public-facing identifiers and text. Use neutral naming for API paths, model names, function names, and UI text. Examples: rename 'CompetitorFormat' to 'SourcePlatform', 'convert_competitor_workflow' to 'convert_workflow', '/competitor-workflow' to '/workflow'. Apply this guideline to files under autogpt_platform/backend and autogpt_platform/frontend.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
📚 Learning: 2026-03-31T15:37:38.626Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12623
File: autogpt_platform/backend/backend/copilot/tools/agent_generator/fixer.py:37-47
Timestamp: 2026-03-31T15:37:38.626Z
Learning: When validating/constructing Anthropic API model IDs in Significant-Gravitas/AutoGPT, allow the hyphen-separated Claude Opus 4.6 model ID `claude-opus-4-6` (it corresponds to `LlmModel.CLAUDE_4_6_OPUS` in `autogpt_platform/backend/backend/blocks/llm.py`). Do NOT require the dot-separated form in Anthropic contexts. Only OpenRouter routing variants should use the dot separator (e.g., `anthropic/claude-opus-4.6`); `claude-opus-4-6` should be treated as correct when passed to Anthropic, and flagged only if it’s used in the OpenRouter path where the dot form is expected.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
🔇 Additional comments (2)
autogpt_platform/backend/backend/api/features/v1.py (2)

1047-1055: Webhook preset migration is correctly gated on webhook presence.

Line 1049 ensures migration runs only when the new version still has a webhook trigger, which matches the intended auto-migration behavior.


1108-1116: Active-version switch path now preserves webhook-linked presets as expected.

Line 1110 applies the same safe webhook guard here, so manual active-version changes stay consistent with publish behavior.


Walkthrough

When a new agent graph version is activated and the graph contains a webhook input node, the code bulk-updates existing AgentPreset rows that have a webhookId to point their agentGraphVersion to the newly activated version via migrate_webhook_presets_to_new_version(...).

Changes

  • Webhook Preset Migration Logic (autogpt_platform/backend/backend/api/features/library/db.py): Added the async function migrate_webhook_presets_to_new_version(user_id, graph_id, new_version); it performs a bulk update_many on AgentPreset filtering by userId, agentGraphId, agentGraphVersion != new_version, webhookId != None, and isDeleted: False, then logs and returns the migrated count. The call is integrated after graph activation.
  • Migration Integration (autogpt_platform/backend/backend/api/features/v1.py): Invoke migrate_webhook_presets_to_new_version(...) after creating/activating a new graph version (in the update_graph() and set_graph_active_version() paths) when the activated graph has a webhook_input_node.
  • Test Coverage (autogpt_platform/backend/test/test_migrate_webhook_presets.py): New pytest module with async tests mocking prisma.models.AgentPreset.prisma.update_many; asserts the where-filter contents, the data update to agentGraphVersion, and correct return counts for 0/1/3 migrations.

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant API as "v1 API"
    participant LibDB as "library/db"
    participant DB as "AgentPreset DB"

    User->>API: Request graph publish/update
    API->>LibDB: update_graph_in_library(...) / set_graph_active_version(...)
    LibDB->>DB: Create/activate new graph version
    LibDB-->>API: Return new graph version
    API->>API: Check new_active_graph.webhook_input_node
    alt webhook_input_node present
        API->>LibDB: migrate_webhook_presets_to_new_version(userId, graphId, version)
        LibDB->>DB: UPDATE AgentPreset WHERE userId=X AND agentGraphId=Y AND agentGraphVersion≠version AND webhookId IS NOT NULL AND isDeleted=False SET agentGraphVersion=version
        DB-->>LibDB: return count
        LibDB-->>API: migrated count
    end
    API-->>User: respond with updated version (and migration result)
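The branch in the sequence diagram above can be sketched as a small guard. This is a simplified stand-in, not the real v1.py code: the attribute name `webhook_input_node` comes from the PR, while `ActivatedGraph` and `maybe_migrate_presets` are illustrative names invented here.

```python
import asyncio
from dataclasses import dataclass
from typing import Optional
from unittest.mock import AsyncMock


@dataclass
class ActivatedGraph:
    id: str
    version: int
    # None when the new version no longer has a webhook trigger node.
    webhook_input_node: Optional[object]


async def maybe_migrate_presets(library_db, user_id: str, graph: ActivatedGraph) -> int:
    # Guard clause: skip the migration entirely when the new version has
    # no webhook trigger, so non-webhook presets keep pinned versions.
    if graph.webhook_input_node is None:
        return 0
    return await library_db.migrate_webhook_presets_to_new_version(
        user_id, graph.id, graph.version
    )


db = AsyncMock()
db.migrate_webhook_presets_to_new_version.return_value = 2

with_webhook = ActivatedGraph("g1", 4, webhook_input_node=object())
without_webhook = ActivatedGraph("g1", 4, webhook_input_node=None)

print(asyncio.run(maybe_migrate_presets(db, "u1", with_webhook)))     # 2
print(asyncio.run(maybe_migrate_presets(db, "u1", without_webhook)))  # 0
```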

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

Suggested labels

Review effort 3/5

Suggested reviewers

  • majdyz
  • Pwuts

Poem

🐇 I hopped through code to shift the tune,
New versions bloom beneath the moon.
Presets leap to the latest gate,
Webhooks follow, never late.
🥕✨

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 30.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
  • Title check: ✅ Passed. The title clearly indicates the main change: auto-migrating webhook presets to new agent versions on publish, which directly addresses the core issue fixed in this PR.
  • Description check: ✅ Passed. The description clearly explains the problem, root cause, the solution (migrate_webhook_presets_to_new_version), and the specific scenarios where it runs, all directly related to the changeset.
  • Linked Issues check: ✅ Passed. The implementation fully addresses issue #11679 by adding automatic webhook preset migration when new graph versions are published or activated, preserving webhook URLs across agent updates.
  • Out of Scope Changes check: ✅ Passed. All changes are tightly scoped to implementing automatic webhook preset migration across three files (database layer, API endpoints, tests), with no unrelated modifications.


@coderabbitai coderabbitai bot left a comment

🧹 Nitpick comments (1)
autogpt_platform/backend/backend/api/features/v1.py (1)

922-929: Add route-level tests for migration invocation conditions.

Nice wiring, but please add API/service-level tests to assert migration is called when webhook_input_node exists and skipped when absent in both activation paths. Current coverage appears limited to the helper itself.

Also applies to: 983-990
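The suggested route-level tests might look like the sketch below. This is a hedged illustration, not the repository's test code: it exercises a simplified stand-in for the activation path rather than the real `update_graph()` / `set_graph_active_version()` routes, and `activate_version` is a name invented here.

```python
import asyncio
from types import SimpleNamespace
from unittest.mock import AsyncMock


async def activate_version(library_db, user_id, graph):
    # Simplified stand-in for the activation paths in v1.py: migrate only
    # when the activated graph still has a webhook trigger node.
    if graph.webhook_input_node is not None:
        await library_db.migrate_webhook_presets_to_new_version(
            user_id, graph.id, graph.version
        )
    return graph


def test_migration_called_when_webhook_present():
    db = AsyncMock()
    graph = SimpleNamespace(id="g", version=7, webhook_input_node=object())
    asyncio.run(activate_version(db, "u", graph))
    db.migrate_webhook_presets_to_new_version.assert_awaited_once_with("u", "g", 7)


def test_migration_skipped_when_webhook_absent():
    db = AsyncMock()
    graph = SimpleNamespace(id="g", version=7, webhook_input_node=None)
    asyncio.run(activate_version(db, "u", graph))
    db.migrate_webhook_presets_to_new_version.assert_not_awaited()


test_migration_called_when_webhook_present()
test_migration_skipped_when_webhook_absent()
print("ok")
```

Real tests would drive the FastAPI routes (via the app's test client) and patch the library-db function, so both activation endpoints are covered rather than only the helper.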

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@autogpt_platform/backend/backend/api/features/v1.py` around lines 922 - 929,
Add API/service-level tests that exercise both activation flows to assert
migrate_webhook_presets_to_new_version is invoked only when
new_graph_version.webhook_input_node is truthy: write two tests per activation
path (one where new_graph_version.webhook_input_node is present and one where it
is absent) and mock/stub library_db.migrate_webhook_presets_to_new_version to
verify it was called with user_id, graph_id and new_version when present and not
called when absent; cover both places where this logic appears (the block
referencing new_graph_version.webhook_input_node and the second occurrence
around lines 983-990) so the test suite validates route-level behavior rather
than only the helper.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 7d0b4b76-be8a-4d3c-8aa0-96481c096c8c

📥 Commits

Reviewing files that changed from the base of the PR and between ef477ae and 737d310.

📒 Files selected for processing (3)
  • autogpt_platform/backend/backend/api/features/library/db.py
  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/test/test_migrate_webhook_presets.py
📜 Review details
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
  • GitHub Check: check API types
  • GitHub Check: Seer Code Review
  • GitHub Check: test (3.11)
  • GitHub Check: test (3.13)
  • GitHub Check: type-check (3.13)
  • GitHub Check: test (3.12)
  • GitHub Check: type-check (3.12)
  • GitHub Check: Check PR Status
  • GitHub Check: conflicts
  • GitHub Check: end-to-end tests
  • GitHub Check: Analyze (python)
🧰 Additional context used
📓 Path-based instructions (6)
autogpt_platform/backend/**/*.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

autogpt_platform/backend/**/*.py: Use Python 3.11 (required; managed by Poetry via pyproject.toml) for backend development
Always run 'poetry run format' (Black + isort) before linting in backend development
Always run 'poetry run lint' (ruff) after formatting in backend development

autogpt_platform/backend/**/*.py: Use poetry run ... command for executing Python package dependencies
Use top-level imports only — avoid local/inner imports except for lazy imports of heavy optional dependencies like openpyxl
Use absolute imports with from backend.module import ... for cross-package imports; single-dot relative imports are acceptable for sibling modules within the same package; avoid double-dot relative imports
Do not use duck typing — avoid hasattr/getattr/isinstance for type dispatch; use typed interfaces/unions/protocols instead
Use Pydantic models over dataclass/namedtuple/dict for structured data
Do not use linter suppressors — no # type: ignore, # noqa, # pyright: ignore; fix the type/code instead
Prefer list comprehensions over manual loop-and-append patterns
Use early return with guard clauses first to avoid deep nesting
Use %s for deferred interpolation in debug log statements for efficiency; use f-strings elsewhere for readability (e.g., logger.debug("Processing %s items", count) vs logger.info(f"Processing {count} items"))
Sanitize error paths by using os.path.basename() in error messages to avoid leaking directory structure
Be aware of TOCTOU (Time-Of-Check-Time-Of-Use) issues — avoid check-then-act patterns for file access and credit charging
Use transaction=True for Redis pipelines to ensure atomicity on multi-step operations
Use max(0, value) guards for computed values that should never be negative
Keep files under ~300 lines; if a file grows beyond this, split by responsibility (extract helpers, models, or a sub-module into a new file)
Keep functions under ~40 lines; extract named helpers when a function grows longer
...

Files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/test/test_migrate_webhook_presets.py
  • autogpt_platform/backend/backend/api/features/library/db.py
autogpt_platform/backend/backend/api/features/**/*.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Update routes in '/backend/backend/api/features/' and add/update Pydantic models in the same directory for API development

Files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/backend/api/features/library/db.py
autogpt_platform/{backend,autogpt_libs}/**/*.py

📄 CodeRabbit inference engine (AGENTS.md)

Format Python code with poetry run format

Files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/test/test_migrate_webhook_presets.py
  • autogpt_platform/backend/backend/api/features/library/db.py
autogpt_platform/backend/**/api/**/*.py

📄 CodeRabbit inference engine (autogpt_platform/backend/AGENTS.md)

autogpt_platform/backend/**/api/**/*.py: Use Security() instead of Depends() for authentication dependencies to get proper OpenAPI security specification
Follow SSE (Server-Sent Events) protocol: use data: lines for frontend-parsed events (must match Zod schema) and : comment lines for heartbeats/status

Files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/backend/api/features/library/db.py
autogpt_platform/backend/**/test/**/*.py

📄 CodeRabbit inference engine (.github/copilot-instructions.md)

Use snapshot testing with '--snapshot-update' flag in backend tests when output changes; always review with 'git diff'

Files:

  • autogpt_platform/backend/test/test_migrate_webhook_presets.py
autogpt_platform/backend/**/test_*.py

📄 CodeRabbit inference engine (autogpt_platform/AGENTS.md)

Create a failing test first using @pytest.mark.xfail decorator (backend) when fixing a bug or adding a feature, then implement the fix and remove the xfail marker

Files:

  • autogpt_platform/backend/test/test_migrate_webhook_presets.py
🧠 Learnings (15)
📓 Common learnings
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12356
File: autogpt_platform/backend/backend/copilot/constants.py:9-12
Timestamp: 2026-03-10T08:39:22.025Z
Learning: In Significant-Gravitas/AutoGPT PR `#12356`, the `COPILOT_SYNTHETIC_ID_PREFIX = "copilot-"` check in `create_auto_approval_record` (human_review.py) is intentional and safe. The `graph_exec_id` passed to this function comes from server-side `PendingHumanReview` DB records (not from user input); the API only accepts `node_exec_id` from users. Synthetic `copilot-*` IDs are only ever created server-side in `run_block.py`. The prefix skip avoids a DB lookup for a `AgentGraphExecution` record that legitimately does not exist for CoPilot sessions, while `user_id` scoping is enforced at the auth layer and on the resulting auto-approval record.
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12566
File: autogpt_platform/frontend/src/lib/autogpt-server-api/types.ts:968-974
Timestamp: 2026-03-26T00:32:06.673Z
Learning: In Significant-Gravitas/AutoGPT, the admin-facing methods in `autogpt_platform/frontend/src/lib/autogpt-server-api/client.ts` (e.g., `addUserCredits`, `getUsersHistory`, `getUserRateLimit`, `resetUserRateLimit`) intentionally follow the legacy `BackendAPI` pattern with manually defined types in `autogpt_platform/frontend/src/lib/autogpt-server-api/types.ts`. Migrating these admin endpoints to the generated OpenAPI hooks (`@/app/api/__generated__/endpoints/`) is a planned separate effort covering all admin endpoints together, not done piecemeal per PR. Do not flag individual admin type additions in `types.ts` as blocking issues.
📚 Learning: 2026-03-31T14:22:29.127Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12622
File: autogpt_platform/backend/backend/copilot/tools/agent_search.py:223-236
Timestamp: 2026-03-31T14:22:29.127Z
Learning: When reviewing code under autogpt_platform/backend/backend/copilot/tools/, the `AgentInfo.graph` field (in agent_search.py / models.py) uses `Graph | None` (the typed `backend.data.graph.Graph` Pydantic model), NOT `dict[str, Any]`. The enrichment function `_enrich_agents_with_graph` calls `graph_db().get_graph(graph_id, version=None, user_id=user_id)` directly rather than going through `get_agent_as_json()` / `graph_to_json()`. This was updated in PR `#12622` (commit 22d05bc).

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/backend/api/features/library/db.py
📚 Learning: 2026-03-23T06:36:25.447Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12440
File: autogpt_platform/frontend/src/app/(platform)/library/components/LibraryImportWorkflowDialog/useLibraryImportWorkflowDialog.ts:0-0
Timestamp: 2026-03-23T06:36:25.447Z
Learning: In Significant-Gravitas/AutoGPT PR `#12440`, the `LibraryImportWorkflowDialog` (previously `LibraryImportCompetitorDialog`) and its associated generated API hook (`usePostV2ImportACompetitorWorkflowN8nMakeComZapier` / `usePostV2ImportAWorkflowFromAnotherToolN8nMakeComZapier`) were removed in a subsequent refactor. Workflow import from external platforms (n8n, Make.com, Zapier) now uses a server action `fetchWorkflowFromUrl` instead of direct API calls or generated orval hooks. Do not expect or flag missing generated hook usage for workflow import in `autogpt_platform/frontend/src/app/(platform)/library/components/LibraryImportWorkflowDialog/`.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
📚 Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/backend/backend/api/features/**/*.py : Update routes in '/backend/backend/api/features/' and add/update Pydantic models in the same directory for API development

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
📚 Learning: 2026-02-26T17:02:22.448Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12211
File: .pre-commit-config.yaml:160-179
Timestamp: 2026-02-26T17:02:22.448Z
Learning: Keep the pre-commit hook pattern broad for autogpt_platform/backend to ensure OpenAPI schema changes are captured. Do not narrow to backend/api/ alone, since the generated schema depends on Pydantic models across multiple directories (backend/data/, backend/blocks/, backend/copilot/, backend/integrations/, backend/util/). Narrowing could miss schema changes and cause frontend type desynchronization.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/test/test_migrate_webhook_presets.py
  • autogpt_platform/backend/backend/api/features/library/db.py
📚 Learning: 2026-03-05T15:42:08.207Z
Learnt from: ntindle
Repo: Significant-Gravitas/AutoGPT PR: 12297
File: .claude/skills/backend-check/SKILL.md:14-16
Timestamp: 2026-03-05T15:42:08.207Z
Learning: In Python files under autogpt_platform/backend (recursively), rely on poetry run format to perform formatting (Black + isort) and linting (ruff). Do not run poetry run lint as a separate step after poetry run format, since format already includes linting checks.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/test/test_migrate_webhook_presets.py
  • autogpt_platform/backend/backend/api/features/library/db.py
📚 Learning: 2026-03-16T16:35:40.236Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12440
File: autogpt_platform/backend/backend/api/features/workflow_import.py:54-63
Timestamp: 2026-03-16T16:35:40.236Z
Learning: Avoid using the word 'competitor' in public-facing identifiers and text. Use neutral naming for API paths, model names, function names, and UI text. Examples: rename 'CompetitorFormat' to 'SourcePlatform', 'convert_competitor_workflow' to 'convert_workflow', '/competitor-workflow' to '/workflow'. Apply this guideline to files under autogpt_platform/backend and autogpt_platform/frontend.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/test/test_migrate_webhook_presets.py
  • autogpt_platform/backend/backend/api/features/library/db.py
📚 Learning: 2026-03-31T15:37:38.626Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12623
File: autogpt_platform/backend/backend/copilot/tools/agent_generator/fixer.py:37-47
Timestamp: 2026-03-31T15:37:38.626Z
Learning: When validating/constructing Anthropic API model IDs in Significant-Gravitas/AutoGPT, allow the hyphen-separated Claude Opus 4.6 model ID `claude-opus-4-6` (it corresponds to `LlmModel.CLAUDE_4_6_OPUS` in `autogpt_platform/backend/backend/blocks/llm.py`). Do NOT require the dot-separated form in Anthropic contexts. Only OpenRouter routing variants should use the dot separator (e.g., `anthropic/claude-opus-4.6`); `claude-opus-4-6` should be treated as correct when passed to Anthropic, and flagged only if it’s used in the OpenRouter path where the dot form is expected.

Applied to files:

  • autogpt_platform/backend/backend/api/features/v1.py
  • autogpt_platform/backend/backend/api/features/library/db.py
📚 Learning: 2026-02-04T16:49:42.490Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: .github/copilot-instructions.md:0-0
Timestamp: 2026-02-04T16:49:42.490Z
Learning: Applies to autogpt_platform/backend/**/test/**/*.py : Use snapshot testing with '--snapshot-update' flag in backend tests when output changes; always review with 'git diff'

Applied to files:

  • autogpt_platform/backend/test/test_migrate_webhook_presets.py
📚 Learning: 2026-04-08T17:28:23.422Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/backend/AGENTS.md:0-0
Timestamp: 2026-04-08T17:28:23.422Z
Learning: Applies to autogpt_platform/backend/**/*_test.py : When creating snapshots in tests, use `poetry run pytest path/to/test.py --snapshot-update`; always review snapshot changes with `git diff` before committing

Applied to files:

  • autogpt_platform/backend/test/test_migrate_webhook_presets.py
📚 Learning: 2026-04-08T17:28:23.422Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/backend/AGENTS.md:0-0
Timestamp: 2026-04-08T17:28:23.422Z
Learning: Applies to autogpt_platform/backend/**/*_test.py : Mock at boundaries — mock where the symbol is **used**, not where it's **defined**; after refactoring, update mock targets to match new module paths

Applied to files:

  • autogpt_platform/backend/test/test_migrate_webhook_presets.py
📚 Learning: 2026-04-08T17:28:23.422Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/backend/AGENTS.md:0-0
Timestamp: 2026-04-08T17:28:23.422Z
Learning: Applies to autogpt_platform/backend/**/*_test.py : Use `AsyncMock` from `unittest.mock` for async functions in tests

Applied to files:

  • autogpt_platform/backend/test/test_migrate_webhook_presets.py
📚 Learning: 2026-04-08T17:28:23.422Z
Learnt from: CR
Repo: Significant-Gravitas/AutoGPT PR: 0
File: autogpt_platform/backend/AGENTS.md:0-0
Timestamp: 2026-04-08T17:28:23.422Z
Learning: Applies to autogpt_platform/backend/**/*_test.py : Use pytest with snapshot testing for API responses

Applied to files:

  • autogpt_platform/backend/test/test_migrate_webhook_presets.py
📚 Learning: 2026-03-19T15:10:50.676Z
Learnt from: majdyz
Repo: Significant-Gravitas/AutoGPT PR: 12483
File: autogpt_platform/backend/backend/copilot/tools/test_dry_run.py:298-303
Timestamp: 2026-03-19T15:10:50.676Z
Learning: When using Python’s `unittest.mock.patch` in tests, choose the patch target based on how the imported name is resolved:
- If the code under test uses an **eager/module-level import** (e.g., `from foo.bar import baz` at module top), patch **the module where the name is looked up** (i.e., where it is used in the SUT), e.g. `patch("mymodule.baz")`.
- If the code under test uses a **lazy import** executed later (e.g., `from foo.bar import baz` inside a function/branch), patch **the source module** (e.g., `patch("foo.bar.baz")`) because the late `from ... import` will read the (potentially patched) name from the source module at call time.

For a concrete example: if `simulate_block` is imported inside an `if dry_run:` block in the SUT, then the correct test patch target is the source module path for `simulate_block` as it exists at call time (e.g., `patch("backend.executor.simulator.simulate_block")`), not the test file’s import location.

Applied to files:

  • autogpt_platform/backend/test/test_migrate_webhook_presets.py
📚 Learning: 2026-03-05T00:13:36.338Z
Learnt from: Pwuts
Repo: Significant-Gravitas/AutoGPT PR: 12284
File: autogpt_platform/backend/backend/api/features/store/db.py:1206-1221
Timestamp: 2026-03-05T00:13:36.338Z
Learning: In `autogpt_platform/backend/backend/api/features/store/db.py`, the `_approve_sub_agent` helper intentionally does NOT set `ActiveVersion` on the `StoreListing` when auto-approving sub-agents. Sub-agents are created with `isAvailable=False` (see `_create_sub_agent_version_data`), so they do not appear in public store views and do not need an active version connected. Do not flag the absence of `ActiveVersion` assignment in this function as a bug.

Applied to files:

  • autogpt_platform/backend/backend/api/features/library/db.py
🔇 Additional comments (3)
autogpt_platform/backend/backend/api/features/library/db.py (2)

634-641: Library update path migration hook looks correct.

This keeps behavior aligned with the API activation flows and prevents library-based version updates from leaving webhook presets pinned.


1837-1879: Migration helper filtering is well-scoped.

The bulk update correctly limits changes to owned, non-deleted, webhook-attached presets on the target graph and only when version differs.

autogpt_platform/backend/test/test_migrate_webhook_presets.py (1)

11-70: Targeted unit coverage for migration behavior is solid.

These tests clearly verify both the update filter contract and migrated-count behavior across match/no-match scenarios.

@codecov

codecov bot commented Apr 12, 2026

Codecov Report

❌ Patch coverage is 45.45455% with 6 lines in your changes missing coverage. Please review.
✅ Project coverage is 63.08%. Comparing base (b319c26) to head (097dd21).

Additional details and impacted files
@@            Coverage Diff             @@
##              dev   #12753      +/-   ##
==========================================
- Coverage   63.14%   63.08%   -0.06%     
==========================================
  Files        1811     1811              
  Lines      130463   130471       +8     
  Branches    14260    14264       +4     
==========================================
- Hits        82376    82313      -63     
- Misses      45495    45567      +72     
+ Partials     2592     2591       -1     
| Flag | Coverage Δ |
| --- | --- |
| platform-backend | 74.56% <45.45%> (-0.07%) ⬇️ |
| platform-frontend-e2e | 28.03% <ø> (-0.13%) ⬇️ |

Flags with carried forward coverage won't be shown.

| Components | Coverage Δ |
| --- | --- |
| Platform Backend | 74.56% <45.45%> (-0.07%) ⬇️ |
| Platform Frontend | 23.75% <ø> (-0.04%) ⬇️ |
| AutoGPT Libs | ∅ <ø> (∅) |
| Classic AutoGPT | 28.43% <ø> (ø) |

- Are for the given graph but pinned to an older version

Args:
    user_id: The owner of the presets.
Contributor

🤖 🟠 Should Fix: Migration can run on presets pinned to a newer version than the one being published.

The filter "agentGraphVersion": {"not": new_version} correctly skips presets already at new_version, but it will also update presets pinned to a higher version (e.g., version 7 when publishing version 5 via a non-active path). In practice, a user may have manually pinned a preset to a future version, or to a specific version for testing purposes.

A safer filter would be:

"agentGraphVersion": {"lt": new_version}

This ensures only presets pinned to older versions are promoted, never newer ones.
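The difference between the two filters can be shown with a small self-contained sketch. The presets here are plain dicts standing in for `AgentPreset` rows, and the two predicate functions model the Prisma `not`/`lt` operators; none of this is the real database API.

```python
# Illustration of why {"not": new_version} is too broad compared to {"lt": new_version}.
# Plain dicts stand in for AgentPreset rows; this is not the real Prisma client.

presets = [
    {"id": "p1", "agentGraphVersion": 3},  # older: should be migrated
    {"id": "p2", "agentGraphVersion": 5},  # already at the new version: skip
    {"id": "p3", "agentGraphVersion": 7},  # manually pinned to a newer version
]
new_version = 5

def matches_not(preset):
    # Models the filter {"agentGraphVersion": {"not": new_version}}
    return preset["agentGraphVersion"] != new_version

def matches_lt(preset):
    # Models the filter {"agentGraphVersion": {"lt": new_version}}
    return preset["agentGraphVersion"] < new_version

# "not" also selects p3, which would downgrade it to version 5
assert [p["id"] for p in presets if matches_not(p)] == ["p1", "p3"]
# "lt" only promotes presets pinned to older versions
assert [p["id"] for p in presets if matches_lt(p)] == ["p1"]
```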

await library_db.migrate_webhook_presets_to_new_version(
    user_id=user_id,
    graph_id=graph_id,
    new_version=new_graph_version.version,
Contributor

🤖 🟡 Nice to Have: Migration call in update_graph is only reachable when new_graph_version.is_active is true.

Looking at the update_graph handler in v1.py, the webhook migration is placed inside the if new_graph_version.is_active: block. This means publishing a new version that is not yet set as active does not trigger migration. That is probably the correct intent (you only want to migrate when the new version becomes the live one), but the docstring for migrate_webhook_presets_to_new_version says it handles the "publish" flow in general.

Please verify: if a user saves a draft version that has a webhook trigger, should existing webhook presets be migrated? If no, update the docstring to clarify. If yes, the call needs to be unconditional.
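The guard being described can be modeled with a simplified stand-in for the handler. The function bodies below are illustrative only: `is_active` stands in for `new_graph_version.is_active`, and the migration records its arguments in a list instead of touching the database.

```python
# Simplified model of the update_graph call site: migration fires only when
# the new version is published as active. Purely illustrative, not the real handler.

migrated = []

def migrate_webhook_presets_to_new_version(user_id, graph_id, new_version):
    # Stand-in for the real DB migration helper.
    migrated.append((user_id, graph_id, new_version))

def update_graph(user_id, graph_id, new_version, is_active):
    # ... persist the new graph version ...
    if is_active:  # the guard under discussion
        migrate_webhook_presets_to_new_version(user_id, graph_id, new_version)

update_graph("u1", "g1", 5, is_active=False)  # saving a draft: no migration
update_graph("u1", "g1", 5, is_active=True)   # activating: presets migrate
assert migrated == [("u1", "g1", 5)]
```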

@pytest.fixture
def mock_prisma():
    with patch("prisma.models.AgentPreset.prisma") as mock:
        mock_client = AsyncMock()
Contributor

🤖 🟡 Nice to Have: Mock target patches prisma.models.AgentPreset.prisma (definition site) rather than the usage site.

Per repo convention (see copilot-instructions.md), tests should patch where the symbol is used, not where it is defined. The migrate_webhook_presets_to_new_version function in library/db.py imports prisma at the module level. The patch target should be:

patch("backend.api.features.library.db.prisma.models.AgentPreset.prisma")

The current target ("prisma.models.AgentPreset.prisma") may work by coincidence in this case, but it is fragile — if the prisma module is re-imported or aliased, the mock will not intercept the call.
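The eager-import pitfall behind this convention can be reproduced with two throwaway modules. `defs_mod` and `usage_mod` are synthetic names built at runtime, not real project modules; they model a definition site and a usage site that binds the name eagerly.

```python
import sys
import types
from unittest.mock import patch

# Definition site: where the helper lives.
defs = types.ModuleType("defs_mod")
defs.helper = lambda: "real"
sys.modules["defs_mod"] = defs

# Usage site: binds the name eagerly, like `from defs_mod import helper`.
usage = types.ModuleType("usage_mod")
usage.helper = defs.helper
usage.call = lambda: usage.helper()  # looks up usage.helper at call time
sys.modules["usage_mod"] = usage

# Patching the definition site does NOT affect the eagerly bound name:
with patch("defs_mod.helper", return_value="mocked"):
    assert usage.call() == "real"

# Patching the usage site intercepts the call:
with patch("usage_mod.helper", return_value="mocked"):
    assert usage.call() == "mocked"
```

In the PR's case the module-level `import prisma` means the attribute is resolved through the `prisma` module at call time, which is why the definition-site patch happens to work; the usage-site target is still the robust choice.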

Contributor

@majdyz majdyz left a comment

🤖 REQUEST CHANGES — PR #12753 (fix(backend): auto-migrate webhook presets to new agent version on publish)

Good contribution and a real pain point worth fixing. The core approach (bulk-update webhook-attached presets via Prisma on publish/activate) is sound and the user-scoping filter (userId, agentGraphId, isDeleted: false) is correct. Tests are present and cover the happy path.

Blocker:

Version filter is too broad. The "agentGraphVersion": {"not": new_version} condition will update presets that are already pinned to a newer version than the one being published (e.g., a preset at v7 would be downgraded to v5 if someone republishes v5). The docstring even says "pinned to an older version" but the filter does not enforce that. Change to:

"agentGraphVersion": {"lt": new_version}

Should fix:

  • The test's mock patches "prisma.models.AgentPreset.prisma" (definition site). Per repo convention, patch at the usage site: "backend.api.features.library.db.prisma.models.AgentPreset.prisma". This is currently fragile.

Nits / clarifications:

  • The update_graph call site is inside if new_graph_version.is_active: — migration only fires when the new version is published as active. This is probably correct behaviour, but the docstring says "new graph version is published" without that qualifier. Align the docstring with the actual guard.
  • The PR description's Why/What/How section was left as template comments. Please fill it in — the template placeholder text is still visible.

Overall quality is good — once the version filter is corrected this should be ready to merge.


Labels

platform/backend (AutoGPT Platform - Back end), size/l

Projects

Status: 🚧 Needs work

Development

Successfully merging this pull request may close these issues.

Agent trigger update to new agent version

3 participants