After running `azd up`, or `azd provision` followed by `azd hooks run postprovision`, use these steps to verify that all components were deployed correctly and are functioning as expected.
| Component | How to Verify | Expected State |
|---|---|---|
| Fabric Capacity | Azure Portal → Microsoft Fabric capacities | Active (not Paused) |
| Fabric Workspace | app.fabric.microsoft.com | Workspace visible with 3 lakehouses |
| PostgreSQL Flexible Server | Azure Portal → Azure Database for PostgreSQL flexible servers | Ready |
| Microsoft Foundry project | ai.azure.com | Project accessible, models deployed |
| AI Search Index | Azure Portal → AI Search → Indexes | onelake-index exists |
| Purview Scan | Purview Portal → Data Map → Sources | Fabric data source registered |
The Fabric capacity must be in Active state for the workspace and lakehouses to function.
- Navigate to Azure Portal → Microsoft Fabric capacities
- Select your capacity (e.g., `fabricdev<envname>`)
- Verify the State shows Active
If the capacity is Paused:
```shell
# Resume via Azure CLI
az fabric capacity resume --capacity-name <capacity-name> --resource-group <rg-name>
```

Cost Note: Fabric capacities incur charges while Active. The capacity can be paused when not in use to reduce costs.
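Pausing can be done from the CLI as well. A sketch, assuming the same Fabric CLI extension that provides `resume` also exposes a `suspend` counterpart (verify with `az fabric capacity --help` before relying on it):

```shell
# Pause the capacity when not in use (assumed counterpart of the resume
# command above; replace the placeholders with your resource names).
az fabric capacity suspend --capacity-name <capacity-name> --resource-group <rg-name>
```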
- Navigate to app.fabric.microsoft.com
- Sign in with your Azure credentials
- Select the workspace created by the deployment (e.g., `workspace-<envname>`)
- Verify the following lakehouses exist:
  - bronze — Raw ingested documents
  - silver — Processed/transformed data
  - gold — Curated analytics-ready data
- Open the bronze lakehouse and verify the `Files/documents` folder structure exists
End-to-end mirroring is not completed by `azd up` or the post-provision hooks; some steps are manual. For the full steps (including creating the mirror via New item in the Fabric portal), follow PostgreSQL mirroring.
The PostgreSQL server must be in Running state to accept connections.
- Navigate to Azure Portal → Azure Database for PostgreSQL flexible servers
- Select the server created by the deployment
- Verify the Status shows Ready and the State shows Running
Use the connection details from the Azure Portal Connection strings blade or from your azd environment values.
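If you script this check, the connection string can be assembled from those values. A minimal sketch with a hypothetical helper (`build_pg_conn`) and placeholder arguments; substitute your real server, database, and user names:

```shell
# Hypothetical helper: assemble the psql connection string from its parts.
# (Take real values from `azd env get-values` or the Connection strings blade.)
build_pg_conn() {
  printf 'host=%s.postgres.database.azure.com port=5432 dbname=%s user=%s sslmode=require' \
    "$1" "$2" "$3"
}

# Example with placeholder values:
build_pg_conn myserver mydb myuser
```

Pass the result straight to psql, e.g. `psql "$(build_pg_conn myserver mydb myuser)"`.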
```shell
psql "host=<server>.postgres.database.azure.com port=5432 dbname=<db-name> user=<username> sslmode=require"
```

- Navigate to Azure Portal → AI Search → your search service
- Go to Indexes and verify `onelake-index` exists
- Check the Document count — should be > 0 if documents were uploaded to the bronze lakehouse
- Go to Indexers and verify `onelake-indexer` shows:
  - Status: Success
  - Last run: Recent timestamp
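If you want to check the last run from a script rather than the portal, the status JSON can be parsed with standard tools. A sketch against a sample payload (the `lastResult` shape follows the Azure AI Search REST API's indexer-status response; the inline JSON is illustrative):

```shell
# Sample of the JSON an indexer-status call returns (real output has more fields).
status_json='{"lastResult":{"status":"success","itemsProcessed":12,"itemsFailed":0}}'

# Pull out the last run status with POSIX sed (no jq dependency assumed).
last_status=$(printf '%s' "$status_json" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')
echo "$last_status"
```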
Note: Uploading new files to the bronze lakehouse does not auto-trigger the indexer. If new documents do not appear, re-run it manually after uploads:

```shell
az search indexer run --name onelake-indexer --service-name <search-name> --resource-group <rg>
```

- In the Search service, go to Search explorer
- Run a simple query: `*`
- Verify documents are returned
If no documents appear, check:
- Documents exist in `bronze/Files/documents/`
- Indexer has run successfully (check indexer execution history)
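The same document check can be run outside the portal. A hedged sketch against the Azure AI Search REST API (placeholders are yours to fill in; get a key from Portal → your search service → Keys):

```shell
# Count documents in the index via the Search REST API ($count=true adds an
# @odata.count field to the response). Replace the placeholders before running.
curl -s "https://<search-name>.search.windows.net/indexes/onelake-index/docs?search=*&\$count=true&api-version=2023-11-01" \
  -H "api-key: <query-or-admin-key>"
```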
- Navigate to ai.azure.com
- Sign in and select your Microsoft Foundry project
- Verify:
- Models — Check that GPT-4o and text-embedding-ada-002 (or configured models) are deployed
- Connections — AI Search connection should be listed
- Playground — Test the chat playground with a sample query
Before testing, upload at least one sample PDF into the bronze lakehouse (Files/documents) and re-run the indexer.
Re-run the indexer in the Azure portal:
- Navigate to Azure Portal → AI Search → your search service
- Go to Indexers and select `onelake-indexer`
- Click Run

Or run it from the CLI:

```shell
az search indexer run --name onelake-indexer --service-name <search-name> --resource-group <rg>
```

- In Microsoft Foundry, go to Playgrounds → Chat
- Click Add your data
- Select your AI Search index (`onelake-index`)
- Ask a question about your indexed documents
If the connection fails, verify RBAC roles are assigned (see Troubleshooting section).
- Navigate to the Microsoft Purview governance portal
- Go to Data Map → Sources
- Verify the Fabric data source is registered at the container level and the collection is `collection-<envname>`
- Check Scans to confirm the workspace-scoped scan completed
If `purviewCollectionName` is left empty in `infra/main.bicepparam`, the automation defaults to `collection-<AZURE_ENV_NAME>`.
If the identity running azd does not have Purview Collection Admin (or equivalent) on the target collection, the Purview scripts will warn and skip collection, datasource, and scan steps. Grant the role, then rerun the Purview scripts.
If you need to rerun the Purview steps after provisioning:
```shell
pwsh ./scripts/automationScripts/FabricPurviewAutomation/create_purview_collection.ps1
pwsh ./scripts/automationScripts/FabricWorkspace/CreateWorkspace/register_fabric_datasource.ps1
pwsh ./scripts/automationScripts/FabricPurviewAutomation/trigger_purview_scan_for_fabric_workspace.ps1
```

Lineage appears only after you run data movement or transformation jobs (for example, copying data from bronze to silver). If you have not moved data yet, skip lineage verification.
When `networkIsolation` is set to `true` in `infra/main.bicepparam` during provisioning:
- Go to Azure Portal → Microsoft Foundry → your account
- Click Settings → Networking
- Verify:
  - Public network access: Disabled (if fully isolated)
  - Private endpoints: Active connections listed
- Open the Workspace managed outbound access tab to see private endpoints
When accessing Microsoft Foundry from outside the virtual network, you should see an access denied message:
This is expected behavior — the resources are only accessible from within the virtual network.
For network-isolated deployments, use Azure Bastion to access resources:
- Navigate to Azure Portal → your resource group → Virtual Machine
- Ensure the VM is Running (start it if stopped)
- Select Bastion under the Connect menu
- Enter the VM admin credentials (set during deployment) and click Connect
  - Admin username: `vmUserName` in `infra/main.bicep`
  - Admin password: `vmAdminPassword` in `infra/main.bicepparam` (defaults to the `VM_ADMIN_PASSWORD` environment variable)
  - If you do not have them, reset the password in Azure Portal → Virtual machine → Reset password.
- Once connected, open Edge browser and navigate to:
  - ai.azure.com — Microsoft Foundry
  - app.fabric.microsoft.com — Fabric
- Complete MFA if prompted
- You should now have full access to the isolated resources
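As an alternative to the portal flow above, the Bastion session can be opened from a local terminal. A sketch assuming the Azure CLI `bastion` extension is installed (`az extension add --name bastion`); `az network bastion rdp` launches the RDP client on Windows, and `az network bastion tunnel` is the fallback elsewhere:

```shell
# Open an RDP session to the VM through Bastion. Replace the placeholders
# with your VM, Bastion host, and resource group names.
VM_ID=$(az vm show --name <vm-name> --resource-group <rg> --query id -o tsv)
az network bastion rdp --name <bastion-name> --resource-group <rg> --target-resource-id "$VM_ID"
```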
```shell
# Check capacity state
az resource show --ids /subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Fabric/capacities/<name> --query properties.state

# Resume capacity
az fabric capacity resume --capacity-name <name> --resource-group <rg>
```

Verify RBAC roles are assigned to the Microsoft Foundry identities:
```shell
# Get the AI Search resource ID
SEARCH_ID=$(az search service show --name <search-name> --resource-group <rg> --query id -o tsv)

# Check role assignments
az role assignment list --scope $SEARCH_ID --output table
```

Required roles on the AI Search service:
- Search Service Contributor — For the Microsoft Foundry account and project managed identities
- Search Index Data Contributor — For read/write access to index data
- Search Index Data Reader — For read access to index data
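The role listing can be diffed against the required roles in a few lines of shell. A sketch with the assignment listing inlined as sample data; in practice, feed it the output of `az role assignment list --scope $SEARCH_ID --query "[].roleDefinitionName" -o tsv`:

```shell
# The three roles the Foundry identities need on the AI Search service.
required="Search Service Contributor
Search Index Data Contributor
Search Index Data Reader"

# Sample assignment listing (replace with the real az CLI output).
assigned="Search Service Contributor
Search Index Data Reader"

# Collect every required role not present in the assigned list.
missing=""
while IFS= read -r role; do
  printf '%s\n' "$assigned" | grep -Fxq "$role" || missing="$missing$role\n"
done <<EOF
$required
EOF

printf "$missing"   # lists any role that still needs to be granted
```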
If roles are missing, re-run the RBAC setup:
```shell
eval $(azd env get-values)
pwsh ./scripts/automationScripts/OneLakeIndex/06_setup_ai_foundry_search_rbac.ps1
```

- Verify documents exist in the bronze lakehouse:
  - Go to Fabric → bronze lakehouse → Files → documents
  - If needed, follow Testing AI Search Connection in Playground to upload a sample PDF
- Check indexer status:
  - Azure Portal → AI Search → Indexers → `onelake-indexer`
  - Review execution history for errors
- Manually trigger indexer:

  ```shell
  az search indexer run --name onelake-indexer --service-name <search-name> --resource-group <rg>
  ```
- Verify Purview has Fabric workspace access:
  - The Purview managed identity needs Contributor role on the Fabric workspace
- Check scan configuration:
  - Purview Portal → Data Map → Sources → Fabric source → Scans
- Re-run the registration script:

  ```shell
  eval $(azd env get-values)
  pwsh ./scripts/automationScripts/FabricWorkspace/CreateWorkspace/register_fabric_datasource.ps1
  ```
To re-run all post-provision hooks:

```shell
azd hooks run postprovision
```

To run a specific script:

```shell
eval $(azd env get-values)
pwsh ./scripts/automationScripts/<path-to-script>.ps1
```

Once verification is complete:
- Upload documents to the bronze lakehouse for indexing (if you haven't already in previous steps)
- Test PostgreSQL connectivity (if you plan to use the database)
- Complete PostgreSQL mirroring in Fabric (if needed) — follow PostgreSQL mirroring
- Test the Microsoft Foundry playground with your indexed content
- Configure additional models if needed
- Deploy your app from the Microsoft Foundry playground
- Review governance in Microsoft Purview








