To deploy this solution accelerator, ensure you have access to an Azure subscription with the necessary permissions to create resource groups, resources, app registrations, and assign roles at the resource group level. This should include Contributor role at the subscription level and Role Based Access Control role on the subscription and/or resource group level. Follow the steps in Azure Account Set Up.
Note: When you deploy this solution, you will automatically be granted access to interact with the Cosmos DB database that stores your application data. Specifically, you'll have permissions to:
- Read database information and settings
- Create, modify, and delete data storage containers (think of these as folders for organizing your data)
- Add, view, update, and remove individual data records within those containers
Check the Azure Products by Region page and select a region where the following services are available:
- Azure AI Foundry
- Azure Container Apps
- Azure Container Registry
- Azure Cosmos DB
- Azure Key Vault
- Azure AI Search
- GPT Model Capacity
Here are some example regions where the services are available: East US, East US2, Japan East, UK South, Sweden Central.
If PowerShell scripts fail to run because they are not digitally signed, you can temporarily adjust the execution policy for the current session by running the following command in an elevated PowerShell session:

Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass

This allows the scripts to run for the current session without permanently changing your system's policy.
Ensure that you are using the latest version of the Azure Developer CLI.
The azd version must be 1.18.0 or higher.
Upgrade commands by OS:

- Windows (using winget):

  winget install microsoft.azd

- Linux (using the install script):

  curl -fsSL https://aka.ms/install-azd.sh | bash

- macOS (using Homebrew):

  brew update && brew tap azure/azd && brew install azd
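Before continuing, it can help to verify that the installed version meets the minimum. The following is a hedged sketch; it assumes `azd version` prints a semantic version such as 1.18.0 somewhere in its output:

```shell
# Minimal version check sketch: compares the installed azd version
# against the 1.18.0 minimum using version-aware sorting.
min="1.18.0"
ver_ok() {
  # true when version $1 sorts at or after the minimum
  printf '%s\n%s\n' "$min" "$1" | sort -V | head -n1 | grep -qx "$min"
}
if ver_ok "$(azd version 2>/dev/null | grep -o '[0-9][0-9.]*' | head -n1)"; then
  echo "azd is up to date"
else
  echo "upgrade azd (need >= $min)" >&2
fi
```

If azd is missing or too old, use the upgrade command for your OS above.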
Pick from the options below to see step-by-step instructions for GitHub Codespaces, VS Code Dev Containers, Local Environments, and Bicep deployments.
Deploy in GitHub Codespaces
You can run this solution using GitHub Codespaces. The button will open a web-based VS Code instance in your browser:
1. Open the solution accelerator (this may take several minutes).
2. Accept the default values on the create Codespaces page.
3. Open a terminal window if it is not already open.
4. Continue with the deployment options.
Deploy in VS Code Dev Containers
You can run this solution in VS Code Dev Containers, which will open the project in your local VS Code using the Dev Containers extension:
1. Start Docker Desktop (install it if not already installed).
2. Open the project.
3. In the VS Code window that opens, once the project files show up (this may take several minutes), open a terminal window.
4. Continue with the deployment options.
Deploy in VS Code Web
1. Click the badge above (this may take a few minutes to load).
2. Sign in with your Azure account when prompted.
3. Select the subscription where you want to deploy the solution.
4. Wait for the environment to initialize (it includes all deployment tools).
5. Once the solution opens, the AI Foundry terminal will automatically run the following command to install the required dependencies:

   sh install.sh

   During this process, you'll be prompted with the message:

   What would you like to do with these files?
   - Overwrite with versions from template
   - Keep my existing files unchanged

   Choose "Overwrite with versions from template" and provide a unique environment name when prompted.
6. Continue with the deployment options.
Deploy in your local Environment
If you're not using one of the above options for opening the project, then you'll need to:

1. Make sure the following tools are installed:
   - PowerShell (v7.0+) - available for Windows, macOS, and Linux
   - Azure Developer CLI (azd) (v1.18.0+)
   - Python 3.9+
   - Docker Desktop
   - Git
2. Clone the repository or download the project code via command line:

   azd init -t microsoft/Multi-Agent-Custom-Automation-Engine-Solution-Accelerator/

   ⚠️ Warning: The azd init command will download and initialize the project template. If you run this command in a directory that already contains project files, it may overwrite your existing changes. Only run this command once when setting up the project for the first time. If you need to update an existing project, consider using git pull or manually downloading updates instead.
3. Open the project folder in your terminal or editor.
4. Continue with the deployment options.
The infra folder of the Multi Agent Solution Accelerator contains the main.bicep Bicep script, which defines all Azure infrastructure components for this solution.
By default, the azd up command uses the main.parameters.json file to deploy the solution. This file is pre-configured for a sandbox environment — ideal for development and proof-of-concept scenarios, with minimal security and cost controls for rapid iteration.
For production deployments, the repository also provides main.waf.parameters.json, which applies a Well-Architected Framework (WAF) aligned configuration. This option enables additional Azure best practices for reliability, security, cost optimization, operational excellence, and performance efficiency, such as:
Prerequisite — Enable the Microsoft.Compute/EncryptionAtHost feature for every subscription (and region, if required) where you plan to deploy VMs or VM scale sets with encryptionAtHost: true. Repeat the registration steps below for each target subscription (and for each region when applicable). This step is required for WAF-aligned (production) deployments.
Steps to enable the feature:

1. Set the target subscription:

   az account set --subscription "<YourSubscriptionId>"

2. Register the feature (one time per subscription):

   az feature register --name EncryptionAtHost --namespace Microsoft.Compute

3. Wait until registration completes and shows "Registered":

   az feature show --name EncryptionAtHost --namespace Microsoft.Compute --query properties.state -o tsv

4. Refresh the provider (if required):

   az provider register --namespace Microsoft.Compute

5. Re-run the deployment after registration is complete.

Note: Feature registration can take several minutes. Ensure the feature is registered before attempting deployments that require encryptionAtHost.

Reference: Azure Host Encryption — https://learn.microsoft.com/azure/virtual-machines/disks-enable-host-based-encryption-portal?tabs=azure-cli
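As a convenience, the registration steps above can be combined into a single script. This is a sketch: the commands are exactly those listed in the steps, while the 30-second polling interval is an assumption.

```shell
# Sketch combining the EncryptionAtHost registration steps above.
# The 30-second polling interval is an assumption, not a requirement.
az account set --subscription "<YourSubscriptionId>"
az feature register --name EncryptionAtHost --namespace Microsoft.Compute
until [ "$(az feature show --name EncryptionAtHost --namespace Microsoft.Compute \
           --query properties.state -o tsv)" = "Registered" ]; do
  echo "waiting for EncryptionAtHost registration..."
  sleep 30
done
az provider register --namespace Microsoft.Compute
```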
- Enhanced network security (e.g., Network protection with private endpoints)
- Stricter access controls and managed identities
- Logging, monitoring, and diagnostics enabled by default
- Resource tagging and cost management recommendations
How to choose your deployment configuration:

- Use the default main.parameters.json file for a sandbox/dev environment.
- For a WAF-aligned, production-ready deployment, copy the contents of main.waf.parameters.json into main.parameters.json before running azd up.
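The copy step above can be sketched as a small shell helper. This is illustrative only, assuming you run it from the repository root with the default infra/ layout; the .bak backup filename is an invention to keep the sandbox copy recoverable:

```shell
# Sketch: promote the WAF-aligned parameters file before running azd up.
# The backup filename is an invented convention, not part of the template.
use_waf_params() {
  dir="${1:-infra}"
  if [ ! -f "$dir/main.waf.parameters.json" ]; then
    echo "no main.waf.parameters.json in $dir (run from the repo root)" >&2
    return 1
  fi
  cp "$dir/main.parameters.json" "$dir/main.parameters.sandbox.bak"   # keep the sandbox copy
  cp "$dir/main.waf.parameters.json" "$dir/main.parameters.json"
}
if use_waf_params infra; then
  echo "WAF-aligned parameters active; run azd up"
fi
```

Restore the .bak file to return to the sandbox configuration.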
By default, the solution sets the VM administrator username and password from environment variables. If you do not configure these values, a randomly generated GUID will be used for both the username and password.
To set your own VM credentials before deployment, use:
azd env set AZURE_ENV_VM_ADMIN_USERNAME <your-username>
azd env set AZURE_ENV_VM_ADMIN_PASSWORD <your-password>

Tip
Always review and adjust parameter values (such as region, capacity, security settings, and Log Analytics workspace configuration) to match your organization's requirements before deploying. For production, ensure you have sufficient quota and follow the principle of least privilege for all identities and role assignments.
Important
The WAF-aligned configuration is under active development. More Azure Well-Architected recommendations will be added in future updates.
Pick from the options below to see step-by-step instructions for GitHub Codespaces, VS Code Dev Containers, Local Environments, and Bicep deployments.
Deploy in GitHub Codespaces
You can run this solution using GitHub Codespaces. The button will open a web-based VS Code instance in your browser:
1. Open the solution accelerator (this may take several minutes).
2. Accept the default values on the create Codespaces page.
3. Open a terminal window if it is not already open.
4. Continue with the deploying steps.
Deploy in VS Code Dev Containers
You can run this solution in VS Code Dev Containers, which will open the project in your local VS Code using the Dev Containers extension:

1. Start Docker Desktop (install it if not already installed).
2. Open the project.
3. In the VS Code window that opens, once the project files show up (this may take several minutes), open a terminal window.
4. Continue with the deploying steps.
Deploy in your local Environment
If you're not using one of the above options for opening the project, then you'll need to:

1. Make sure the following tools are installed:
   - PowerShell (v7.0+) - available for Windows, macOS, and Linux
   - Azure Developer CLI (azd) (v1.18.0+)
   - Python 3.9+
   - Docker Desktop
   - Git
2. Clone the repository or download the project code via command line:

   azd init -t microsoft/Multi-Agent-Custom-Automation-Engine-Solution-Accelerator/
3. Open the project folder in your terminal or editor.
4. Continue with the deploying steps.
Review the following settings during deployment to customize your configuration:
Configurable Deployment Settings
When you start the deployment, most parameters will have default values, but you can update the following settings here:
| Setting | Description | Default value |
|---|---|---|
| Environment Name | Used as a prefix for all resource names to ensure uniqueness across environments. | macae |
| Azure Region | Location of the Azure resources. Controls where the infrastructure will be deployed. | swedencentral |
| OpenAI Deployment Location | Specifies the region for OpenAI resource deployment. | swedencentral |
| Model Deployment Type | Defines the deployment type for the AI model (e.g., Standard, GlobalStandard). | GlobalStandard |
| GPT Model Name | Specifies the name of the GPT model to be deployed. | gpt-4o |
| GPT Model Version | Version of the GPT model to be used for deployment. | 2024-08-06 |
| GPT Model Capacity | Sets the GPT model capacity. | 150 |
| Image Tag | Docker image tag used for container deployments. | latest |
| Enable Telemetry | Enables telemetry for monitoring and diagnostics. | true |
| Existing Log Analytics Workspace | To reuse an existing Log Analytics Workspace ID instead of creating a new one. | (none) |
| Existing Azure AI Foundry Project | To reuse an existing Azure AI Foundry Project ID instead of creating a new one. | (none) |
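For scripted setups, several of these settings can be pinned as azd environment values before running azd up. The variable names below are hypothetical examples for illustration; confirm the exact parameter names used by this template in infra/main.parameters.json.

```shell
# Hypothetical variable names -- confirm against infra/main.parameters.json.
azd env set AZURE_LOCATION swedencentral        # Azure Region
azd env set AZURE_ENV_MODEL_NAME gpt-4o         # GPT Model Name
azd env set AZURE_ENV_MODEL_CAPACITY 150        # GPT Model Capacity
```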
[Optional] Quota Recommendations
By default, the GPT model capacity in deployment is set to 140k tokens.
To adjust quota settings, follow these steps.
Reusing an Existing Log Analytics Workspace
Guide to get your Existing Workspace ID
Reusing an Existing Azure AI Foundry Project
Guide to get your Existing Project ID
Once you've opened the project in Codespaces, Dev Containers, or locally, you can deploy it to Azure by following these steps:
⚠️ Critical: If you're redeploying or have deployed this solution before, you must create a fresh environment to avoid conflicts and deployment failures.
Choose one of the following before deployment:
Option A: Create a completely new environment (Recommended)
azd env new <new-environment-name>

Option B: Reinitialize in a new directory

# Navigate to a new directory
cd ../my-new-deployment
azd init -t microsoft/Multi-Agent-Custom-Automation-Engine-Solution-Accelerator/

💡 Why is this needed? Azure resources maintain state information tied to your environment. Reusing an old environment can cause naming conflicts, permission issues, and deployment failures.
1. Log in to Azure:

   azd auth login

   To log in to a specific tenant:

   azd auth login --tenant-id <tenant-id>

2. Provision and deploy all the resources:

   azd up

   Note: This solution accelerator requires Azure Developer CLI (azd) version 1.18.0 or higher. Please ensure you have the latest version installed before proceeding with deployment. Download azd here.

3. Provide an azd environment name (e.g., "macaeapp").

4. Select a subscription from your Azure account and choose a location that has quota for all the resources.
   - This deployment will take 4-6 minutes to provision the resources in your account and set up the solution with sample data.
   - If you encounter an error or timeout during deployment, changing the location may help, as there could be availability constraints for the resources.
   - Upon successful completion, you will see a success message indicating that all resources have been deployed, along with the application URL and next steps for uploading team configurations and sample data.

5. After deployment completes, you can upload team configurations using the command printed in the terminal. It will look like one of the following; run the appropriate command for your shell from the project root:

   For Bash (Linux/macOS/WSL):

   bash infra/scripts/selecting_team_config_and_data.sh

   For PowerShell (Windows):

   infra\scripts\Selecting-Team-Config-And-Data.ps1

6. [Optional] Set up authentication for your web application by following the steps in Set Up Authentication in Azure App Service.

7. If you are done trying out the application, you can delete the resources by running azd down.
If you encounter any issues during the deployment process, please refer to the troubleshooting document for detailed steps and solutions.
Now that you've completed your deployment, you can start using the solution.
To help you get started, here are some Sample Questions you can follow to try it out.
If you are done trying out the application, you can delete all resources by running:
azd down

Note: If you deployed with enableRedundancy=true and Log Analytics workspace replication is enabled, you must first disable replication before running azd down, or the resource group deletion will fail. Follow the steps in Handling Log Analytics Workspace Deletion with Replication Enabled, wait until replication returns false, then run azd down.
Note for macOS Developers: If you are using macOS on Apple Silicon (ARM64) the DevContainer will not work. This is due to a limitation with the Azure Functions Core Tools (see here).
The easiest way to run this accelerator is in a VS Code Dev Container, which will open the project in your local VS Code using the Dev Containers extension:

1. Start Docker Desktop (install it if not already installed).
2. In the VS Code window that opens, once the project files show up (this may take several minutes), open a terminal window.
The solution contains a development container with all the required tooling to develop and deploy the accelerator. To deploy the Multi-Agent solutions accelerator using the provided development container you will also need:
If you are running this on Windows, we recommend you clone this repository in WSL
git clone https://github.com/microsoft/Multi-Agent-Custom-Automation-Engine-Solution-Accelerator

Open the cloned repository in Visual Studio Code and connect to the development container:

code .

!!! tip
    Visual Studio Code should recognize the available development container and ask you to open the folder using it. For additional details on connecting to remote containers, please see the Open an existing folder in a container quickstart.
When you start the development container for the first time, the container will be built. This usually takes a few minutes. Please use the development container for all further steps.
The files for the dev container are located in /.devcontainer/ folder.
1. Clone the repository.

2. Log in to the Azure CLI:
   - Check your login status using:

     az account show

   - If not logged in, use:

     az login

   - To specify a tenant, use:

     az login --tenant <tenant_id>

3. Create a resource group, either through the Azure Portal or the Azure CLI:

   az group create --name <resource-group-name> --location EastUS2

4. Deploy the Bicep template:

   You can use the Bicep extension for VS Code (right-click the .bicep file, then select "Show deployment plan") or use the Azure CLI:

   az deployment group create -g <resource-group-name> -f infra/main.bicep --query 'properties.outputs'

   Note: You will be prompted for a principalId, which is the Object ID of your user in Entra ID. To find it, use the Azure Portal or run:

   az ad signed-in-user show --query id -o tsv

   You will also be prompted for locations for the Cosmos DB and OpenAI services. This allows you to choose separate regions where there may be service quota restrictions.
Additional notes:

Role assignments in Bicep deployment:

The main.bicep deployment includes the assignment of the appropriate roles to the Azure OpenAI (AOAI) and Cosmos DB services. If you want to modify an existing implementation (for example, to use resources deployed as part of the simple deployment for local debugging), you will need to add your own credentials to access the Cosmos DB and AOAI services. You can add these permissions using the following commands:
az cosmosdb sql role assignment create --resource-group <solution-accelerator-rg> --account-name <cosmos-db-account-name> --role-definition-name "Cosmos DB Built-in Data Contributor" --principal-id <aad-user-object-id> --scope /subscriptions/<subscription-id>/resourceGroups/<solution-accelerator-rg>/providers/Microsoft.DocumentDB/databaseAccounts/<cosmos-db-account-name>
az role assignment create --assignee <aad-user-upn> --role "Azure AI User" --scope /subscriptions/<subscription-id>/resourceGroups/<solution-accelerator-rg>/providers/Microsoft.CognitiveServices/accounts/<azure-ai-foundry-name>
Using a Different Database in Cosmos:
You can set the solution up to use a different database in Cosmos DB. For example, you can name it something like macae-dev. To do this:

- Change the environment variable COSMOSDB_DATABASE to the new database name.
- Create the database in the Cosmos DB account. You can do this from the Data Explorer pane in the Azure Portal: click the "+ New Container" drop-down and provide the necessary details.
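Alternatively, the database can be created from the command line. The following is a sketch using the Azure CLI; the database name matches the macae-dev example above, and the resource names are placeholders to fill in:

```shell
# Create the new database in the existing Cosmos DB account.
# <solution-accelerator-rg> and <cosmos-db-account-name> are placeholders.
az cosmosdb sql database create \
  --resource-group <solution-accelerator-rg> \
  --account-name <cosmos-db-account-name> \
  --name macae-dev
```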
1. Create a .env file:
   - Navigate to the src\backend folder and create a .env file based on the provided .env.sample file.
   - Update the .env file with the required values from your Azure resource group, found in the Azure Portal App Service environment variables.
   - Alternatively, if resources were provisioned using azd provision or azd up, a .env file is automatically generated at .azure/<env-name>/.env. You can copy the contents of this file into your backend .env file.

     Note: To get your <env-name>, run azd env list to see which environment is the default.

2. Fill in the .env file:
   - Use the output from the deployment or check the Azure Portal under "Deployments" in the resource group.
   - Make sure to set APP_ENV to "dev" in the .env file.
   - For local development, make sure to include the following environment variables in the .env file: BACKEND_API_URL=http://localhost:8000, FRONTEND_SITE_NAME=http://127.0.0.1:3000, and MCP_SERVER_ENDPOINT=http://localhost:9000/mcp.
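Putting the values above together, a minimal local-development fragment of the backend .env file looks like:

```
APP_ENV=dev
BACKEND_API_URL=http://localhost:8000
FRONTEND_SITE_NAME=http://127.0.0.1:3000
MCP_SERVER_ENDPOINT=http://localhost:9000/mcp
```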
3. (Optional) Set up a virtual environment:
   - If you are using venv, create and activate your virtual environment for both the frontend and backend folders.

4. Install requirements - backend:
   - Open a terminal in the src/backend folder and run:

     pip install uv
     uv sync

5. Build the frontend (important):
   - Open a terminal in the src/frontend folder and run:

     pip install -r requirements.txt

   - Before running the frontend server, you must build the frontend to generate the necessary build/assets directory. From the src/frontend directory, run:

     npm install
     npm run build

6. Install requirements - MCP server:
   - Open a terminal in the src/mcp_server folder and run:

     pip install uv
     uv sync

7. Run the application:
   - From the src/backend directory, activate the virtual environment created in step 3 and run:

     python app.py

   - In a new terminal, from the src/frontend directory, run:

     python frontend_server.py

     or:

     npm run dev

   - Open a browser and navigate to http://localhost:3000.
   - To see the Swagger API documentation, navigate to http://localhost:8000/docs.
To deploy your local changes, rename the files below:

- Rename azure.yaml to azure_custom2.yaml, and azure_custom.yaml to azure.yaml.
- Go to the infra directory:
  - Rename main.bicep to main_custom2.bicep, and main_custom.bicep to main.bicep.

Continue with the deploying steps.
- Remove
You can debug the API backend running locally with VSCode using the following launch.json entry:
{
"name": "Debug Backend (FastAPI)",
"type": "debugpy",
"request": "launch",
"program": "${workspaceFolder}/src/backend/app_kernel.py",
"cwd": "${workspaceFolder}/src/backend",
"console": "integratedTerminal",
"justMyCode": false,
"python": "${workspaceFolder}/src/backend/.venv/Scripts/python.exe",
"env": {
"PYTHONPATH": "${workspaceFolder}/src/backend",
"UVICORN_LOG_LEVEL": "debug"
},
"args": [],
"serverReadyAction": {
"pattern": "Uvicorn running on (https?://[^\\s]+)",
"uriFormat": "%s",
"action": "openExternally"
}
}
To debug the python server in the frontend directory (frontend_server.py) and related, add the following launch.json entry:
{
"name": "Python Debugger: Frontend",
"type": "debugpy",
"request": "launch",
"cwd": "${workspaceFolder}/src/frontend",
"module": "uvicorn",
"args": ["frontend_server:app", "--port", "3000", "--reload"],
"jinja": true
}
To debug the MCP server, add the following launch.json entry:
{
"name": "Debug MCP Server",
"type": "debugpy",
"request": "launch",
"program": "${workspaceFolder}/src/mcp_server/mcp_server.py",
"cwd": "${workspaceFolder}/src/mcp_server",
"console": "integratedTerminal",
"justMyCode": false,
"python": "${workspaceFolder}/src/mcp_server/.venv/Scripts/python.exe",
"env": {
"PYTHONPATH": "${workspaceFolder}/src/mcp_server"
},
"args": [
"--transport", "streamable-http",
"--host", "0.0.0.0",
"--port", "9000"
]
}