When you start the deployment, most parameters will have **default values**, but you can update the following settings by following the steps [here](../docs/CustomizingAzdParameters.md):

| **Setting** | **Description** | **Default value** |
|---|---|---|
| **Azure Region** | The region where resources will be created. | East US |
| **Azure AI Content Understanding Location** | Location for the **Content Understanding** service. | Sweden Central |
| **Secondary Location** | A **less busy** region for **Azure Cosmos DB**, useful in case of availability constraints. | eastus2 |
| **Deployment Type** | Select from a drop-down list. | GlobalStandard |
| **GPT Model** | Choose from **gpt-4o**. | gpt-4o |
| **GPT Model Version** | GPT model version used in the deployment. | 2024-08-06 |
| **GPT Model Deployment Capacity** | Configure capacity for **GPT models**. | 30k |
| **Use Local Build** | Boolean flag to determine whether local container builds should be used. | false |
| **Image Tag** | Image version for deployment (allowed values: `latest`, `dev`, `hotfix`). | latest |
| **Existing Log Analytics Workspace** | Reuse an existing Log Analytics Workspace ID instead of creating a new one. | *(none)* |
| **Existing Azure AI Foundry Project** | Reuse an existing Azure AI Foundry Project ID instead of creating a new one. | *(none)* |
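For example, these defaults can be overridden before running `azd up` with `azd env set`. The parameter key names below are assumptions for illustration; see [CustomizingAzdParameters.md](../docs/CustomizingAzdParameters.md) for the authoritative list:

```shell
# Hypothetical parameter keys -- confirm the exact names in CustomizingAzdParameters.md.
azd env set AZURE_LOCATION "eastus2"            # Azure Region
azd env set AZURE_ENV_MODEL_NAME "gpt-4o"       # GPT Model
azd env set AZURE_ENV_MODEL_CAPACITY "30"       # GPT Model Deployment Capacity
azd env set AZURE_ENV_IMAGETAG "latest"         # Image Tag (latest, dev, or hotfix)
```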
</details>
- **Linux/macOS**:

  ```bash
  cd ./infra/scripts/
  ./docker-build.sh
  ```

- **Windows (PowerShell)**:

  ```powershell
  cd .\infra\scripts\
  .\docker-build.ps1
  ```
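If you want to see roughly what the scripts automate, a hypothetical manual build might look like the following. The image name, tag, and build context here are assumptions; the `docker-build` scripts remain the supported path:

```shell
# Hypothetical manual equivalent of docker-build.sh -- image name, tag, and
# build context are assumptions; inspect the script for the actual steps.
docker build -t contentprocessorapi:latest ./src/ContentProcessorApi
```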
- **Execute the script to register schemas**

  - Move to the samples/schemas folder in ContentProcessorApi - [/src/ContentProcessorApi/samples/schemas](/src/ContentProcessorApi/samples/schemas)

    Git Bash

    ```bash
    cd src/ContentProcessorAPI/samples/schemas
    ```

    PowerShell

    ```powershell
    cd .\src\ContentProcessorAPI\samples\schemas\
    ```

  - Then use the command below:

    Git Bash

    ```bash
    ./register_schema.sh https://<<API Service Endpoint>>/schemavault/ schema_info_sh.json
    ```

    PowerShell

    ```powershell
    ./register_schema.ps1 https://<<API Service Endpoint>>/schemavault/ .\schema_info_ps1.json
    ```
To help you get started, here’s a **sample process** you can follow in the app.

## **Process**

> Note: Download sample data files for **Invoices** and **Property Claims** from [here](../src/ContentProcessorAPI/samples).

### **API Documentation**

- Click on **API Documentation** to view and explore the available API endpoints and their details.

### **Upload**

> Note: The average response time is 1 minute.

_Sample operations:_

- Select the **Schema** under the Processing Queue pane.
- Click on the **Import Content** button.
- Choose a file from the downloaded list for data extraction corresponding to the selected **Schema**.
- Click the **Upload** button.

### **Review and Process**

_Sample operations:_

- Once the file status is marked as completed, click on the file.
- Once batch processing is done, the file is ready for review: the extracted data is displayed in the **Output Review** pane, and the corresponding file is visible in the **Source Document** pane.
- Edit any incorrect data in the JSON shown in the **Output Review** pane under the **Extracted Results** tab.
- Add notes under **Comments** and save the changes by clicking the **Save** button.
- You can view the process steps in the **Output Review** pane under the **Process Steps** tab, and expand the **Extract**, **Map**, and **Evaluate** sections to see the outputs from each process step.

### **Delete**

_Sample operation:_

- Click the **three-dot menu** at the end of the row to expand options, then select **Delete** to remove the item.

This structured approach ensures that users can efficiently extract key information and organize structured outputs for easy search and analysis.