diff --git a/0_Azure/2_AzureAnalytics/0_Fabric/demos/20_FabricCapacityMetrics.md b/0_Azure/2_AzureAnalytics/0_Fabric/demos/20_FabricCapacityMetrics.md
index abd99b0d6..09ce33d5a 100644
--- a/0_Azure/2_AzureAnalytics/0_Fabric/demos/20_FabricCapacityMetrics.md
+++ b/0_Azure/2_AzureAnalytics/0_Fabric/demos/20_FabricCapacityMetrics.md
@@ -16,7 +16,7 @@ Last updated: 2024-11-28
## Wiki
Table of Wiki (Click to expand)
+List of References (Click to expand)
- [Install the Microsoft Fabric capacity metrics app](https://learn.microsoft.com/en-us/fabric/enterprise/metrics-app-install?tabs=1st)
- [What is the Microsoft Fabric Capacity Metrics app?](https://learn.microsoft.com/en-us/fabric/enterprise/metrics-app)
@@ -31,8 +31,6 @@ Last updated: 2024-11-28
## Content
-- [Wiki](#wiki)
-- [Content](#content)
- [Microsoft Fabric Capacity Metrics app](#microsoft-fabric-capacity-metrics-app)
- [Installation Steps](#installation-steps)
- [Configuration Steps](#configuration-steps)
@@ -290,3 +288,8 @@ Steps to Access Microsoft Purview via Audit Logs:
+
+
Total Visitors
+
+
List of References (Click to expand)
+
+- [Migrating from Azure Synapse Spark to Fabric](https://learn.microsoft.com/en-us/fabric/data-engineering/migrate-synapse-overview)
+- [Migration planning: Azure Synapse Analytics dedicated SQL pools to Fabric Data Warehouse](https://learn.microsoft.com/en-us/fabric/data-warehouse/migration-synapse-dedicated-sql-pool-warehouse#prepare-for-migration)
+
+
+> - Create a Fabric workspace and link it to your existing Azure Data Lake Storage account.
+> - Import existing notebooks and Spark jobs into the Fabric workspace.
+> - Update references to external tables or files using the OneLake syntax.
+
+ ## Overview
+
+1. **Create a Fabric Workspace**
+ - **Navigate to Microsoft Fabric**: Access the [Microsoft Fabric portal](https://app.fabric.microsoft.com/).
+
+
+
+ - **Create Workspace**: Follow the steps to create a new workspace.
+
+ https://github.com/user-attachments/assets/209f18fd-d14a-4e2f-aff2-d6cb7be13cc1
+
+2. **Link ADLS Account**: Connect your existing Azure Data Lake Storage account to the new Fabric workspace.
+ - **Set Up ADLS Connection**:
+ - **Open Fabric Workspace**: Go to the workspace where you want to set up the connection.
+
+
+
+ - **Create a lakehouse**: Click on `New item`, search for `Lakehouse`:
+
+
+
+
+
+ - **Get Data**: Select the option to get data from Azure Data Lake Storage Gen2.
+ - **Enter ADLS URL**: Provide the URL to your Azure Data Lake Storage Gen2 account.
+ - **Authentication**: Choose the authentication method (e.g., account key, organizational account, service principal, or shared access signature).
+ - **Sign In**: Sign in to your Azure Data Lake Storage account using the selected authentication method.
+ - **Configure Connection**: Complete the configuration by specifying any additional settings required for the connection.
+ - **Create OneLake Shortcut**:
+ - **Navigate to Lakehouse View**: In the Fabric workspace, go to the Lakehouse view.
+ - **Create Shortcut**: Right-click and select "New Shortcut," then choose Azure Data Lake Storage Gen2 as the source.
+ - **Enter Details**: Provide the URL of the ADLS Gen2 account, connection name, authentication method, shortcut name, and subpath pointing to the root of the data you want to reference.
+ - **Create Shortcut**: Click "Create" and wait for the shortcut to appear in the Lakehouse view.
+ - **Preview Data**: Preview the data from the shortcut to ensure it is virtualized from the source.
+3. **Import Existing Notebooks and Spark Jobs**
+ - **Export from Synapse**: Export your notebooks and Spark job definitions from Azure Synapse.
+ - **Import to Fabric**: Use the import functionality in Fabric to bring these items into your new workspace.
+4. **Update References**
+ - **Identify External References**: Locate all references to external tables or files in your notebooks and Spark jobs.
+   - **Modify Syntax**: Update these references to use the OneLake syntax, ensuring they point to the correct locations in Fabric (see the sketch after this list).
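+
+As a minimal sketch of the reference update in step 4, the snippet below repoints a Synapse Spark read from an ADLS Gen2 URL to the equivalent OneLake URL. The workspace, lakehouse, storage account, and folder names are illustrative, not taken from this guide.
+
+```python
+# Old Azure Synapse reference (ADLS Gen2); names are illustrative.
+adls_path = "abfss://raw@mystorageaccount.dfs.core.windows.net/sales/2024/"
+
+# Equivalent OneLake reference once the data (or a shortcut to it) lives in a
+# lakehouse named SalesLakehouse inside a Fabric workspace named MigrationWS.
+onelake_path = (
+    "abfss://MigrationWS@onelake.dfs.fabric.microsoft.com/"
+    "SalesLakehouse.Lakehouse/Files/sales/2024/"
+)
+
+# `spark` is predefined in Fabric notebooks; only the path needs to change.
+df = spark.read.format("parquet").load(onelake_path)
+df.show(5)
+```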
+
+
+
+
Total Visitors
+
+
+> All Azure services as of now:
return to Content
diff --git a/0_Azure/README.md b/0_Azure/README.md
index 2b65f1379..a0278dc8e 100644
--- a/0_Azure/README.md
+++ b/0_Azure/README.md
@@ -5,7 +5,7 @@ Costa Rica
[](https://github.com/)
[brown9804](https://github.com/brown9804)
-Last updated: 2025-01-02
+Last updated: 2025-04-08
------------------------------------------
@@ -59,16 +59,18 @@ Last updated: 2025-01-02
@@ -93,7 +95,7 @@ Microsoft Azure is a cloud computing platform that provides a wide range of serv
## Key Azure Products
-Here are some important Azure products within each category:
+> Here are some important Azure products within each category:
### IaaS Products
diff --git a/0_Azure/3_AzureAI/WhatsNew/0_IgniteNews_2024.md b/0_Azure/WhatsNew/0_IgniteNews_2024.md
similarity index 99%
rename from 0_Azure/3_AzureAI/WhatsNew/0_IgniteNews_2024.md
rename to 0_Azure/WhatsNew/0_IgniteNews_2024.md
index e5e45ec9a..992c356eb 100644
--- a/0_Azure/3_AzureAI/WhatsNew/0_IgniteNews_2024.md
+++ b/0_Azure/WhatsNew/0_IgniteNews_2024.md
@@ -6,7 +6,7 @@ Costa Rica
[](https://github.com/)
[brown9804](https://github.com/brown9804)
-Last updated: 2024-12-24
+Last updated: 2025-04-08
----------
diff --git a/0_Azure/3_AzureAI/WhatsNew/1_AIvisionMarch2025.md b/0_Azure/WhatsNew/1_AIvisionMarch2025.md
similarity index 99%
rename from 0_Azure/3_AzureAI/WhatsNew/1_AIvisionMarch2025.md
rename to 0_Azure/WhatsNew/1_AIvisionMarch2025.md
index d616e802d..214d119af 100644
--- a/0_Azure/3_AzureAI/WhatsNew/1_AIvisionMarch2025.md
+++ b/0_Azure/WhatsNew/1_AIvisionMarch2025.md
@@ -6,7 +6,7 @@ Costa Rica
[](https://github.com/)
[brown9804](https://github.com/brown9804)
-Last updated: 2025-03-27
+Last updated: 2025-04-08
----------
diff --git a/0_Azure/WhatsNew/2_FabricConfVegas2025.md b/0_Azure/WhatsNew/2_FabricConfVegas2025.md
new file mode 100644
index 000000000..6bda02500
--- /dev/null
+++ b/0_Azure/WhatsNew/2_FabricConfVegas2025.md
@@ -0,0 +1,361 @@
+# Microsoft Fabric Community Conference (Las Vegas 2025) - Overview
+
+Costa Rica
+
+[](https://github.com)
+[](https://github.com/)
+[brown9804](https://github.com/brown9804)
+
+Last updated: 2025-04-08
+
+----------
+
+> Held in `Las Vegas from March 31 to April 2`, the conference showcased significant advancements and innovations across different domains of the Fabric platform.
+
+
+
+List of References (Click to expand)
+
+- [Announcements from the Microsoft Fabric Community Conference](https://www.microsoft.com/en-us/microsoft-fabric/blog/2024/03/26/announcements-from-the-microsoft-fabric-community-conference/?msockid=38ec3806873362243e122ce086486339)
+- [FabCon 2025: Fueling tomorrow’s AI with new agentic capabilities and security innovations in Fabric](https://www.microsoft.com/en-us/microsoft-fabric/blog/2025/03/31/fabcon-2025-fueling-tomorrows-ai-with-new-agentic-capabilities-and-security-innovations-in-fabric/)
+- [OneLake security overview](https://learn.microsoft.com/en-us/fabric/onelake/security/get-started-security)
+- [Best practices for OneLake security](https://learn.microsoft.com/en-us/fabric/onelake/security/best-practices-secure-data-in-onelake)
+- [What is Mirroring in Fabric?](https://learn.microsoft.com/en-us/fabric/database/mirrored-database/overview)
+- [Introducing Autoscale Billing for Spark in Microsoft Fabric](https://blog.fabric.microsoft.com/en-US/blog/introducing-autoscale-billing-for-data-engineering-in-microsoft-fabric/)
+- [Understand the metrics app Autoscale compute for Spark page](https://learn.microsoft.com/en-us/fabric/enterprise/metrics-app-feature-autoscale-page)
+- [Billing and utilization reporting for Apache Spark in Microsoft Fabric](https://learn.microsoft.com/en-us/fabric/data-engineering/billing-capacity-management-for-spark)
+- [Overview of Copilot for Data Science and Data Engineering (preview)](https://learn.microsoft.com/en-us/fabric/data-engineering/copilot-notebooks-overview)
+- [Transform and enrich data seamlessly with AI functions (Preview)](https://learn.microsoft.com/en-us/fabric/data-science/ai-functions/overview?tabs=pandas)
+- [Summarize text with the ai.summarize function](https://learn.microsoft.com/en-us/fabric/data-science/ai-functions/summarize?tabs=column-summary)
+
+Table of Contents (Click to expand)
+
+- [Platform Enhancements](#platform-enhancements)
+ - [OneLake Security Private Preview](#onelake-security-private-preview)
+ - [Command Line Interface CLI](#command-line-interface-cli)
+ - [Variable Library](#variable-library)
+- [Data Integration](#data-integration)
+ - [Gateway Solutions](#gateway-solutions)
+- [Data Engineering & Data Science](#data-engineering--data-science)
+  - [Fabric Data Agents (Formerly AI Skills)](#fabric-data-agents-formerly-ai-skills)
+ - [Copilot & AI Features](#copilot--ai-features)
+ - [Autoscale Billing for Spark](#autoscale-billing-for-spark)
+ - [Copilot Improvements](#copilot-improvements)
+ - [AI Functions](#ai-functions)
+- [Data Warehouse](#data-warehouse)
+- [Real-Time Intelligence](#real-time-intelligence)
+- [Power BI](#power-bi)
+
+
+- **Command Line Interface (CLI)**: Execute commands across Fabric using terminal prompts or pre-written scripts.
+- **Variable Library**: Efficiently manage variables at a workspace level.
+
+> Setting Up OneLake Security:
+
+1. **Define Security Roles**: Create roles within OneLake to manage access to data tables or folders. These roles can include Admin, Member, Contributor, and Viewer, each with different levels of access.
+2. **Assign Permissions**: Assign permissions at the workspace level to control access to all items within that workspace. Use the sharing feature to grant users direct access to specific items if they do not need access to the entire workspace.
+3. **Configure Item Permissions**: Use the Manage Permissions page to add or remove individual item permissions for users or groups. This allows for fine-grained control over who can access specific data items.
+4. **Set Up Compute Permissions**: Grant data access through the SQL compute engine by restricting access to specific tables and schemas. Implement row and column-level security to further refine access controls (see the sketch after these steps).
+5. **Implement User Identity Mode**: Enable user identity mode for SQL Analytics Endpoints to ensure that security policies are enforced based on user roles and permissions.
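+
+For step 4, here is a hedged sketch of row-level security using the standard T-SQL predicate/policy pattern, sent to the SQL endpoint over ODBC from Python. The server, database, table, and column names are illustrative, and the connection assumes Azure AD interactive sign-in; adapt both to your environment.
+
+```python
+import pyodbc
+
+# Connection details are illustrative; copy the real SQL endpoint address
+# from the item's settings in the Fabric portal.
+conn = pyodbc.connect(
+    "Driver={ODBC Driver 18 for SQL Server};"
+    "Server=myendpoint.datawarehouse.fabric.microsoft.com;"
+    "Database=SalesLakehouse;"
+    "Authentication=ActiveDirectoryInteractive;"
+)
+cur = conn.cursor()
+
+# Standard T-SQL row-level security pattern: a predicate function plus a
+# security policy that filters dbo.Orders to rows owned by the caller.
+cur.execute("CREATE SCHEMA Security;")
+cur.execute("""
+CREATE FUNCTION Security.fn_filter_by_rep(@SalesRep AS varchar(128))
+RETURNS TABLE
+WITH SCHEMABINDING
+AS
+RETURN SELECT 1 AS allowed WHERE @SalesRep = USER_NAME();
+""")
+cur.execute("""
+CREATE SECURITY POLICY Security.SalesFilter
+ADD FILTER PREDICATE Security.fn_filter_by_rep(SalesRep)
+ON dbo.Orders
+WITH (STATE = ON);
+""")
+conn.commit()
+```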
+
+> Best Practices:
+
+1. **Least Privilege Access**: Assign permissions at the appropriate level to ensure that users only have access to the data necessary for their tasks. Avoid over-provisioning to reduce security risks.
+2. **Secure by Workload**: Grant users access to specific data workloads through item permissions, compute permissions, and OneLake data access roles. Configure access to the least privileged level required for the user's job.
+3. **Use Data Access Roles**: Use OneLake data access roles to control access to folders and tables within a lakehouse. This allows for selective access to specific items in a lakehouse.
+4. **Monitor and Audit Access**: Regularly review and audit access permissions to ensure compliance with security policies. Adjust permissions as needed to maintain a secure environment.
+
+### OneLake Security (Private Preview)
+
+> Here are the key aspects:
+
+- **Granular Security Definitions**: OneLake Security enables defining `row and column-level` security directly within OneLake, ensuring consistent security rules across different Fabric engines.
+- **Unified Security Management**: With OneLake Security, `roles can be created to grant access to specific data tables or folders`. These roles can further restrict access using row or column-level security.
+- **Automatic Enforcement**: `Security roles` defined in OneLake `are automatically enforced when querying through Spark notebooks, SQL Analytics Endpoints, and Power BI in Direct Lake mode`.
+- **User Identity Mode**: SQL Analytics Endpoints now support a `user identity` mode, making OneLake Security the source of truth for all access through SQL endpoints. In this mode, access and permissions are evaluated per individual user rather than at a broader level (such as an application or service identity).
+- **Early Access**: Customers can [sign up for early access](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR_BIbobSVbtGoFFUDM3gfGJUNlBWWVpMNDU5NzY5U1NBQVFHOUJPNE5CNS4u) to provide feedback and experience these capabilities before the broad public preview.
+
+### Command Line Interface (CLI)
+
+> The Microsoft Fabric Command Line Interface (CLI), also known as `fabric-cli` or `fab`, provides a powerful tool for executing commands across the Fabric platform using terminal prompts or pre-written scripts. Key features include:
+
+- **Installation and Authentication**: The CLI can be installed using `pip` and supports different `authentication methods, including interactive, service principal, and managed identity authentication`.
+- **Command Execution**: Users can execute a wide range of commands to manage Fabric resources, deploy applications, and automate workflows.
+- **Interactive Mode**: The CLI supports an interactive mode, allowing users to execute commands in a more user-friendly environment.
+- **Integration with CI/CD**: Fabric CLI can be integrated with `Azure DevOps pipelines and GitHub Actions for automated deployments and robust CI/CD pipelines`.
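+
+As a sketch of the "pre-written scripts" case: after `pip install ms-fabric-cli`, the CLI can be driven from Python via `subprocess`. The subcommands below (`auth login`, `ls`) are assumptions based on the CLI's documented surface; verify them with `fab --help` before relying on them.
+
+```python
+import subprocess
+
+def fab(*args: str) -> str:
+    """Run a fab command and return its stdout, raising if the command fails."""
+    result = subprocess.run(["fab", *args], capture_output=True, text=True, check=True)
+    return result.stdout
+
+# Interactive sign-in (service principal and managed identity are also supported);
+# run without capturing output so the sign-in prompt is visible.
+subprocess.run(["fab", "auth", "login"], check=True)
+
+print(fab("ls"))  # list items visible to the signed-in identity
+```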
+
+### Variable Library
+
+> The Variable Library allows users to define and manage variables at the workspace level, enhancing efficiency and consistency across various workspace items.
+
+- **Workspace-Level Management**: `Variables can be defined and managed at the workspace level`, making them accessible `across different workspace items such as data pipelines, notebooks, and lakehouse shortcuts`.
+- **Enhanced Data Pipelines**: The Variable Library is `already available for use in data pipelines, allowing for more streamlined and efficient data processing`.
+- **Future Integrations**: Plans are in place to `expand the use of the Variable Library to other areas within the Fabric platform, further enhancing its utility`.
+
+ https://github.com/user-attachments/assets/3ee8eee2-cd0b-4575-aacc-5601c0ab49e2
+
+## Data Integration
+
+| **Category** | **Details** |
+|-----------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **Data Integration** | **Database Mirroring**: Supports sources behind firewalls such as Azure SQL Database and Snowflake, with Azure SQL MI coming soon. On-prem SQL Server, Oracle, and Dataverse support is also on the way.|
+
+> Database mirroring enables seamless data replication and high availability for critical workloads.
+
+| **Source** | **Support Status** | **Connectivity** | **Technical Details**|
+|-----------------------------------|--------------------|----------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| **Azure SQL Database** | Current Support | On-Premises Data Gateway or VNET Data Gateway for secure and efficient data movement | - **Mirroring Mechanism**: Utilizes SQL’s Change Data Capture (CDC) stack optimized for lake-centric architecture. CDC stores changes locally in the database, while mirroring reads data from the transaction log and publishes it to OneLake. <br> - **Data Format**: Converts data to Parquet format for analytics. <br> - **Features**: Supports DDL (Data Definition Language) operations like Alter/Drop/Rename tables/columns, and Truncate tables while mirroring is active. <br> - **APIs**: Programmatic APIs available for setup and management. |
+| **Snowflake** | Current Support | On-Premises Data Gateway or VNET Data Gateway for secure and efficient data movement | - **Mirroring Mechanism**: Similar to Azure SQL Database, leveraging secure gateways for data replication. <br> - **Data Format**: Ensures data is analytics-ready in OneLake. |
+| **Azure SQL Managed Instance (MI)** | Coming Soon | Will use On-Premises Data Gateway or VNET Data Gateway for secure connectivity | - **Mirroring Mechanism**: Expected to follow the same CDC-based approach as Azure SQL Database. <br> - **Data Format**: Will convert data to Parquet format for seamless integration. |
+| **On-Premises SQL Server, Oracle, and Dataverse** | Future Support | Will follow secure connectivity protocols using On-Premises Data Gateway | - **Mirroring Mechanism**: Anticipated to use similar CDC-based replication. <br> - **Data Format**: Data will be converted to Parquet format for analytics. |
+
+> Installation Steps:
+Azure SQL Database
+
+1. **Enable System Assigned Managed Identity (SAMI)**:
+ - In the [Azure portal](https://portal.azure.com/), navigate to your Azure SQL logical server.
+ - Enable System Assigned Managed Identity (SAMI) with a single step.
+
+
+
+2. **Configure Mirroring**:
+ - Go to the [Fabric workspace](https://app.fabric.microsoft.com) and select the ⚙️.
+ - Choose `Manage connection and gateways` to configure the connection to your Azure SQL Database.
+
+
+
+3. **Set Up Gateway**:
+ - Deploy either the On-Premises Data Gateway or VNET Data Gateway depending on your network setup.
+
+
+
+ - Configure the gateway to securely connect to your Azure SQL Database.
+
+
+
+4. **Initial Replication**:
+ - Start the mirroring process. The initial replication time depends on the size of the data being brought in. Click on `New item`, search for `Mirrored`:
+
+
+
+ - Data is stored in a landing zone in OneLake, improving performance when converting files into delta verti-parquet.
+
+5. **Monitor Replication**:
+ - Use Dynamic Management Views (DMVs) and stored procedures to validate configuration and monitor replication status.
+   - Execute queries like `SELECT * FROM sys.dm_change_feed_log_scan_sessions` to check if changes are properly flowing (see the sketch after these steps).
+
+6. **Manage Connections**:
+ - Regularly check and manage connections through the Fabric workspace settings.
+ - Ensure compliance with security protocols and monitor data transfer activities.
+
+7. **Troubleshooting**: If experiencing mirroring problems, perform database-level checks and contact support if needed.
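+
+Here is a hedged sketch of step 5, polling the change-feed DMV from Python against the source Azure SQL Database; the server and database names are illustrative.
+
+```python
+import pyodbc
+
+# Illustrative connection to the source Azure SQL Database (not the mirror).
+conn = pyodbc.connect(
+    "Driver={ODBC Driver 18 for SQL Server};"
+    "Server=myserver.database.windows.net;"
+    "Database=SalesDb;"
+    "Authentication=ActiveDirectoryInteractive;"
+)
+
+# Non-empty, advancing sessions indicate that changes are flowing to OneLake.
+for row in conn.execute("SELECT * FROM sys.dm_change_feed_log_scan_sessions"):
+    print(row)
+```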
+
+
+Snowflake
+
+1. **Configure Snowflake Account**: Ensure your Snowflake account is set up and accessible.
+
+2. **Set Up Gateway**:
+ - Deploy either the On-Premises Data Gateway or VNET Data Gateway depending on your network setup.
+ - Configure the gateway to securely connect to your Snowflake account.
+
+3. **Configure Mirroring**:
+ - Go to the Fabric workspace and select the ⚙️.
+ - Choose `Manage connection and gateways` to configure the connection to your Snowflake account.
+
+
+
+4. **Initial Replication**:
+ - Start the mirroring process. The initial replication time depends on the size of the data being brought in.
+ - Data is stored in a landing zone in OneLake, improving performance when converting files into delta verti-parquet.
+
+5. **Monitor Replication**: Use Snowflake's monitoring tools to validate configuration and monitor replication status.
+
+6. **Manage Connections**:
+ - Regularly check and manage connections through the Fabric workspace settings.
+ - Ensure compliance with security protocols and monitor data transfer activities.
+
+7. **Troubleshooting**: If experiencing mirroring problems, perform account-level checks and contact support if needed.
+
+
+Azure SQL Managed Instance (MI)
+
+1. **Enable System Assigned Managed Identity (SAMI)**:
+ - In the Azure portal, navigate to your Azure SQL Managed Instance.
+ - Enable System Assigned Managed Identity (SAMI) with a single step.
+
+2. **Configure Mirroring**:
+ - Go to the Fabric workspace and select the ⚙️.
+ - Choose `Manage connection and gateways` to configure the connection to your Azure SQL Managed Instance.
+
+
+
+3. **Set Up Gateway**:
+ - Deploy either the On-Premises Data Gateway or VNET Data Gateway depending on your network setup.
+ - Configure the gateway to securely connect to your Azure SQL Managed Instance.
+
+4. **Initial Replication**:
+ - Start the mirroring process. The initial replication time depends on the size of the data being brought in.
+ - Data is stored in a landing zone in OneLake, improving performance when converting files into delta verti-parquet.
+
+5. **Monitor Replication**:
+ - Use Dynamic Management Views (DMVs) and stored procedures to validate configuration and monitor replication status.
+ - Execute queries like `SELECT * FROM sys.dm_change_feed_log_scan_sessions` to check if changes are properly flowing.
+
+6. **Manage Connections**:
+ - Regularly check and manage connections through the Fabric workspace settings.
+ - Ensure compliance with security protocols and monitor data transfer activities.
+
+7. **Troubleshooting**: If experiencing mirroring problems, perform instance-level checks and contact support if needed.
+
+
+- **Copilot & AI Features**: Available with all paid SKUs, starting with F2.
+- **Autoscale Billing for Spark**: Serverless, pay-as-you-go billing that no longer consumes your Fabric capacity when enabled.
+- **Copilot Improvements**: Copilot is now pre-installed, supports data-specific queries, and tracks historic interactions.
+- **AI Functions**: Apply LLM models for summarization, text generation, classification, sentiment analysis, and translation (see the sketch below).
+- **Row-Level & Column-Level Security in Spark**.
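+
+Below is a hedged sketch of the AI functions preview in a Fabric notebook, assuming the pandas `.ai` accessor described in the AI functions documentation (the preview may also require the setup cell from that doc). Column names are illustrative.
+
+```python
+import pandas as pd
+
+reviews = pd.DataFrame({
+    "review": [
+        "The packaging was damaged but the product works perfectly.",
+        "Terrible battery life; returned it after two days.",
+    ]
+})
+
+# Each call runs the column through the model configured for AI functions.
+reviews["summary"] = reviews["review"].ai.summarize()            # per-row summaries
+reviews["sentiment"] = reviews["review"].ai.analyze_sentiment()  # e.g., positive / negative
+reviews["spanish"] = reviews["review"].ai.translate("spanish")   # translation
+print(reviews)
+```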
+
+### Fabric Data Agents (Formerly AI Skills)
+
+> Fabric Data Agents, previously known as AI Skills, are integrated with Azure AI Foundry to enhance AI capabilities. These agents are designed to retrieve, process, and present data from various sources effectively, leveraging specialized query languages.
+
+Data Sources
+
+> Fabric Data Agents can access and integrate data from a wide range of sources, ensuring comprehensive data retrieval and analysis. Here are the specific data sources they can interact with:
+
+- **Lakehouse Data**:
+ - **Description**: A unified data platform that combines the best features of data lakes and data warehouses.
+ - **Usage**: Ideal for storing large volumes of raw data and structured data, enabling advanced analytics and machine learning.
+
+- **Warehouse Data**:
+ - **Description**: Traditional data warehouses store structured data optimized for query performance and reporting.
+ - **Usage**: Suitable for business intelligence, reporting, and data analysis tasks.
+
+- **Power BI Semantic Models**:
+ - **Description**: Semantic models in Power BI provide a structured and meaningful representation of data.
+ - **Usage**: Used for creating interactive reports and dashboards, enabling users to explore data visually.
+
+- **KQL Databases**:
+ - **Description**: Databases that use Kusto Query Language (KQL) for querying large datasets.
+ - **Usage**: Commonly used in log analytics, monitoring, and telemetry data analysis.
+
+Query Languages
+
+> Fabric Data Agents utilize specialized query languages to handle complex queries and data manipulations efficiently:
+
+- **SQL (Structured Query Language)**:
+ - **Usage**: Widely used for managing and querying relational databases.
+ - **Capabilities**: Supports data retrieval, insertion, updating, and deletion operations.
+
+- **KQL (Kusto Query Language)**:
+ - **Usage**: Designed for querying large datasets, particularly in log analytics and telemetry data.
+ - **Capabilities**: Provides powerful data exploration and analysis features.
+
+- **DAX (Data Analysis Expressions)**:
+ - **Usage**: Used in Power BI, Excel, and SQL Server Analysis Services for data modeling and analysis.
+ - **Capabilities**: Enables the creation of calculated columns, measures, and custom aggregations.
+
+Custom Conversational AI Agents
+
+> Organizations can create custom conversational AI agents leveraging domain expertise. These agents provide real-time insights and enhance data-driven decision-making processes. Key features include:
+
+- **Domain Expertise Integration**: Custom agents can be tailored to specific industry needs, incorporating domain-specific knowledge.
+- **Real-Time Insights**: Agents can analyze data in real-time, providing immediate insights and recommendations.
+- **Enhanced Decision-Making**: By leveraging AI capabilities, organizations can make informed decisions based on comprehensive data analysis.
+