
Tags: Analytics, Aviation, Microsoft Fabric

Bringing Real‑Time Intelligence to Airport Data Streams (Type-B Messages) – Part 2: Setting It Up in Microsoft Fabric

In Part 1, I covered the architecture, the event model, and why airport baggage tracking makes such a compelling real-time analytics scenario. In this post, I want to get hands-on and walk through actually setting this up inside Microsoft Fabric — from creating the workspace artifacts all the way through to running the simulator and querying live events.

Let’s get into it.

Prerequisites

Before you start, you’ll need:

  • Microsoft Fabric workspace with the Real-Time Intelligence workload enabled (F2 or higher capacity, or a Fabric trial)
  • An Azure Event Hubs namespace with a hub (or you can use Fabric’s built-in custom endpoint in Eventstream — more on that below)
  • The Baggage Handling Simulator cloned locally, and Python 3.10+ installed

If you want to follow along with Azure Event Hubs as the source, a Basic-tier namespace is fine for dev/test. If you’re going direct to Fabric Eventstream, you won’t need Event Hubs at all.

Architecture Overview

Step 1: Create the Fabric Workspace Artifacts

Start by creating a new workspace if you don’t already have one. I created one called “Airport Operations Demo”. Open up your Fabric workspace, and you’ll need to create four main artifacts:

  1. Eventhouse (your KQL database)
  2. Eventstream (ingestion and routing)
  3. KQL Queryset (ad-hoc queries and saved analytics)
  4. Real-Time Dashboard (live visualization)

Create the Eventhouse

From the Fabric workspace, click New → Eventhouse. Give it a name like Airport-Eventhouse. This creates an Eventhouse and a default KQL database inside it.

Once provisioned, open the Eventhouse and navigate to the KQL database. This is where we’ll define the table schemas.

Step 2: Create the Table Schema

We’ll use a Bronze / Silver / Gold medallion structure inside the KQL database. Bronze tables hold raw events exactly as ingested. Silver tables hold cleaned and enriched data. Gold tables hold pre-aggregated views.

NOTE: Due to the number of schema objects, I’m only showing a subset in each section below. The full set of KQL scripts to apply is here: https://github.com/calloncampbell/BaggageHandling-TypeB-Simulator/blob/main/kql

Open the Explore your data pane in the KQL database and run the following to create the Bronze tables:

// Bronze: raw airport events
.create table airport_events (
    id: string,
    source: string,
    specversion: string,
    type: string,
    datacontenttype: string,
    dataschema: string,
    subject: string,
    ['time']: datetime,
    data: dynamic,
    seriesclock: datetime
)

.alter table airport_events policy streamingingestion enable

// Bronze: raw flight operational events
.create table typeb_flight_events (
    id: string,
    source: string,
    specversion: string,
    type: string,
    datacontenttype: string,
    dataschema: string,
    subject: string,
    ['time']: datetime,
    data: dynamic,
    seriesclock: datetime
)

.alter table typeb_flight_events policy streamingingestion enable

// Bronze: event tables
.create table ['Airport.Passenger.Checkin_v1'] (
    ___id: string,
    ___source: string,
    ___type: string,
    ___time: datetime,
    ___subject: string,
    flightId: string,
    flightNumber: string,
    airline: string,
    origin: string,
    destination: string,
    departureUtc: string,
    paxId: string,
    name: string
)
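For reference, here’s a minimal sketch of the CloudEvents envelope these Bronze tables are shaped for. All field values below are invented for illustration; they’re not output from the simulator:

```python
import json

# Illustrative CloudEvents 1.0 envelope matching the airport_events schema.
# Every value here is a made-up example.
event = {
    "id": "6f1c9d2e-0000-0000-0000-000000000000",
    "source": "/airport/checkin/desk-12",
    "specversion": "1.0",
    "type": "Airport.Passenger.Checkin",
    "datacontenttype": "application/json",
    "dataschema": "",
    "subject": "PAX-000123",
    "time": "2025-01-01T08:15:00Z",
    "data": {
        "flightId": "FL-0001",
        "flightNumber": "AC123",
        "airline": "AC",
        "origin": "YYZ",
        "destination": "YVR",
        "departureUtc": "2025-01-01T12:00:00Z",
        "paxId": "PAX-000123",
        "name": "Jane Passenger",
    },
}

# The top-level envelope fields land in named columns; the `data` object
# lands in the dynamic column as-is.
payload = json.dumps(event)
```

The `id`, `source`, `type`, `time`, and so on map to the envelope columns, while everything event-specific stays inside the dynamic `data` column until the Silver layer projects it out.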

Ingestion Mapping

When events arrive as JSON via Eventstream, you’ll want an ingestion mapping so the raw JSON gets parsed correctly into the table columns:

// Ingestion mapping for airport events
.create-or-alter table airport_events ingestion json mapping "airport_events_mapping"
```
[
    { "column": "id",              "path": "$.id",              "datatype": "string",   "transform": null },
    { "column": "source",          "path": "$.source",          "datatype": "string",   "transform": null },
    { "column": "specversion",     "path": "$.specversion",     "datatype": "string",   "transform": null },
    { "column": "type",            "path": "$.type",            "datatype": "string",   "transform": null },
    { "column": "datacontenttype", "path": "$.datacontenttype", "datatype": "string",   "transform": null },
    { "column": "dataschema",      "path": "$.dataschema",      "datatype": "string",   "transform": null },
    { "column": "subject",         "path": "$.subject",         "datatype": "string",   "transform": null },
    { "column": "time",            "path": "$.time",            "datatype": "datetime", "transform": null },
    { "column": "data",            "path": "$.data",            "datatype": "dynamic",  "transform": null }
]
```

// Ingestion mapping for flight events
.create-or-alter table typeb_flight_events ingestion json mapping "typeb_flight_events_mapping"
```
[
    { "column": "id",              "path": "$.id",              "datatype": "string",   "transform": null },
    { "column": "source",          "path": "$.source",          "datatype": "string",   "transform": null },
    { "column": "specversion",     "path": "$.specversion",     "datatype": "string",   "transform": null },
    { "column": "type",            "path": "$.type",            "datatype": "string",   "transform": null },
    { "column": "datacontenttype", "path": "$.datacontenttype", "datatype": "string",   "transform": null },
    { "column": "dataschema",      "path": "$.dataschema",      "datatype": "string",   "transform": null },
    { "column": "subject",         "path": "$.subject",         "datatype": "string",   "transform": null },
    { "column": "time",            "path": "$.time",            "datatype": "datetime", "transform": null },
    { "column": "data",            "path": "$.data",            "datatype": "dynamic",  "transform": null }
]
```

Silver Layer via Update Policy

Rather than running a scheduled job to promote Bronze to Silver, Update Policies do this automatically at ingest time. Define a function that transforms the raw event, then attach it as a policy. In the following example I’ve embedded the KQL query directly in the policy; you could instead create a KQL function for the query and reference it here.

.alter table ['Airport.Passenger.Checkin_v1'] policy update
```
[
    {
        "IsEnabled": true,
        "Source": "airport_events",
        "Query": "let bags = airport_events
            | where type == 'Airport.Passenger.Checkin' and isnull(array_length(data)) == true;
            let arrays = airport_events
            | where type == 'Airport.Passenger.Checkin' and isnull(array_length(data)) == false
            | mv-expand data;
            bags
            | union arrays
            | project
                ___id = tostring(id),
                ___source = tostring(source),
                ___type = tostring(type),
                ___time = todatetime(['time']),
                ___subject = tostring(subject),
                flightId = tostring(data.flightId),
                flightNumber = tostring(data.flightNumber),
                airline = tostring(data.airline),
                origin = tostring(data.origin),
                destination = tostring(data.destination),
                departureUtc = tostring(data.departureUtc),
                paxId = tostring(data.paxId),
                name = tostring(data.name)",
        "IsTransactional": false,
        "PropagateIngestionProperties": false
    }
]
```
// Legacy table name aliases (functions for backward compatibility)
.create-or-alter function airport_passenger_checkin() {
['Airport.Passenger.Checkin_v1']
}
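To see what the policy is doing, here’s a rough Python sketch of the same transformation: events whose `data` is a single object pass straight through, events whose `data` is an array are expanded one row per element (the `mv-expand` step), and then a subset of fields is projected. The function name and the reduced field list are illustrative:

```python
# Rough Python equivalent of the Bronze -> Silver update policy above.
# Only a subset of the projected columns is shown.
def to_silver(events):
    rows = []
    for ev in events:
        data = ev.get("data")
        # mv-expand equivalent: treat a single object as a one-element array
        items = data if isinstance(data, list) else [data]
        for d in items:
            rows.append({
                "___id": ev["id"],
                "___type": ev["type"],
                "___time": ev["time"],
                "flightNumber": d.get("flightNumber"),
                "paxId": d.get("paxId"),
                "name": d.get("name"),
            })
    return rows

# Two sample Bronze events: one with a single payload, one with an array.
bronze = [
    {"id": "1", "type": "Airport.Passenger.Checkin", "time": "2025-01-01T08:00:00Z",
     "data": {"flightNumber": "AC123", "paxId": "P1", "name": "A"}},
    {"id": "2", "type": "Airport.Passenger.Checkin", "time": "2025-01-01T08:01:00Z",
     "data": [{"flightNumber": "AC123", "paxId": "P2", "name": "B"},
              {"flightNumber": "AC123", "paxId": "P3", "name": "C"}]},
]

silver = to_silver(bronze)  # 3 rows: one direct, two expanded from the array
```

The `isnull(array_length(data))` check in the KQL plays the same role as the `isinstance(data, list)` branch here.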

Every event that lands in the Bronze table (airport_events) is automatically transformed and inserted into the Silver table (['Airport.Passenger.Checkin_v1']) — no pipelines, no orchestration.

Gold Layer via Materialized Views

Materialized Views pre-aggregate data so your dashboard queries are fast, even against billions of rows. Here is a useful example for this use case:

// Gold: latest status per flight
.create materialized-view flights_current on table flights {
flights
| summarize arg_max(updated, *) by flightId
}
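Conceptually, `arg_max(updated, *)` keeps the most recent row per group. A small Python sketch of the same idea (the sample rows and field names are invented for illustration):

```python
# Rough equivalent of `summarize arg_max(updated, *) by flightId`:
# for each flight, keep the row with the latest 'updated' timestamp.
def latest_per_flight(rows):
    latest = {}
    for row in rows:
        cur = latest.get(row["flightId"])
        if cur is None or row["updated"] > cur["updated"]:
            latest[row["flightId"]] = row
    return latest

# ISO-8601 UTC strings compare correctly as plain strings.
rows = [
    {"flightId": "FL-1", "updated": "2025-01-01T10:00:00Z", "status": "Boarding"},
    {"flightId": "FL-1", "updated": "2025-01-01T10:30:00Z", "status": "Departed"},
    {"flightId": "FL-2", "updated": "2025-01-01T09:45:00Z", "status": "Closed"},
]

current = latest_per_flight(rows)  # FL-1 -> Departed, FL-2 -> Closed
```

The difference, of course, is that the materialized view maintains this result incrementally as events arrive, so the dashboard never pays the cost of the full scan.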

When we’re done, we should see our database schema of tables, materialized views, and functions:

Step 3: Configure the Eventstream

Now wire up the ingestion pipeline. In your Fabric workspace, click New → Eventstream and name it evs-airport-events.

Option A: Azure Event Hubs as Source (what I’m doing)

If you’re publishing events from the Baggage Handling Simulator to Azure Event Hubs:

  1. In the Eventstream canvas, click Add source → Azure Event Hubs
  2. Enter your Event Hubs namespace, hub name, and connection string (or use a Fabric connection)
  3. Set the consumer group — use $Default for dev/test
  4. Set the data format — use json

Option B: Custom Endpoint (no Event Hubs needed)

Fabric Eventstream also exposes a custom endpoint — an HTTPS or AMQP ingest URL you can publish CloudEvents directly to, without needing an external Event Hubs namespace. This is great for demos and local testing.

  1. Click Add source → Custom endpoint
  2. Copy the connection string — you’ll use this in the simulator config

Add the Eventhouse Destination

  1. Click Add destination → Eventhouse
  2. Select the Airport-Eventhouse Eventhouse and the KQL database
  3. Select the airport_events table and the airport_events_mapping ingestion mapping

Once everything is set up, this is what we should see:

One of the nice features in Fabric Eventhouse that we don’t see in Azure Data Explorer is the Entity Diagram, which is currently in preview. Go back to your KQL database main view and click on the entity diagram button:

Step 4: Run the Baggage Handling Simulator

Clone the simulator and install dependencies:

git clone https://github.com/calloncampbell/BaggageHandling-TypeB-Simulator.git
cd BaggageHandling-TypeB-Simulator
pip install -r requirements.txt

Configure the connection string for your Event Hubs namespace or Fabric custom endpoint in config.json (or as environment variables — check the repo README for the exact format).

Run the simulator:

python -m baggage_simulator.cli --verbose --clock-speed 120 --flight-interval-minutes 5 --max-active-flights 5 --eventhub-conn "**********************" --eventhub-name "airport-events-evh"

The --clock-speed multiplier lets you fast-forward simulation time so you don’t have to wait hours for bags to travel through their lifecycle. With --clock-speed 120, a full flight’s baggage cycle completes in minutes.
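To make the math concrete, here’s a back-of-envelope sketch; the 240-minute lifecycle duration is an assumed example, not a number from the simulator:

```python
# Wall-clock time needed to play back a simulated duration at a given
# clock-speed multiplier. The 240-minute lifecycle is an assumed example.
def wall_clock_minutes(simulated_minutes, clock_speed):
    return simulated_minutes / clock_speed

# A 4-hour (240-minute) check-in-to-delivery cycle at 120x
# takes 2 minutes of real time.
demo_minutes = wall_clock_minutes(240, 120)
```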

Within seconds, you should see events flowing into the Eventstream and landing in the Eventhouse Bronze tables.

Step 5: Query the Data

Open a KQL Queryset in your workspace and start exploring. Here are a few queries I find useful:

Track a specific bag end-to-end

BaggageEvents_Silver
| where BagTagNumber == "0014567891234"
| order by Timestamp asc
| project Timestamp, EventCategory, Location, FlightNumber

Find bags that haven’t been delivered (potential mishandles)

BagLatestStatus
| where EventCategory != "Delivered" and EventCategory != "Lost"
| where Timestamp < ago(2h)
| project BagTagNumber, FlightNumber, EventCategory, Location, Timestamp
| order by Timestamp asc

Baggage throughput by event type over the last hour

BaggageEventsByHour
| where Timestamp > ago(1h)
| summarize Total = sum(EventCount) by EventType
| order by Total desc
| render barchart

Average bag journey time (check-in to delivery) per flight

let CheckIn = BaggageEvents_Silver | where EventCategory == "CheckedIn" | project BagTagNumber, CheckInTime = Timestamp;
let Delivered = BaggageEvents_Silver | where EventCategory == "Delivered" | project BagTagNumber, DeliveredTime = Timestamp;
CheckIn
| join kind=inner Delivered on BagTagNumber
| extend JourneyMinutes = datetime_diff('minute', DeliveredTime, CheckInTime)
| summarize AvgJourneyMinutes = avg(JourneyMinutes), BagCount = count() by bin(CheckInTime, 1h)
| order by CheckInTime desc

Step 6: Build the Real-Time Dashboard

Create a Real-Time Dashboard in your workspace. Add tiles by writing KQL queries directly in the dashboard editor — no separate report tool needed.

You can always view the query for each of the tiles by clicking on the … menu and then view query:

Useful tiles for this scenario:

  • Bags in-flight right now — count of bags with a status other than Delivered or Lost
  • Events per minute — a time chart showing ingestion rate
  • Lost or rejected bags — a table filtered to EventCategory in ("Lost", "Rejected")
  • Baggage throughput by hour — bar or area chart of event volume over time

Set the dashboard auto-refresh to 30 seconds for a live operations feel.

Step 7: Set Up an Activator Alert

Activator is Fabric’s alerting engine, and this is where things get genuinely useful. You can define a rule that watches a KQL query result and triggers an action when a condition is met.

From the Real-Time Dashboard, click Set alert on the “Lost or rejected bags” tile. Configure:

  • Condition: row count > 0
  • Action: send an email, Teams message, or trigger a Power Automate flow
  • Check frequency: every 5 minutes

You can also create Activator items directly from the Eventhouse using Data Activator and write your own detection query — useful for more complex conditions like “bag hasn’t progressed in 45 minutes”:

BagLatestStatus
| where EventCategory !in ("Delivered", "Lost")
| where Timestamp < ago(45m)

Putting It All Together

Here’s the full flow end-to-end:

  1. Simulator generates CloudEvents and publishes to Event Hubs / Eventstream endpoint
  2. Eventstream ingests, routes by event type, and writes to Bronze tables in Eventhouse
  3. Update Policies automatically promote Bronze → Silver on ingest
  4. Materialized Views continuously aggregate Silver → Gold
  5. KQL Querysets power ad-hoc investigation
  6. Real-Time Dashboard shows live operations at a glance
  7. Activator fires alerts when something goes wrong

The whole pipeline is serverless from the Eventstream inward — there’s no infrastructure to manage, no Spark jobs to schedule, and no orchestration to babysit.

That’s what I find most impressive about Fabric RTI for this kind of scenario. The time from “event published” to “insight on a dashboard with an alert configured” is measured in minutes, not weeks.

Enjoy!


Tags: Analytics, Microsoft Fabric

Bringing Real‑Time Intelligence to Airport Data Streams (Type-B Messages) – Part 1

If you’ve ever wondered what happens to your bag after you drop it off at the check-in counter, you’re not alone. There’s an entire world of events firing beneath the surface of every airport, and it turns out it makes for a pretty compelling real-time data scenario.

I recently put together a use case walkthrough on Bringing Real-Time Intelligence to Airport Data Streams using Microsoft Fabric. In this post, I want to break down the architecture, explain the data model, and show how you can build a real-time observability pipeline over something as relatable as baggage tracking.

Why Airports?

Airports are a great analogy for event-driven systems because every action generates a traceable event:

  • You book a flight → event
  • You check in → event
  • You drop off your bag → event

And that bag doesn’t just teleport to the carousel. It travels through a complex network of baggage belts, ramps, weigh stations, scanners, and machinery. It gets loaded onto the plane, unloaded at the destination, sent through customs (maybe), and eventually delivered to the baggage belt — or it gets lost.

Every step is a state change. Every state change is an opportunity to capture data and act on it in real time.

The Event Model

For this use case, I modelled three categories of events published to the stream:

Baggage Events

  • Airport.Baggage.CheckedIn
  • Airport.Baggage.Screened
  • Airport.Baggage.Inspected
  • Airport.Baggage.Rejected
  • Airport.Baggage.Loaded
  • Airport.Baggage.Unloaded
  • Airport.Baggage.CustomsCleared
  • Airport.Baggage.Withheld
  • Airport.Baggage.ArrivedAtBelt
  • Airport.Baggage.Delivered
  • Airport.Baggage.Lost

Flight Operational Events

  • Airport.Flight.Closed
  • Airport.Flight.Departed
  • Airport.Flight.Arrived

Passenger Events

  • Airport.Passenger.CheckedIn

This structure follows a clean domain-driven naming convention that maps naturally to a topic-per-domain strategy in Event Hubs or Fabric Eventstream.
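The naming convention makes routing mechanical. As a sketch, a hypothetical helper could derive a target stream name from the domain segment of the event type (the topic naming scheme here is invented for illustration):

```python
# Derive a topic/stream name from the second segment of the event type,
# e.g. Airport.Baggage.CheckedIn -> 'baggage'. The naming scheme is an
# illustrative assumption, not part of the simulator.
def topic_for(event_type):
    domain = event_type.split(".")[1].lower()
    return f"airport-{domain}-events"

baggage_topic = topic_for("Airport.Baggage.CheckedIn")    # airport-baggage-events
flight_topic = topic_for("Airport.Flight.Departed")       # airport-flight-events
passenger_topic = topic_for("Airport.Passenger.CheckedIn")
```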

Real Airports Use Type-B Messages

In the real world, airports and airlines don’t talk to each other over REST APIs or CloudEvents — they use Aviation Type-B messages, a fixed-format ASCII text messaging standard that’s been in use since the 1960s. These messages are transmitted over dedicated aviation networks operated by SITA (Société Internationale de Télécommunications Aéronautiques) and ARINC (now part of Collins Aerospace), and they remain the backbone of operational messaging across the global aviation industry today.

The key Type-B message types that map directly to this use case are:

  • BSM (Baggage Source Message): Generated at check-in; carries the bag tag number, passenger details, and routing
  • BTM (Baggage Transfer Message): Used for interline transfer bags moving between airlines
  • BPM (Baggage Processed Message): Confirmation that a bag has been processed at a handling point
  • BUM (Baggage Unload Message): Signals that bags have been removed from an aircraft
  • MVT (Movement Message): Communicates flight departure (AD), arrival (AA), and estimated times (ET)
  • LDM (Load Distribution Message): Describes how cargo and baggage are distributed across the aircraft
  • CPM (Container/Pallet Distribution Message): Details the ULD (Unit Load Device) positioning on the aircraft

IATA Resolution 753

IATA Resolution 753 mandates that airlines track every bag at a minimum of four key touchpoints:

  1. Passenger handover at check-in
  2. Loading onto the aircraft
  3. Delivery to the transfer area (for connecting flights)
  4. Return to the passenger at arrival

Resolution 753 exists because lost and mishandled bags cost the industry hundreds of millions of dollars annually, and real-time tracking directly reduces that. It came into effect in 2018 and drove significant investment in baggage scanning infrastructure and data exchange across airlines and ground handlers.

Bridging Type-B to Modern Streaming

Here’s where it gets interesting from a platform perspective. Type-B messages carry all the right information — they’re just locked inside a legacy fixed-format protocol on a private network. The modernization opportunity is to parse and bridge those messages into a modern event stream.

In practice, that means something like:

  1. A SITA or ARINC Type-B feed gets received by a gateway or middleware layer
  2. Each message is parsed and mapped to a structured event (e.g., a BSM becomes an Airport.Baggage.CheckedIn CloudEvent)
  3. That event is published to Azure Event Hubs or a Fabric Eventstream endpoint
  4. From there, the full Fabric RTI pipeline takes over

The Baggage Handling Simulator in this demo effectively plays the role of that gateway — it generates CloudEvents that mirror what you’d produce by parsing a real Type-B feed. If you were building this for production, the simulator would be replaced by a Type-B parser wired up to a live SITA or ARINC connection.
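To make the bridging step concrete, here’s a toy sketch of the gateway role. The BSM-to-CheckedIn mapping is the one described above, but the input line format and the BPM/BUM mappings are invented for illustration; real Type-B parsing has to follow the IATA fixed-format specifications:

```python
# Toy Type-B -> CloudEvent bridge. The raw input format here is invented
# for illustration only; real BSM/BPM/BUM messages follow the IATA
# fixed-format specs and need a proper parser.
MESSAGE_TYPE_MAP = {
    "BSM": "Airport.Baggage.CheckedIn",  # mapping described in this post
    "BPM": "Airport.Baggage.Screened",   # assumed mapping for illustration
    "BUM": "Airport.Baggage.Unloaded",   # assumed mapping for illustration
}

def bridge(raw):
    # Treat the first token as the message type and the rest as payload.
    msg_type, _, payload = raw.partition(" ")
    return {
        "specversion": "1.0",
        "type": MESSAGE_TYPE_MAP[msg_type],
        "source": "/gateway/typeb",
        "data": {"raw": payload},
    }

bridged = bridge("BSM .N/0014567891234 .F/AC123/YYZ")  # becomes a CheckedIn event
```

In production, the `data` payload would be the fully parsed message fields rather than the raw text, but the shape of the flow is the same: receive, parse, map, publish.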

Architecture Overview

Here is an overview of the architecture in Microsoft Fabric Real-Time Intelligence:

The pipeline follows a standard Ingest → Analyze → Act pattern inside Microsoft Fabric Real-Time Intelligence:

Ingest & Process

Events are published to Azure Event Hubs or directly to a Microsoft Fabric Eventstream endpoint as CloudEvents. The Real-Time Hub acts as the central discovery and governance point for all streaming sources in the workspace.

Eventstream picks up those events and routes them into the Eventhouse — Fabric’s purpose-built KQL database engine optimized for high-throughput, time-series, and log-style workloads.

Inside the Eventhouse, I use a classic Bronze / Silver / Gold medallion layering approach:

  • Bronze — raw ingested events, exactly as received
  • Silver — cleaned and enriched data via Update Policies (KQL-based transformation rules that fire automatically on ingest)
  • Gold — aggregated and pre-computed views via Materialized Views for fast querying

Analyze & Transform

KQL Querysets sit on top of the Eventhouse and let you write ad-hoc and saved queries in Kusto Query Language. KQL is incredibly expressive for time-series data — you can window events, calculate SLAs, detect anomalies, and join across streams with just a few lines.

Visualize & Act

From there, you have a few options:

  • Real-Time Dashboard — a native Fabric dashboard that auto-refreshes on a schedule or on data change, built directly from KQL queries
  • Power BI — for richer semantic model-based reporting or executive dashboards
  • Activator — Fabric’s alerting and automation engine; you can define rules like “if a bag hasn’t moved in 30 minutes, fire an alert”
  • Data Agents — AI-powered agents that can answer natural language questions over your KQL data

Data can also land in OneLake, so it’s available for downstream batch analytics and data science workloads. This is not enabled by default, so it’s something you would need to turn on.

The Baggage Handling Simulator

To drive the demo, I used the Baggage Handling Simulator — a Python CLI built by Clemens Vasters that simulates realistic airport baggage operations. I forked the repository and then adjusted it for Type-B messaging. You can find my fork on GitHub: calloncampbell/BaggageHandling-TypeB-Simulator at feature/type-b-messages

The simulator generates:

  • Baggage tracking events (check-in, screening, loading, unloading, delivery)
  • Passenger events (check-in, boarding)
  • Flight lifecycle events (scheduled, closed, departed, arrived)

Events are published as CloudEvents to either Azure Event Hubs or a Fabric Eventstream endpoint. Flight schedules are also persisted to SQL Server, which gives you a relational anchor to join against your streaming data if needed.

This is a great reference simulator if you want to explore real-time analytics without having to stand up your own IoT or event infrastructure.

Let’s look at the simulator…

Now let’s look at a basic Real-Time Dashboard:

What I Took Away

What I find compelling about this use case is how approachable it is. Most people have been through an airport. Most people have waited anxiously at a baggage belt. That shared experience makes the data model immediately intuitive — and that makes it a great teaching scenario for real-time streaming concepts.

From a Fabric RTI perspective, this use case demonstrates a few things I think are really powerful:

  1. The medallion pattern works in streaming too. Update Policies and Materialized Views give you that Bronze/Silver/Gold structure without a separate transformation job or orchestration layer.
  2. KQL is a first-class citizen. It’s not just a query language — it’s the transformation layer, the alerting layer, and the visualization layer.
  3. Activator closes the loop. Moving from insight to action inside the same platform — without building custom workflows — is genuinely useful.

If you’re interested in exploring Microsoft Fabric Real-Time Intelligence, this airport scenario is a solid and fun way to get started. In Part 2 of this post, I’ll dig into the Fabric Real-Time Intelligence setup.

Enjoy!


Tags: Analytics, Microsoft Fabric

Data Agent conversations with real-time telemetry with Microsoft Fabric RTI

Tags: Analytics, Azure

Kusto’s 10-Year Evolution at Microsoft

Kusto, the internal service driving Microsoft’s telemetry and several key services, recently marked its 10-year milestone. Over the decade, Kusto has evolved significantly, becoming the backbone for crucial applications such as Sentinel, Application Insights, Azure Data Explorer, and more recently, Eventhouse in Microsoft Fabric. This journey highlights its pivotal role in enhancing data processing, monitoring, and analytics across Microsoft’s ecosystem.

This powerful service has continually adapted to meet the growing demands of Microsoft’s internal and external data needs, underscoring its importance in the company’s broader strategy for data management and analysis.

A Dive into Azure Data Explorer’s Origins

Azure Data Explorer (ADX), initially code-named “Kusto,” has a fascinating backstory. In 2014, it began as a grassroots initiative at Microsoft’s Israel R&D center. The team wanted a name that resonated with their mission of exploring vast data oceans, drawing inspiration from oceanographer Jacques Cousteau. Kusto was designed to tackle the challenges of rapid and scalable log and telemetry analytics, much like Cousteau’s deep-sea explorations.

By 2018, ADX was officially unveiled at the Microsoft Ignite conference, evolving into a fully-managed big data analytics platform. It efficiently handles structured, semi-structured (like JSON), and unstructured data (like free-text). With its powerful querying capabilities and minimal latency, ADX allows users to swiftly explore and analyze data. Remembering its oceanic roots, ADX symbolizes a tribute to the spirit of discovery.

Enjoy!


Tags: Azure, Developer, Events

Microsoft Build 2024 Book of News

What is the Book of News? The Microsoft Build 2024 Book of News is your guide to the key news items announced at Build 2024.

As expected there is a lot of focus on Azure and AI, followed by Microsoft 365, Security, Windows, and Edge & Bing. This year the book of news is interactive instead of being a PDF.

Some of my favourite announcements

Azure Cloud Native and Application Platform

Azure Functions

Microsoft Azure Functions is launching several new features to provide more flexibility and extensibility to customers in this era of AI.

Features now in preview include:

  • A Flex Consumption plan that will give customers more flexibility and customization without compromising on available features to run serverless apps.
  • Extension for Microsoft Azure OpenAI Service that will enable customers to easily infuse AI in their apps. Customers will be able to use this extension to build new AI-led apps like retrieval-augmented generation, text completion and chat assistant.
  • Visual Studio Code for the Web will provide a browser-based developer experience to make it easier to get started with Azure Functions. This feature is available for Python, Node and PowerShell apps in the Flex Consumption hosting plan.

Features now generally available include:

  • Azure Functions on Azure Container Apps lets developers use the Azure Container Apps environment to deploy multitype services to a cloud-native solution designed for centralized management and serverless scale.
  • Dapr extension for Azure Functions enables developers to use Dapr’s powerful cloud native building block APIs and a large array of ecosystem components in the native and friendly Azure Functions triggers and bindings programming model. The extension is available to run on Azure Kubernetes Service and Azure Container Apps.

Azure Container Apps

Microsoft Azure Container Apps will include dynamic sessions, in preview, for AI app developers to instantly run large language model (LLM)-generated code or extend/customize software as a service (SaaS) apps in an on-demand, secure sandbox.

Customers will be able to mitigate risks to their security posture, leverage serverless scale for their apps and save months of development work, ongoing configurations and management of compute resources that reduce their cost overhead. Dynamic sessions will provide a fast, sandboxed, ephemeral compute suitable for running untrusted code at scale.

Additional new features, now in preview, include:

  • Support for Java: Java developers will be able to monitor the performance and health of apps with Java metrics such as garbage collection and memory usage.
  • Microsoft .NET Aspire dashboard: With dashboard support for .NET Aspire in Azure Container Apps, developers will be able to access live data about projects and containers in the cloud to evaluate the performance of .NET cloud-native apps and debug errors.

Azure App Service

Microsoft Azure App Service is a cloud platform to quickly build, deploy and run web apps, APIs and other components. These capabilities are now in preview:

  • Sidecar patterns are a way to add extra features to the main app, such as logging, monitoring and caching, without changing the app code. Users will be able to run these features alongside the app, and this is supported for both source code and container-based deployments.
  • WebJobs will be integrated with Azure App Service, which means they will share the same compute resources as the web app to help save costs and ensure consistent performance. WebJobs are background tasks that run on the same server as the web app and can perform various functions, such as sending emails, executing bash scripts and running scheduled jobs.
  • GitHub Copilot skills for Azure Migrate will enable users to ask questions like, “Can I migrate this app to Azure?” or “What changes do I need to make to this code?” to get answers and recommendations from Azure Migrate. GitHub Copilot licenses are sold separately.

These capabilities are now generally available:

  • Automatic scaling continuously adjusts the number of servers that run apps based on a combination of demand and server utilization, without any code or complex scaling configurations. This helps users handle dynamically changing site traffic without over-provisioning or under-provisioning the app’s server resources.
  • Availability zones are isolated locations within an Azure region that provide high availability and fault tolerance. Enabling availability zones lets users take advantage of the increased service level agreement (SLA) of 99.99%. For more information, reference the SLA for App Service.
  • TLS 1.3 encryption, the latest version of the protocol that secures communication between apps and the clients, offers faster and more secure connections, as well as better compatibility with modern browsers and devices.

Azure Static Web Apps

To help customers deliver more advanced capabilities, Microsoft Azure Static Web Apps will offer a dedicated pricing plan, now in preview, that supports enterprise-grade features for enhanced networking and data storage. The dedicated plan for Azure Static Web Apps will utilize dedicated compute capacity and will enable:

  • Network isolation to enhance security.
  • Data residency to help customers comply with data management policies and requirements.
  • Enhanced quotas to allow for more custom domains within an app service plan.
  • “Always-on” functionality for Azure Static Web Apps managed functions, which provide built-in API endpoints to connect to backend services.

Azure Logic Apps

Microsoft Azure Logic Apps is a cloud platform where users can create and run automated workflows with little to no code. Updates to the platform include:

An enhanced developer experience:

  • Improved onboarding experience in Microsoft Visual Studio Code: A simplified extension installation experience and improvements on project start and debugging are now generally available.
  • Logic Apps Standard deployment scripting tools in Visual Studio Code: This feature will simplify the process of setting up a continuous integration/continuous delivery (CI/CD) process for Logic Apps Standard by providing support in the tooling to generalize common metadata files and automate the creation of infrastructure scripts to streamline the task of preparing code for automated deployments. This feature is in preview.
  • Support for Zero Downtime deployment scenarios: This will enable Zero Downtime deployment scenarios for Logic Apps Standard by providing support for deployment slots in the portal. This update is in preview.

Expanded functionality and compatibility with Logic Apps Standard:

  • .NET Custom Code Support: Users will be able to extend low-code workflows with the power of .NET 8 by authoring a custom function and calling from a built-in action within the workflow. This feature is in preview.
  • Logic Apps connectors for IBM mainframe and midranges: These connectors allow customers to preserve the value of their workloads running on mainframes and midranges by allowing them to extend to the Azure Cloud without investing more resources in the mainframe or midrange environments using Azure Logic Apps. This update is generally available.
  • Other updates, in preview, include Azure Integration account enhancements and Logic Apps monitoring dashboard.

Azure API Center

Microsoft Azure API Center, now generally available, provides a centralized solution to manage the challenges of API sprawl, which is exacerbated by the rapid proliferation of APIs and AI solutions. The Azure API Center offers a unified inventory for seamless discovery, consumption and governance of APIs, regardless of their type, lifecycle stage or deployment location. This enables organizations to maintain a complete and current API inventory, streamline governance and accelerate consumption by simplifying discovery.

Azure API Management

Azure API Management has introduced new capabilities to enhance the scalability and security of generative AI deployments. These include the Microsoft Azure OpenAI Service token limit policy for fair usage and optimized resource allocation, one-click import of Azure OpenAI Service endpoints as APIs, a Load Balancer for efficient traffic distribution and a Circuit breaker to protect backend services.

Other updates, now generally available, include first-class support for OData API type, allowing easier publication and security of OData APIs, and full support for gRPC API type in self-hosted gateways, facilitating the management of gRPC services as APIs.
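The Azure OpenAI token limit policy mentioned above is essentially rate limiting counted in tokens rather than requests: each key gets a tokens-per-minute budget. A minimal sliding-window sketch of that idea in Python (the class and method names are hypothetical illustrations, not API Management's actual implementation):

```python
import time
from collections import deque

class TokenLimiter:
    """Sliding-window tokens-per-minute limiter (conceptual sketch)."""

    def __init__(self, tokens_per_minute):
        self.limit = tokens_per_minute
        self.events = deque()  # (timestamp, tokens) pairs inside the window

    def allow(self, tokens, now=None):
        now = time.monotonic() if now is None else now
        # Drop events that have aged out of the 60-second window.
        while self.events and now - self.events[0][0] >= 60:
            self.events.popleft()
        used = sum(t for _, t in self.events)
        if used + tokens > self.limit:
            return False  # would exceed the per-minute token budget
        self.events.append((now, tokens))
        return True
```

In API Management the budget is tracked per counter key (for example, per subscription), so one noisy consumer cannot exhaust the shared Azure OpenAI capacity.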

Azure Event Grid

Microsoft Azure Event Grid has new features that are tailored to customers who are looking for a pub-sub message broker that can enable Internet of Things (IoT) solutions using MQTT protocol and can help build event-driven apps. These capabilities enhance Event Grid’s MQTT broker capability, make it easier to transition to Event Grid namespaces for push and pull delivery of messages, and integrate new sources. Features now generally available include:

  • Use the Last Will and Testament feature, in compliance with the MQTT v5 and MQTT v3.1.1 specifications, so apps receive notifications when clients get disconnected, enabling management of downstream tasks to prevent performance degradation.
  • Create data pipelines that utilize both Event Grid Basic resources and Event Grid Namespace Topics (supported in Event Grid Standard). This means customers can utilize Event Grid namespace capabilities, such as MQTT broker, without needing to reconstruct existing workflows.
  • Support new event sources, such as Microsoft Entra ID and Microsoft Outlook, leveraging Event Grid’s support for the Microsoft Graph API. This means customers can use Event Grid for new use cases, like when a new employee is hired or a new email is received, to process that information and send it to other apps for further action.
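The Last Will and Testament feature above lets a client register, at connect time, a message the broker will publish on its behalf if the connection drops ungracefully. A self-contained Python sketch of the pattern (a toy in-memory broker for illustration, not a real MQTT implementation):

```python
class ToyBroker:
    """Minimal illustration of the MQTT Last Will and Testament pattern."""

    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks
        self.wills = {}        # client_id -> (topic, payload)

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def connect(self, client_id, will_topic, will_payload):
        # The will is registered at connect time, as in MQTT's CONNECT packet.
        self.wills[client_id] = (will_topic, will_payload)

    def publish(self, topic, payload):
        for callback in self.subscribers.get(topic, []):
            callback(payload)

    def drop_connection(self, client_id):
        # Ungraceful disconnect: the broker publishes the stored will.
        topic, payload = self.wills.pop(client_id)
        self.publish(topic, payload)

    def disconnect(self, client_id):
        # Clean disconnect: the will is discarded, not published.
        self.wills.pop(client_id, None)
```

A monitoring app subscribed to a device's status topic would thus learn the device went offline without the device having to say so itself.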

Azure Data Platform

Real-Time Intelligence in Microsoft Fabric

The new Real-Time Intelligence within Microsoft Fabric will provide an end-to-end software as a service (SaaS) solution that will empower customers to act on high volume, time-sensitive and highly granular data in a proactive and timely fashion to make faster and more-informed business decisions. Real-Time Intelligence, now in preview, will empower user roles such as everyday analysts with simple low-code/no-code experiences, as well as pro developers with code-rich user interfaces.

Features of Real-Time Intelligence will include:

  • Real-Time hub, a single place to ingest, process and route events in Fabric as a central point for managing events from diverse sources across the organization. All events that flow through Real-Time hub will be easily transformed and routed to any Fabric data stores.
  • Event streams that will provide out-of-the-box streaming connectors to cross-cloud sources and content-based routing that helps remove the complexity of ingesting streaming data from external sources.
  • Eventhouse and real-time dashboards with improved data exploration to assist business users looking to gain insights from terabytes of streaming data without writing code.
  • Data Activator that will integrate with the Real-Time hub, event streams, real-time dashboards and KQL query sets, to make it seamless to trigger on any patterns or changes in real-time data.
  • AI-powered insights, now with an integrated Microsoft Copilot in Fabric experience for generating queries, in preview, and a one-click anomaly detection experience, allowing users to detect unknown conditions beyond human scale with high granularity in high-volume data, in private preview.
  • Event-Driven Fabric will allow users to respond to system events that happen within Fabric and trigger Fabric actions, such as running data pipelines.
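The Data Activator item above boils down to watching a stream for a pattern and firing an action when the pattern starts holding. A tiny edge-triggered sketch of that rule shape (plain Python for illustration; Data Activator itself is configured, not coded):

```python
def watch(stream, condition, action):
    """Fire `action` each time `condition` starts holding (edge-triggered),
    similar in spirit to a Data Activator rule."""
    was_true = False
    for event in stream:
        is_true = condition(event)
        if is_true and not was_true:
            # Trigger only on the transition, not on every matching event.
            action(event)
        was_true = is_true
```

Edge-triggering is what keeps an alert rule from firing on every reading while a threshold stays exceeded.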

New capabilities and updates to Microsoft Fabric

Microsoft Fabric, the unified data platform for analytics in the era of AI, is a powerful solution designed to elevate apps, whether a user is a developer, part of an organization or an independent software vendor (ISV). Updates to Fabric include:

  • Fabric Workload Development Kit: When building an app, it must be flexible, customizable and efficient. Fabric Workload Development Kit will make this possible by enabling ISVs and developers to extend apps within Fabric, creating a unified user experience. This feature is now in preview.
  • Fabric Data Sharing feature: Enables real-time data sharing across users and apps. The shortcut feature API allows seamless access to data stored in external sources to perform analytics without the traditional heavy integration tax. The new Automation feature now streamlines repetitive tasks resulting in less manual work, fewer errors and more time to focus on the growth of the business. These features are now in preview.
  • GraphQL API and user data functions in Fabric: GraphQL API in Fabric is a savvy personal assistant for data, letting developers access data from multiple sources within Fabric using a single query. User data functions will enhance data processing efficiency, enabling data-centric experiences and apps using Fabric data sources like lakehouses, data warehouses and mirrored databases with native code ability, custom logic and seamless integration. These features are now in preview.
  • AI skills in Fabric: AI skills in Fabric is designed to weave generative AI into data-specific work happening in Fabric. With this feature, analysts, creators, developers and even those with minimal technical expertise will be empowered to build intuitive AI experiences with data to unlock insights. Users will be able to ask questions and receive insights as if they were asking an expert colleague, while honoring user security permissions. This feature is now in preview.
  • Copilot in Fabric: Microsoft is infusing Fabric with Microsoft Azure OpenAI Service at every layer to help customers unlock the full potential of their data to find insights. Customers can use conversational language to create dataflows and data pipelines, generate code and entire functions, build machine learning models or visualize results. Copilot in Fabric is generally available in Power BI and available in preview in the other Fabric workloads.

Azure Cosmos DB

Microsoft Azure Cosmos DB, the database designed for AI that allows creators to build responsive and intelligent apps with real-time data ingested and processed at any scale, has several key updates and new features that include:

  • Built-in vector database capabilities: Azure Cosmos DB for NoSQL will feature built-in vector indexing and vector similarity search, enabling data and vectors to be stored together and to stay in sync. This will eliminate the need to use and maintain a separate vector database. Powered by DiskANN and available in June, Azure Cosmos DB for NoSQL will provide highly performant and highly accurate vector search at any scale. This feature is now in preview.
  • Serverless to provisioned account migration: Users will be able to transition their serverless Azure Cosmos DB accounts to provisioned capacity mode. With this new feature, transition can be accomplished seamlessly through the Azure portal or Azure command-line interface (CLI). During this migration process, the account will undergo changes in-place and users will retain full access to Azure Cosmos DB containers for data read and write operations. This feature is now in preview.
  • Cross-region disaster recovery: With disaster recovery in vCore-based Azure Cosmos DB for MongoDB, a cluster replica can be created in another region. This cluster replica will be continuously updated with the data written in the primary region. In the rare case of an outage in the primary region and primary cluster unavailability, this replica can be promoted to become the new read-write cluster in another region. The connection string is preserved after such a promotion, so that apps can continue to read and write to the database in another region using the same connection string. This feature is now in preview.
  • Azure Cosmos DB Vercel integration: Developers building apps using Vercel can now connect easily to an existing Azure Cosmos DB database or create new Try Azure Cosmos DB accounts on the fly and integrate them into their Vercel projects. This integration improves productivity by creating apps easily with a backend database already configured. This also helps developers onboard to Azure Cosmos DB faster. This feature is now generally available.
  • Go SDK for Azure Cosmos DB: The Go SDK allows customers to connect to an Azure Cosmos DB for NoSQL account and perform operations on databases, containers and items. This release brings critical Azure Cosmos DB features for multi-region support and high availability to Go, such as the ability to set preferred regions, cross-region retries and improved request diagnostics. This feature is now generally available.
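The built-in vector search described in the first bullet above replaces a separate vector database by keeping documents and their embeddings together and ranking results by vector similarity. A self-contained cosine-similarity sketch of that core idea (plain Python for illustration; not the Azure Cosmos DB SDK, and nothing like DiskANN's indexed performance):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def vector_search(items, query_vector, top_k=2):
    """Rank stored items by similarity of their embedding to the query."""
    scored = [(cosine_similarity(item["vector"], query_vector), item["id"])
              for item in items]
    scored.sort(reverse=True)
    return [item_id for _, item_id in scored[:top_k]]
```

In the managed service, an approximate-nearest-neighbor index does this ranking at scale without scanning every item.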

Click here to read the Microsoft Build 2024 Book of News!

Enjoy!

AIAzureEventsLearning

Microsoft Build 2023 Book of News

What is the Book of News? The Microsoft Build 2023 Book of News is your guide to the key news items that are announced at Build 2023.

As expected, there is a lot of focus on Azure and AI, followed by Microsoft 365, Security, Windows, and Edge & Bing. This year the Book of News is interactive instead of being a PDF.

Some of my favourite announcements

Azure Cloud Native and Application Platform

Azure API Management

  • Azure API Center: A new service that will enable organizations to centralize and manage their portfolio of APIs, regardless of type, life cycle or deployment location. This update is in preview. Learn more about updates to Azure API Center.
  • WebSocket API passthrough: Allows users to manage, protect, observe and expose WebSocket APIs running in container environments with the API Management self-hosted gateway container. This is now generally available. Learn more about WebSocket API passthrough.
  • Self-hosted gateway support for Azure Active Directory (Azure AD) tokens: Users will be able to secure communication between the self-hosted gateway and Azure to download configuration using Azure AD tokens. This allows customers to avoid manually refreshing a gateway token that expires every 30 days. This update is generally available.

Azure Event Grid

  • Leverage HTTP to enable “pull” delivery of discrete events to provide more flexible consumption patterns at high scale.
  • Enable publish-subscribe via the MQTT protocol, enabling bidirectional communication at scale between Internet of Things (IoT) devices and cloud-based services.
  • Enable routing of MQTT data to other Azure services and third-party services for further data analytics and storage.

These updates are now in preview. Learn more about public preview of MQTT protocol and pull message delivery in Azure Event Grid.
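Pull delivery inverts the usual push model: instead of Event Grid posting events to a webhook, consumers fetch batches over HTTP at their own pace and acknowledge each event afterward, so unacknowledged events can be redelivered. A toy in-memory sketch of that receive/acknowledge loop (the class and method names are illustrative, not the Event Grid SDK):

```python
import uuid
from collections import deque

class PullQueue:
    """Toy receive/acknowledge queue illustrating pull-style delivery."""

    def __init__(self):
        self.pending = deque()
        self.in_flight = {}  # lock token -> event awaiting acknowledgment

    def publish(self, event):
        self.pending.append(event)

    def receive(self, max_events=10):
        """Hand out up to max_events, each paired with a lock token."""
        batch = []
        while self.pending and len(batch) < max_events:
            token = str(uuid.uuid4())
            event = self.pending.popleft()
            self.in_flight[token] = event
            batch.append((token, event))
        return batch

    def acknowledge(self, token):
        # Acknowledged events are removed for good.
        self.in_flight.pop(token)

    def release(self, token):
        # Released events return to the queue for redelivery.
        self.pending.append(self.in_flight.pop(token))
```

The pull model suits consumers behind firewalls or ones that need to control their own throughput, which is exactly the flexibility the announcement highlights.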

Azure Functions

  • Deploy containerized Azure Functions in an Azure Container Apps environment to quickly build event-driven, cloud-native apps leveraging built-in Dapr integrations for distributed, microservice-based serverless apps.
  • Maximize developer velocity using Azure Functions’ integrated programming model, write code using any preferred programming language or framework that Azure Functions supports and get the built-in service integrations with triggers and bindings for a first-class, event-driven, cloud-native experience.
  • Run Azure Functions alongside other microservices, APIs, websites, workflows or any containerized app using an Azure Container Apps environment built for robust serverless scale, microservices and fully managed infrastructure.

Azure Kubernetes Service

  • Long-term support is now generally available, starting with Kubernetes 1.27. Once enabled, this provides a two-year support window for a specific version of Kubernetes. Kubernetes delivers new releases every three to four months to keep up with the pace of innovation in the cloud-native world. To give enterprises more control over their environment, long-term support for Kubernetes enables customers to stay on the same release for two years – twice as long as what’s possible today. This is a long-awaited development in the cloud-native, open-source community for customers who need the additional option for a longer upgrade runway with continued support and security updates.
  • Transactable Kubernetes apps, now generally available, allow AKS customers to explore a vibrant ecosystem of first- and third-party Kubernetes-ready solutions from Azure Marketplace, and purchase and securely deploy them on AKS with easy click-through deployments. Conveniently integrated with Azure billing, these solutions are ready to use, taking advantage of all the benefits of running on a cloud-native platform like AKS.
  • Confidential containers in AKS, now in preview, is a first-party offering that will allow teams to run standard unmodified containers, aligned with the Kata Confidential Containers open-source project, to achieve zero trust operator deployments with AKS. These containers can be integrated with the typical services used by apps running on AKS for monitoring, logging, etc. in a trusted execution environment (TEE), with each pod assigned its own memory encryption key, providing hardware-based confidentiality and integrity protections, underscoring Microsoft’s focus on enterprise-readiness for these workloads. Learn more about confidential containers in AKS.
  • The multi-cluster update in Azure Kubernetes Fleet Manager (Fleet), in preview, will enable multi-cluster and at-scale scenarios for AKS clusters. The new multi-cluster update feature gives teams the ability to orchestrate planned updates across multiple clusters for a consistent environment.

Azure Communication Services

  • A new set of application programming interfaces (APIs), generally available next month, will help developers build server-based, intelligent calling workflows into their apps and simplify the delivery of personalized customer engagement with additional AI capabilities from Azure Cognitive Services.
  • Call automation interoperability into Microsoft Teams will be in preview next month for businesses that want to connect experts who use Teams into existing customer service calls. From daily appointment bookings and order updates to complex customer outreach for marketing and customer service, call automation with Azure Communication Services is changing the landscape of customer engagement.

Azure Data Platform

Microsoft Fabric

  • Microsoft Fabric, now in preview, delivers an integrated and simplified experience for all analytics workloads and users on an enterprise-grade data foundation. It brings together Power BI, Data Factory and the next generation of Synapse in a unified software as a service (SaaS) offering to give customers a price-effective and easy-to-manage modern analytics solution for the era of AI. Fabric has experiences for all workloads and data professionals in one place – including data integration, data engineering, data warehousing, data science, real-time analytics, applied observability and business intelligence – to increase productivity like never before.
  • To further enable organizations to accelerate value creation with their data, Microsoft is integrating Copilot in Microsoft Fabric, in preview soon, to enable the use of natural language and a chat experience to generate code and queries, create AI plugins using a low/no-code experience, enable custom Q&A, tailor semantics and components within the plugin and deploy to Microsoft Teams, Power BI and web. With AI-driven insights, customers can focus on telling the right data story and let Copilot do the heavy lifting.
  • Organizational data is hosted on Microsoft’s unified foundation, OneLake, which provides a single source of truth and reduces the need to extract, move or replicate data, helping eliminate rogue data sprawl. Fabric also enables persistent data governance and a single capacity pricing model that scales with growth, and it’s open at every layer with no proprietary lock-ins. Deep integrations with Microsoft 365, Teams and AI Copilot experiences accelerate and scale data value creation for everyone. From data professionals to non-technical business users, Fabric has role-tailored experiences to empower everyone to unlock more value from data.

Azure Cosmos DB

  • ​Burst capacity: Developers can achieve better performance and productivity with burst capacity, which allows customers to utilize the idle throughput capacity of their database or container to handle traffic spikes. Databases using standard provisioned throughput with burst capacity enabled will be able to maintain performance during short bursts when requests exceed the throughput limit. This gives customers a cushion if they’re under-provisioned and allows them to experience fewer rate-limited requests. This update is generally available.
  • ​Hierarchical partition keys: More efficient partitioning strategies and improved performance are made possible by hierarchical partition keys, which enable up to three partition keys to be used instead of one. This removes the performance trade-offs that developers often face when having to choose a single partition key and enables more optimal data distribution and high scale. This is generally available.
  • ​Materialized Views for Azure Cosmos DB for NoSQL: With Materialized Views, now in preview, users will be able to create and maintain secondary views of their data in containers that are used to serve queries that would be too expensive to serve with an existing container. Materialized Views make it easy to create and keep data in sync between two containers.
  • ​Azure Cosmos DB “All versions and deletes” change feed mode: Developers will be able to get a full view of changes to items occurring within the continuous backup retention period of their account, saving time and reducing app complexity. This is in preview.
  • ​.NET and Java SDKs Telemetry + App Insights: Monitoring apps will be easier with this update, now in preview. The Azure Cosmos DB .NET and Java SDKs support distributed tracing to help developers easily monitor their apps and troubleshoot issues, thereby improving performance and developer productivity.
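Burst capacity, the first bullet above, works much like a token bucket: unused provisioned throughput accrues (up to a cap) while the database is idle, and that bank absorbs short spikes above the provisioned rate. A minimal Python sketch of the accounting (the class and numbers are illustrative, not the actual Azure Cosmos DB implementation):

```python
class BurstBucket:
    """Token bucket: idle capacity accrues and covers short bursts."""

    def __init__(self, provisioned_per_sec, max_accrued):
        self.rate = provisioned_per_sec
        self.cap = max_accrued
        self.available = 0.0

    def tick(self):
        # Each second, unused provisioned throughput is banked, up to a cap.
        self.available = min(self.cap, self.available + self.rate)

    def request(self, units):
        # A request succeeds while banked capacity covers it;
        # otherwise it would be rate-limited (HTTP 429 in Cosmos DB terms).
        if units <= self.available:
            self.available -= units
            return True
        return False
```

The cap is what keeps a long-idle container from banking unbounded throughput: bursts are cushioned, not unlimited.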

Windows Platform

After listening to developer feedback, Microsoft created a home for developers on Windows with a renewed focus on productivity and performance across all stages of the development lifecycle. These features, now in preview, include:
  • Dev Home will allow users to quickly set up their machines, connect to GitHub and monitor and manage workflows in one central location. Dev Home is open source and extensible, allowing users to enhance their experience with a customizable dashboard and the tools they need to be successful. Users can also add GitHub widgets to track projects and system widgets to track CPU and GPU performance.
  • Windows Package Manager now includes WinGet configuration, which handles the setup requirements for an ideal development environment on a Windows machine using a WinGet configuration file, reducing device setup time from days to hours. Developers no longer need to worry about searching for the right version of the software, packages, tools or frameworks to download or settings to apply. WinGet configuration reduces this manual and error-prone process down to a single command with a WinGet configuration file.
  • Dev Drive is a new type of storage volume designed to provide developers with a file system that meets their needs for both performance and security. It is based on the Resilient File System (ReFS) and, combined with a new performance mode capability in Microsoft Defender Antivirus, provides up to 30% performance improvement in build times for file input/output (I/O) scenarios over the in-market Windows 11 version. The new performance mode is more secure for developer workloads than folder or process exclusions, providing a solution that balances security with performance.
  • Windows Terminal is getting smarter with GitHub Copilot X. Users of GitHub Copilot will be able to take advantage of natural language AI both inline and in an experimental chat experience to recommend commands, explain errors and take actions within the Terminal app. Microsoft is also experimenting with GitHub Copilot-powered AI in other developer tools like WinDBG to help developers complete tasks with less toil.
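A WinGet configuration file, mentioned in the list above, is a YAML document listing DSC resources that describe the desired machine state, applied with the `winget configure` command. A small illustrative fragment under the commonly documented schema (the package IDs here are just examples):

```yaml
# Illustrative WinGet configuration fragment (package IDs are examples).
properties:
  configurationVersion: 0.2.0
  resources:
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Git
      settings:
        id: Git.Git
        source: winget
    - resource: Microsoft.WinGet.DSC/WinGetPackage
      directives:
        description: Install Visual Studio Code
      settings:
        id: Microsoft.VisualStudioCode
        source: winget
```

Because the file is declarative, the same single command reproduces the environment on any machine, which is how setup time drops from days to hours.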

Click here to read the Microsoft Build 2023 Book of News!

Enjoy!