Category: Azure

.NET, AI, Azure, Cloud, Community

Festive Tech Calendar 2024 YouTube playlist

Enjoy!

Reference

https://festivetechcalendar.com/

Azure, Azure Event Hubs

Boost Data Reliability with Geo-Replication for Azure Event Hubs

This week, Microsoft announced the public preview of geo-replication for Azure Event Hubs. Geo-replication enhances Microsoft Azure data availability and geo-disaster recovery capabilities by enabling the replication of Event Hubs data payloads across different Azure regions.

With geo-replication, your client applications continue to interact with the primary namespace. Customers can designate a secondary region, choose replication consistency (synchronous or asynchronous), and set replication lag for the data. The service handles the replication between primary and secondary regions. If a primary change is needed (for maintenance or failover), the secondary can be promoted to primary, seamlessly servicing all client requests without altering any configurations (connection strings, authentication, etc.). The former primary then becomes the secondary, ensuring synchronization between both regions.

In summary, geo-replication is designed to provide you with the following benefits:

  • High availability: You can ensure that your data is always accessible and durable, even in the event of a regional outage or disruption. You can also reduce the impact of planned maintenance events by switching to the secondary region before the primary region undergoes any updates or changes.
  • Disaster recovery: You can recover your data quickly and seamlessly in case of a disaster that affects your primary region. You can initiate a failover to the secondary region and resume your data streaming operations with minimal downtime and data loss.
  • Regional compliance: You can meet the regulatory and compliance requirements of your industry or region by replicating your data to a secondary region that complies with the same or similar standards as your primary region. You can also leverage the geo-redundancy of your data to support your business continuity and resilience plans.

How do you get started with Azure Event Hubs Geo-replication?

If you want to try out Azure Event Hubs Geo-replication, check out the official Azure Event Hubs Geo-replication documentation, which also includes a demo.

I look forward to when this becomes GA and is available in more regions.

Enjoy!

References

https://techcommunity.microsoft.com/t5/messaging-on-azure-blog/announcing-public-preview-for-geo-replication-for-azure-event/ba-p/4164522

Azure Event Hubs Geo-replication documentation

Analytics, Azure

Kusto’s 10-Year Evolution at Microsoft

Kusto, the internal service driving Microsoft’s telemetry and several key services, recently marked its 10-year milestone. Over the decade, Kusto has evolved significantly, becoming the backbone for crucial applications such as Sentinel, Application Insights, Azure Data Explorer, and more recently, Eventhouse in Microsoft Fabric. This journey highlights its pivotal role in enhancing data processing, monitoring, and analytics across Microsoft’s ecosystem.

This powerful service has continually adapted to meet the growing demands of Microsoft’s internal and external data needs, underscoring its importance in the company’s broader strategy for data management and analysis.

A Dive into Azure Data Explorer’s Origins

Azure Data Explorer (ADX), initially code-named “Kusto,” has a fascinating backstory. In 2014, it began as a grassroots initiative at Microsoft’s Israel R&D center. The team wanted a name that resonated with their mission of exploring vast data oceans, drawing inspiration from oceanographer Jacques Cousteau. Kusto was designed to tackle the challenges of rapid and scalable log and telemetry analytics, much like Cousteau’s deep-sea explorations.

By 2018, ADX was officially unveiled at the Microsoft Ignite conference, evolving into a fully managed big data analytics platform. It efficiently handles structured, semi-structured (like JSON), and unstructured data (like free text). With its powerful querying capabilities and minimal latency, ADX allows users to swiftly explore and analyze data. True to its oceanic roots, ADX stands as a tribute to the spirit of discovery.

Enjoy!

Azure, Azure Event Hubs

Azure Event Hubs Unveils Large Message Support

This week, Microsoft announced the public preview of support for large messages (up to 20 MB) in Azure Event Hubs on its self-serve scalable dedicated clusters, enhancing its capabilities to handle a wide range of message sizes without additional costs.

This new feature allows for seamless streaming of large messages without requiring any client code changes, maintaining compatibility with existing Event Hubs SDKs and the Kafka API. This enhancement ensures uninterrupted business operations by accommodating instances where messages cannot be divided into smaller segments. The service continues to offer high throughput and low latency, making it a robust solution for data streaming needs.

What are some use cases for large message support?

Here are some key use cases for the new large message support in Azure Event Hubs:

  • Multimedia Streaming: Handling large video, audio, or image files that cannot be split into smaller segments.
  • Data Aggregation: Transmitting aggregated data sets or logs that exceed typical message size limits.
  • IoT Applications: Streaming large sensor data or firmware updates from IoT devices.
  • Batch Processing: Sending large batches of data for processing without needing to break them down.

These enhancements ensure seamless and uninterrupted business operations across various scenarios.

How do you enable large message support?

To enable large message support in your existing Azure Event Hubs setup, follow these steps:

  1. Use Self-Serve Scalable Dedicated Clusters: Ensure your Event Hubs namespace runs on the latest infrastructure that supports self-serve scalable dedicated clusters. If you are not already on a dedicated cluster, you will need to create an Event Hubs cluster to take advantage of large message support.
  2. No Client Code Changes Needed: You can continue using your existing Event Hubs SDK or Kafka API. The only change required is in the message or event size itself.

For more detailed instructions, visit the documentation at aka.ms/largemessagesupportforeh.

How does Azure Event Hubs differ from Azure Event Hub Clusters?

Azure Event Hubs and Event Hub Clusters serve different purposes within the Azure ecosystem:

  • Azure Event Hubs: This is a fully managed, real-time data ingestion service that can receive and process millions of events per second. It’s designed for high-throughput data streaming and is commonly used for big data and analytics.
  • Azure Event Hub Clusters: These are dedicated clusters that provide isolated resources for Event Hubs. They offer enhanced performance, scalability, and the ability to handle large messages (up to 20 MB). Clusters are ideal for scenarios requiring high throughput and low latency.

Enjoy!

References

https://techcommunity.microsoft.com/t5/messaging-on-azure-blog/announcing-large-message-support-for-azure-event-hubs-public/ba-p/4146455

https://learn.microsoft.com/en-us/azure/event-hubs/event-hubs-quickstart-stream-large-messages

https://learn.microsoft.com/en-us/azure/event-hubs/compare-tiers

Azure Event Hubs Overview

Azure, Developer, Events

Microsoft Build 2024 Book of News

What is the Book of News? The Microsoft Build 2024 Book of News is your guide to the key news items announced at Build 2024.

As expected, there is a lot of focus on Azure and AI, followed by Microsoft 365, Security, Windows, and Edge & Bing. This year, the Book of News is interactive instead of being a PDF.

Some of my favourite announcements

Azure Cloud Native and Application Platform

Azure Functions

Microsoft Azure Functions is launching several new features to provide more flexibility and extensibility to customers in this era of AI.

Features now in preview include:

  • A Flex Consumption plan that will give customers more flexibility and customization without compromising on available features to run serverless apps.
  • Extension for Microsoft Azure OpenAI Service that will enable customers to easily infuse AI in their apps. Customers will be able to use this extension to build new AI-led apps like retrieval-augmented generation, text completion and chat assistant.
  • Visual Studio Code for the Web will provide a browser-based developer experience to make it easier to get started with Azure Functions. This feature is available for Python, Node and PowerShell apps in the Flex Consumption hosting plan.

Features now generally available include:

  • Azure Functions on Azure Container Apps lets developers use the Azure Container Apps environment to deploy multitype services to a cloud-native solution designed for centralized management and serverless scale.
  • Dapr extension for Azure Functions enables developers to use Dapr’s powerful cloud native building block APIs and a large array of ecosystem components in the native and friendly Azure Functions triggers and bindings programming model. The extension is available to run on Azure Kubernetes Service and Azure Container Apps.

Azure Container Apps

Microsoft Azure Container Apps will include dynamic sessions, in preview, for AI app developers to instantly run large language model (LLM)-generated code or extend/customize software as a service (SaaS) apps in an on-demand, secure sandbox.

Customers will be able to mitigate risks to their security posture, leverage serverless scale for their apps and save months of development work, ongoing configurations and management of compute resources that reduce their cost overhead. Dynamic sessions will provide a fast, sandboxed, ephemeral compute suitable for running untrusted code at scale.

Additional new features, now in preview, include:

  • Support for Java: Java developers will be able to monitor the performance and health of apps with Java metrics such as garbage collection and memory usage.
  • Microsoft .NET Aspire dashboard: With dashboard support for .NET Aspire in Azure Container Apps, developers will be able to access live data about projects and containers in the cloud to evaluate the performance of .NET cloud-native apps and debug errors.

Azure App Service

Microsoft Azure App Service is a cloud platform to quickly build, deploy and run web apps, APIs and other components. These capabilities are now in preview:

  • Sidecar patterns are a way to add extra features to the main app, such as logging, monitoring and caching, without changing the app code. Users will be able to run these features alongside the app, and this is supported for both source code and container-based deployments.
  • WebJobs will be integrated with Azure App Service, which means they will share the same compute resources as the web app to help save costs and ensure consistent performance. WebJobs are background tasks that run on the same server as the web app and can perform various functions, such as sending emails, executing bash scripts and running scheduled jobs.
  • GitHub Copilot skills for Azure Migrate will enable users to ask questions like, “Can I migrate this app to Azure?” or “What changes do I need to make to this code?” to get answers and recommendations from Azure Migrate. GitHub Copilot licenses are sold separately.

These capabilities are now generally available:

  • Automatic scaling continuously adjusts the number of servers that run apps based on a combination of demand and server utilization, without any code or complex scaling configurations. This helps users handle dynamically changing site traffic without over-provisioning or under-provisioning the app’s server resources.
  • Availability zones are isolated locations within an Azure region that provide high availability and fault tolerance. Enabling availability zones lets users take advantage of the increased service level agreement (SLA) of 99.99%. For more information, reference the SLA for App Service.
  • TLS 1.3 encryption, the latest version of the protocol that secures communication between apps and the clients, offers faster and more secure connections, as well as better compatibility with modern browsers and devices.

Azure Static Web Apps

To help customers deliver more advanced capabilities, Microsoft Azure Static Web Apps will offer a dedicated pricing plan, now in preview, that supports enterprise-grade features for enhanced networking and data storage. The dedicated plan for Azure Static Web Apps will utilize dedicated compute capacity and will enable:

  • Network isolation to enhance security.
  • Data residency to help customers comply with data management policies and requirements.
  • Enhanced quotas to allow for more custom domains within an app service plan.
  • “Always-on” functionality for Azure Static Web Apps managed functions, which provide built-in API endpoints to connect to backend services.

Azure Logic Apps

Microsoft Azure Logic Apps is a cloud platform where users can create and run automated workflows with little to no code. Updates to the platform include:

An enhanced developer experience:

  • Improved onboarding experience in Microsoft Visual Studio Code: A simplified extension installation experience and improvements on project start and debugging are now generally available.
  • Logic Apps Standard deployment scripting tools in Visual Studio Code: This feature will simplify the process of setting up a continuous integration/continuous delivery (CI/CD) process for Logic Apps Standard by providing support in the tooling to generalize common metadata files and automate the creation of infrastructure scripts to streamline the task of preparing code for automated deployments. This feature is in preview.
  • Support for Zero Downtime deployment scenarios: This will enable Zero Downtime deployment scenarios for Logic Apps Standard by providing support for deployment slots in the portal. This update is in preview.

Expanded functionality and compatibility with Logic Apps Standard:

  • .NET Custom Code Support: Users will be able to extend low-code workflows with the power of .NET 8 by authoring a custom function and calling from a built-in action within the workflow. This feature is in preview.
  • Logic Apps connectors for IBM mainframe and midranges: These connectors allow customers to preserve the value of their workloads running on mainframes and midranges by allowing them to extend to the Azure Cloud without investing more resources in the mainframe or midrange environments using Azure Logic Apps. This update is generally available.
  • Other updates, in preview, include Azure Integration account enhancements and Logic Apps monitoring dashboard.

Azure API Center

Microsoft Azure API Center, now generally available, provides a centralized solution to manage the challenges of API sprawl, which is exacerbated by the rapid proliferation of APIs and AI solutions. The Azure API Center offers a unified inventory for seamless discovery, consumption and governance of APIs, regardless of their type, lifecycle stage or deployment location. This enables organizations to maintain a complete and current API inventory, streamline governance and accelerate consumption by simplifying discovery.

Azure API Management

Azure API Management has introduced new capabilities to enhance the scalability and security of generative AI deployments. These include the Microsoft Azure OpenAI Service token limit policy for fair usage and optimized resource allocation, one-click import of Azure OpenAI Service endpoints as APIs, a Load Balancer for efficient traffic distribution and a Circuit breaker to protect backend services.

Other updates, now generally available, include first-class support for OData API type, allowing easier publication and security of OData APIs, and full support for gRPC API type in self-hosted gateways, facilitating the management of gRPC services as APIs.

Azure Event Grid

Microsoft Azure Event Grid has new features that are tailored to customers who are looking for a pub-sub message broker that can enable Internet of Things (IoT) solutions using MQTT protocol and can help build event-driven apps. These capabilities enhance Event Grid’s MQTT broker capability, make it easier to transition to Event Grid namespaces for push and pull delivery of messages, and integrate new sources. Features now generally available include:

  • Use the Last Will and Testament feature, in compliance with the MQTT v5 and MQTT v3.1.1 specifications, so apps receive notifications when clients get disconnected, enabling management of downstream tasks to prevent performance degradation.
  • Create data pipelines that utilize both Event Grid Basic resources and Event Grid Namespace Topics (supported in Event Grid Standard). This means customers can utilize Event Grid namespace capabilities, such as MQTT broker, without needing to reconstruct existing workflows.
  • Support new event sources, such as Microsoft Entra ID and Microsoft Outlook, leveraging Event Grid’s support for the Microsoft Graph API. This means customers can use Event Grid for new use cases, like when a new employee is hired or a new email is received, to process that information and send to other apps for more action.

Azure Data Platform

Real-Time Intelligence in Microsoft Fabric

The new Real-Time Intelligence within Microsoft Fabric will provide an end-to-end software as a service (SaaS) solution that will empower customers to act on high volume, time-sensitive and highly granular data in a proactive and timely fashion to make faster and more-informed business decisions. Real-Time Intelligence, now in preview, will empower user roles such as everyday analysts with simple low-code/no-code experiences, as well as pro developers with code-rich user interfaces.

Features of Real-Time Intelligence will include:

  • Real-Time hub, a single place to ingest, process and route events in Fabric as a central point for managing events from diverse sources across the organization. All events that flow through Real-Time hub will be easily transformed and routed to any Fabric data stores.
  • Event streams that will provide out-of-the-box streaming connectors to cross cloud sources and content-based routing that helps remove the complexity of ingesting streaming data from external sources.
  • Eventhouse and real-time dashboards with improved data exploration to assist business users looking to gain insights from terabytes of streaming data without writing code.
  • Data Activator that will integrate with the Real-Time hub, event streams, real-time dashboards and KQL query sets, to make it seamless to trigger on any patterns or changes in real-time data.
  • AI-powered insights, now with an integrated Microsoft Copilot in Fabric experience for generating queries, in preview, and a one-click anomaly detection experience, allowing users to detect unknown conditions beyond human scale with high granularity in high-volume data, in private preview.
  • Event-Driven Fabric will allow users to respond to system events that happen within Fabric and trigger Fabric actions, such as running data pipelines.

New capabilities and updates to Microsoft Fabric

Microsoft Fabric, the unified data platform for analytics in the era of AI, is a powerful solution designed to elevate apps, whether a user is a developer, part of an organization or an independent software vendor (ISV). Updates to Fabric include:

  • Fabric Workload Development Kit: When building an app, it must be flexible, customizable and efficient. Fabric Workload Development Kit will make this possible by enabling ISVs and developers to extend apps within Fabric, creating a unified user experience. This feature is now in preview.
  • Fabric Data Sharing feature: Enables real-time data sharing across users and apps. The shortcut feature API allows seamless access to data stored in external sources to perform analytics without the traditional heavy integration tax. The new Automation feature now streamlines repetitive tasks resulting in less manual work, fewer errors and more time to focus on the growth of the business. These features are now in preview.
  • GraphQL API and user data functions in Fabric: The GraphQL API in Fabric is like a savvy personal assistant for data: an API that will let developers access data from multiple sources within Fabric using a single query. User data functions will enhance data processing efficiency, enabling data-centric experiences and apps that use Fabric data sources like lakehouses, data warehouses and mirrored databases with native code ability, custom logic and seamless integration. These features are now in preview.
  • AI skills in Fabric: AI skills in Fabric is designed to weave generative AI into data-specific work happening in Fabric. With this feature, analysts, creators, developers and even those with minimal technical expertise will be empowered to build intuitive AI experiences with data to unlock insights. Users will be able to ask questions and receive insights as if they were asking an expert colleague while honoring user security permissions. This feature is now in preview.
  • Copilot in Fabric: Microsoft is infusing Fabric with Microsoft Azure OpenAI Service at every layer to help customers unlock the full potential of their data to find insights. Customers can use conversational language to create dataflows and data pipelines, generate code and entire functions, build machine learning models or visualize results. Copilot in Fabric is generally available in Power BI and available in preview in the other Fabric workloads.

Azure Cosmos DB

Microsoft Azure Cosmos DB, the database designed for AI that allows creators to build responsive and intelligent apps with real-time data ingested and processed at any scale, has several key updates and new features that include:

  • Built-in vector database capabilities: Azure Cosmos DB for NoSQL will feature built-in vector indexing and vector similarity search, enabling data and vectors to be stored together and to stay in sync. This will eliminate the need to use and maintain a separate vector database. Powered by DiskANN, available in June, Azure Cosmos DB for NoSQL will provide highly performant and highly accurate vector search at any scale. This feature is now in preview.
  • Serverless to provisioned account migration: Users will be able to transition their serverless Azure Cosmos DB accounts to provisioned capacity mode. With this new feature, transition can be accomplished seamlessly through the Azure portal or Azure command-line interface (CLI). During this migration process, the account will undergo changes in-place and users will retain full access to Azure Cosmos DB containers for data read and write operations. This feature is now in preview.
  • Cross-region disaster recovery: With disaster recovery in vCore-based Azure Cosmos DB for MongoDB a cluster replica can be created in another region. This cluster replica will be continuously updated with the data written in the primary region. In a rare case of outage in the primary region and primary cluster unavailability, this replica can be promoted to become the new read-write cluster in another region. Connection string is preserved after such a promotion, so that apps can continue to read and write to the database in another region using the same connection string. This feature is now in preview.
  • Azure Cosmos DB Vercel integration: Developers building apps using Vercel can now connect easily to an existing Azure Cosmos DB database or create new Try Azure Cosmos DB accounts on the fly and integrate them into their Vercel projects. This integration improves productivity by creating apps easily with a backend database already configured. This also helps developers onboard to Azure Cosmos DB faster. This feature is now generally available.
  • Go SDK for Azure Cosmos DB: The Go SDK allows customers to connect to an Azure Cosmos DB for NoSQL account and perform operations on databases, containers and items. This release brings critical Azure Cosmos DB features for multi-region support and high availability to Go, such as the ability to set preferred regions, cross-region retries and improved request diagnostics. This feature is now generally available.

Click here to read the Microsoft Build 2024 Book of News!

Enjoy!

Analytics, Azure, Azure Data Explorer

Discovering Insights with Azure Data Explorer

For the past few months, I’ve been diving into learning Azure Data Explorer (ADX) and using it for a few projects. What is Azure Data Explorer, and what would I use it for? Great questions. Azure Data Explorer is like your data’s best friend when it comes to real-time, heavy-duty analytics. It’s built to handle massive amounts of data—whether it’s structured, semi-structured, or all over the place—and turn it into actionable insights. With its star feature, the Kusto Query Language (KQL), you can dive deep into the data for tasks like spotting trends, detecting anomalies, or analyzing logs, all with ease. It’s perfect for high-speed data streams, making it a go-to for IoT and time-series data. Plus, it’s secure, scalable, and does the hard work fast so you can focus on making more intelligent decisions.

When to use Azure Data Explorer

Azure Data Explorer is ideal for enabling interactive analytics capabilities over high-velocity, diverse raw data. Microsoft publishes a decision tree in the official documentation that can help you decide whether Azure Data Explorer is right for your scenario.

What makes Azure Data Explorer unique

Azure Data Explorer stands out due to its exceptional capabilities in handling vast amounts of diverse data quickly and efficiently. It supports high-speed data ingestion (terabytes in minutes) and querying of petabytes with millisecond-level results. Its Kusto Query Language (KQL) is intuitive yet powerful, enabling advanced analytics and seamless integration with Python and T-SQL. With specialized features for time series analysis, anomaly detection, and geospatial insights, it’s tailored for deep data exploration. The platform simplifies data ingestion with its user-friendly wizard, while built-in visualization tools and integrations with Power BI, Grafana, Tableau, and more make insights accessible. It also automates data ingestion, transformation, and export, ensuring a smooth, end-to-end analytics experience.
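
To give a feel for the time series and anomaly detection features mentioned above, here is a minimal, hypothetical KQL sketch. It assumes a Telemetry table with a Timestamp column (both names are made up for illustration) and uses the built-in series_decompose_anomalies() function to flag unusual hourly request counts:

// Hypothetical sketch: build an hourly series over the last 7 days and flag anomalies.
// 'Telemetry' and 'Timestamp' are assumed names, not from a real dataset.
Telemetry
| make-series RequestCount = count() default = 0 on Timestamp from ago(7d) to now() step 1h
| extend (Anomalies, Score, Baseline) = series_decompose_anomalies(RequestCount, 1.5)
| render anomalychart with (anomalycolumns = Anomalies)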

Writing Kusto queries

In Azure Data Explorer, we use the Kusto Query Language (KQL) to write queries. KQL is also used in other Azure services like Azure Monitor Log Analytics, Azure Sentinel, and many more.

  • A Kusto query is a read-only request to process data and return results.
  • A query has one or more query statements and returns data in a tabular or graph format.
  • Statements are sequenced by a pipe (|).
  • Data flows, or is piped, from one operator to the next.
  • The data is filtered or manipulated at each step and then fed into the following step.
  • Each time the data passes through an operator, it’s filtered, rearranged, or summarized.

For example, the following query counts the storm events that occurred in Florida during November 2007:

StormEvents
| where StartTime >= datetime(2007-11-01)
| where StartTime <= datetime(2007-12-01)
| where State == 'FLORIDA'
| count

Azure Data Explorer query editor also supports the use of T-SQL in addition to its primary query language, Kusto query language (KQL). While KQL is the recommended query language, T-SQL can be useful for tools that are unable to use KQL. For more details, check out how to query data with T-SQL.

Using commands to manage Azure Data Explorer tables

When it comes to writing commands for managing tables, the first character of the text of a request determines if the request is a management command or a query. Management commands must start with the dot (.) character, and no query may start with that character.

Here are some examples of management commands, with a short illustrative snippet after the list:

  • .create table
  • .create-merge table
  • .drop table
  • .alter table
  • .rename column
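
As a quick illustration of the dot-prefix convention (a hypothetical sketch; the MyLogs table below is made up for this example), the following commands create a small table and then rename one of its columns:

// Management commands start with a dot; run them one at a time in the query editor.
.create table MyLogs (Timestamp: datetime, Level: string, Message: string)

// Rename the Level column to Severity on the table we just created.
.rename column MyLogs.Level to Severity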

Getting started

You can try Azure Data Explorer for free using the free cluster. Head over to https://dataexplorer.azure.com/ and log in with any Microsoft Account.

Navigate to the My cluster tab on the left to get access to your cluster URI.

Next, let’s create a new database. While on the My cluster tab, click on the Create database button. Give your database a name (in this case, I’m using ‘Demo1’) and then click on the ‘Next: Create database’ button.

Now navigate over to the Query tab, and let’s create our first table, insert some data, and run some queries.

Creating a table

.create-merge table customers
(
    FullName: string, 
    LastOrderDate: datetime,
    YtdSales: decimal,
    YtdExpenses: decimal,
    City: string,
    PostalCode: string
)

If we run the .show table customers command, we can see the table definition:

.show table customers

Ingesting data

There are several ways we can ingest data into our table. Here are a few options:

  • Ingest from Azure Storage
  • Ingest from a Query
  • Streaming Ingestion
  • Ingest Inline
  • Ingest from an application

Today, we’re going to use inline ingestion, which goes as follows:

.ingest inline into table customers <| 
'Bill Gates', datetime(2022-01-10 11:00:00), 1000000, 500000, 'Redmond', '98052'
'Steve Ballmer', datetime(2022-01-06 10:30:00), 150000, 50000, 'Los Angeles', '90305'
'Satya Nadella', datetime(2022-01-09 17:25:00), 100000, 50000, 'Redmond', '98052'
'Steve Jobs', datetime(2022-01-04 13:00:00), 100000, 60000, 'Cupertino', '95014'
'Larry Ellison', datetime(2022-01-04 13:00:00), 90000, 80000, 'Redwood Shores', '94065'
'Jeff Bezos', datetime(2022-01-05 08:00:00), 750000, 650000, 'Seattle', '98109'
'Tim Cook', datetime(2022-01-02 09:00:00), 40000, 10000, 'Cupertino', '95014'
'Steve Wozniak', datetime(2022-01-04 11:30:00), 81000, 55000, 'Cupertino', '95014'
'Scott Guthrie', datetime(2022-01-11 14:00:00), 2000000, 1000000, 'Redmond', '98052'

Querying data

Now, let’s start writing KQL queries against our data. In the following query I’m just using the name of the table with no where clause. This is similar to the “SELECT * FROM Customers” in SQL.

customers

Now let’s filter our data looking for customers where the YtdSales is less than $100,000:

customers
| where YtdSales < 100000
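
And since data flowing through a query can also be rearranged or summarized, here is one more small example against the same customers table (a simple sketch using the columns we defined earlier), aggregating year-to-date sales per city:

// Total YTD sales and customer count per city, largest totals first.
customers
| summarize TotalSales = sum(YtdSales), CustomerCount = count() by City
| order by TotalSales desc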

SQL to KQL

If you’re unfamiliar with KQL but are familiar with SQL and want to learn KQL, you can translate your SQL queries into KQL by prefacing the SQL query with a comment line, --, and the keyword explain. The output shows the KQL version of the query, which can help you understand the KQL syntax and concepts.
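
For example, translating a simple SQL query against our customers table might look like the following (a hedged sketch; the exact KQL that explain emits can vary slightly between service versions):

--
explain
SELECT FullName, City
FROM customers
WHERE YtdSales < 100000

The output is the equivalent KQL, roughly customers | where YtdSales < 100000 | project FullName, City, which you can then run and tweak directly.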

Try the SQL to Kusto Query Language cheat sheet.

Wrapping up

In this post, we looked at what Azure Data Explorer is, when it should be used, and how to use the free personal cluster to create a sample database, ingest data, and run some queries. I hope this was insightful. I look forward to my next post, where I’ll go deeper into ingesting data in real time, running more complicated queries, and accessing this data from dashboards and APIs.

Enjoy!

Azure

Microsoft Azure now allows free data transfer out to the internet when leaving Azure

Earlier this week, Microsoft announced that it would follow Google and Amazon in eliminating Azure egress fees. The following outlines Azure’s commitment to customer choice and details the process for transferring data out of Azure and receiving a credit for the egress charges.

Free Egress Offer

Azure now provides free data egress for customers leaving Azure, allowing them to transfer their data to another cloud provider or an on-premises data center without incurring internet egress fees.

Eligibility and Process

The first 100GB/month is free globally. For additional data transfer, customers must contact Azure Support and follow specific instructions to be eligible for the credit.

Data Transfer Credit

Once the data transfer is complete and all associated Azure subscriptions are cancelled, Azure Support will apply the credit.

Compliance with Data Act

This policy aligns with the European Data Act and is available to all Azure customers worldwide.

Reference

https://azure.microsoft.com/en-us/updates/now-available-free-data-transfer-out-to-internet-when-leaving-azure

Azure, Events, Learning

Microsoft Ignite 2023 Book of News

What is the Book of News? The Microsoft Ignite 2023 Book of News is your guide to the key news items that are announced at Ignite 2023.

AI, Copilot and Microsoft Fabric are the overarching themes at this year’s conference, as you will see throughout the sessions and announcements.

Some of my favourite announcements

Azure Cloud Native and Application Platform

Azure App Service

  • Single subnet support for multiple App Service plans is now generally available. Network administrators gain substantial reduction in management overhead thanks to the new capability enabling multiple service plans to connect to a single subnet in a customer’s virtual network.
  • WebJobs on Linux is now in preview. WebJobs is a popular feature of Azure App Service that enables users to run background tasks in the Azure App Service without any additional cost. Previously available on Windows, it will extend to Linux, enabling customers to run background or recurring tasks and do things like send email reports or perform image or file processing.
  • Extensibility support on Linux is now in preview. Previously available on Windows, it will allow Linux web apps to take advantage of third-party software services on Azure and connect to Azure Native ISV services more easily.
  • gRPC, a high-performance, open-source universal RPC framework, now provides full bi-directional streaming support and increased messaging performance over HTTP/2 for web apps running on App Service for Linux. This is now generally available.

Azure Functions

  • Azure Functions now supports .NET 8 for applications using the isolated worker model. Support is now available for Windows and Linux on the consumption, elastic premium and application service plan hosting options. This update is generally available.
  • Flex Consumption Plan is a new Azure Functions hosting plan that will build on the consumption, pay-for-what’s-used, serverless billing model. It will provide more flexibility and customizability without compromising on available features. New capabilities will include fast and large elastic scale, instance size selection, private networking, availability zones and high concurrency control. Users can request access to the private preview.

Azure Container Apps

  • Dedicated GPU workload profiles: Users will be able to run machine learning models with Azure Container Apps as a target compute platform to build event driven intelligent applications to train models or derive data-driven insights. This feature is in preview.
  • Azure Container Apps landing zone accelerator: Simplifies building of a production-grade secured infrastructure at an enterprise scale to deploy fully managed, cloud-native apps and microservices. This feature is generally available.
  • Azure Container Apps code to cloud: Users will be able to focus on code and quickly take an application from source to cloud without the need to understand containers or how to package application code for deployment. This feature is in preview.
  • Vector database add-ins: Three of the most popular open-source vector database variants, Qdrant, Milvus and Weaviate, are now available in preview as add-ins for developers to get started in a fast and affordable way.

Azure Kubernetes Service

  • The release of Kubernetes AI toolchain operator automates LLM model deployment on AKS across available CPU and GPU resources by selecting optimally sized infrastructure for the model. It makes it possible to easily split inferencing across multiple lower-GPU-count VMs, increasing the number of Azure regions where workloads can run, eliminating wait times for higher-GPU-count VMs and lowering overall cost. Customers can also choose from preset models with images hosted by AKS, significantly reducing overall inference service setup time.
  • Additionally, Azure Kubernetes Fleet Manager enables multi-cluster and at-scale scenarios for AKS clusters. Platform admins who are managing Kubernetes fleets with many clusters often face challenges staging their updates in a safe and predictable way. This allows admins to orchestrate updates across multiple clusters by using update runs, stages and groups. This is now generally available.

Azure Event Grid

  • Azure Event Grid now supports additional capabilities to help customers capitalize on growing industry scenarios. A key part of this new functionality is the ability to deliver publish-subscribe messaging at scale, which enables flexible consumption patterns for data over HTTP and MQTT protocols. This capability is now generally available.
  • Pull delivery for event-driven architectures: This allows customers to process events from highly secure environments without configuring a public end point, controlling the rate and volume of messages consumed, while supporting much larger throughput. This feature is generally available.
  • Push delivery to Azure Event Hubs: Event Grid namespaces will support the ability to push events to Azure Event Hubs at high scale through a namespace topic subscription. This enables the development of more distributed applications to send discrete events to ingestion pipelines. This feature is in preview.
  • Increased throughput units: To help customers scale to meet the demands of these new scenarios, Event Grid has also increased the number of throughput units available in an Event Grid namespace to 40, meeting the needs of more data-intensive scenarios by providing more capacity. This feature is generally available.

Azure Communication Services

  • Azure AI Speech integration into Azure Communication Services Call Automation workflows, generally available in November, will enable AI-assisted experiences for customers calling into a business.
  • Azure Communication Services job router, generally available in early December, will simplify the development of routing capabilities for inbound customer communications and steer customers to the most suitable point of contact in a business.

Azure API Management

  • API Management’s Credential Manager, now generally available, simplifies the management of authentication and authorization for both professional developers and citizen developers.
  • Defender for APIs, a new offering as part of Microsoft Defender for Cloud – a cloud-native application protection platform (CNAPP), is now generally available. Natively integrating with Azure API Management, security admins gain visibility into the Azure business-critical APIs, understand and improve their security posture, prioritize vulnerability fixes and detect and respond to active runtime threats within minutes using machine learning-powered anomalous and suspicious API usage detections.

Azure Migrate

  • The Azure Migrate application and code assessment, now generally available, complements the Azure Migrate assessment and migration tool to help modernize and re-platform large-scale .NET and Java applications through detailed code and application scanning and dependencies detections. The tool offers a comprehensive report with recommended code changes for customers to apply a broad range of code transformations with different use cases and code patterns.

Azure Data Platform

General

  • Amazon S3 shortcuts, now generally available, allow organizations to unify their data in Amazon S3 with their data in OneLake. With this update, data engineers can create a single virtualized data lake for their entire organization across Amazon S3 buckets and OneLake – without the latency of copying data from S3 and without changing overall data ownership.
  • Azure Data Lake Storage Gen2 (ADLS Gen2) shortcuts are now generally available, empowering data engineers to connect to data from external data lakes in ADLS Gen2 into OneLake through a live connection with target data.

Azure SQL

  • Several new features and updates for Azure SQL will make the offering more cost-efficient, reliable and secure.

Microsoft Fabric

  • Microsoft Fabric, an integrated and simplified experience for a data estate on an enterprise-grade data foundation, is now generally available. Fabric enables persistent data governance and a single capacity pricing model that scales with growth, and it’s open at every layer with no proprietary lock-ins. Fabric integrates Power BI, Data Factory and the next generation of Synapse to offer customers a price-effective and easy-to-manage modern analytics solution for the era of AI.
  • Microsoft 365 data is now able to natively integrate to OneLake in the Delta Parquet format, the optimal format for data analysis. Microsoft 365 data was previously offered only in JSON format. With this new integration, Microsoft 365 data will be seamlessly joined with other data sources in OneLake, enabling access to a suite of analytical experiences for organizations to transform and gain insight from their data. This also means that AI capabilities built using Microsoft Fabric notebooks will now directly access Microsoft 365 data within OneLake. This update is in preview.
  • Microsoft Fabric is being infused with Azure OpenAI Service at every layer to help customers unlock the full potential of their data, enabling developers to leverage the power of generative AI against their data and assisting business users to find insights in their data. This feature is in preview.

Azure Cosmos DB

  • Dynamic scaling per partition/region, now in preview for new Azure Cosmos DB accounts, will allow customers to optimize for scale and cost in situations where partitioning is used to scale individual containers in a database to meet the performance needs of applications, or where multi-region configuration of Azure Cosmos DB is used for global distribution of data.
  • Microsoft Copilot for Azure integration in Azure Cosmos DB, now in preview, will bring AI into the Azure Cosmos DB developer experience. Specifically, this release enables developers to turn natural language questions into Azure Cosmos DB NoSQL queries in the query editor of Azure Cosmos DB Data Explorer. This new feature will increase developer productivity by generating queries and written explanations of the query operations as they ask questions about their data.
  • Azure Cosmos DB for MongoDB vCore, now generally available, allows developers to build intelligent applications in Azure with MongoDB compatibility. With Azure Cosmos DB for MongoDB vCore, developers can enjoy the benefits of native Azure integrations, low total cost of ownership and the familiar vCore architecture when migrating existing applications or building new ones. Azure Cosmos DB for MongoDB vCore is also introducing a free tier, which is a developer-friendly way to explore the platform’s capabilities without any cost. Learn more about the free tier.
  • In addition, a new Azure AI Advantage offer will help customers realize the value of Azure Cosmos DB and Azure AI together. Benefits include:
    • Savings up to 40,000 RU/s for three months on Azure Cosmos DB when using GitHub Copilot or Azure AI, including Azure OpenAI Service.
    • World-class infrastructure and security to grow business and safeguard data.
    • Enhanced reliability of generative AI applications by leveraging the speed of Azure Cosmos DB to retrieve and process data.
  • Vector search in Azure Cosmos DB MongoDB vCore, now generally available, allows developers to seamlessly integrate their AI-based applications with the data stored in Azure Cosmos DB. Vector search enables users to efficiently store, index and query high-dimensional vector data, eliminating the need to transfer the data to more expensive alternatives for vector search capabilities, such as vector databases.

SQL Server

  • Monitoring for SQL Server enabled by Azure Arc, now in preview, will allow customers to gain critical insights into their entire SQL Server estate across on-premises datacenter and cloud, optimize for database performance and diagnose problems faster. With this monitoring tool, customers will be empowered to switch from a reactive operation mode to a proactive one, further improving database uptime while reducing routine workloads.
  • Enhanced high availability and disaster recovery (HA/DR) management for SQL Server enabled by Azure Arc is now in preview. With Azure Arc, customers can now improve SQL Server business continuity and consistency by viewing and managing Always On availability groups, failover cluster instances and backups directly from the Azure portal. This new capability will provide customers with better visibility and a much easier and more flexible way to configure critical database operations.
  • Extended Security Updates for SQL Server enabled by Azure Arc is now generally available. Extended Security Updates for SQL Server, which provide critical security updates for up to three years after the end of extended support, are now available as a service through Azure Arc. With the Extended Security Update service, customers running older SQL Server versions on-premises or in multicloud environments can manage security patches from the Azure portal. Extended Security Updates enabled by Azure Arc give financial flexibility with a pay-as-you-go subscription model.

Azure Infrastructure

AI

  • Custom-built silicon for AI and enterprise workloads in the Microsoft Cloud: Today, Microsoft is announcing new custom silicon that complements Microsoft’s offerings with industry partners. The two new chips, Microsoft Azure Maia and Microsoft Azure Cobalt, were built with a holistic view of hardware and software systems to optimize performance and price.
    • Microsoft Azure Maia is an AI Accelerator chip designed to run cloud-based training and inferencing for AI workloads, such as OpenAI models, Bing, GitHub Copilot and ChatGPT.
    • Microsoft Azure Cobalt is a cloud-native chip based on Arm architecture optimized for performance, power efficiency and cost-effectiveness for general-purpose workloads.
  • Azure Boost: One of Microsoft Azure’s latest and most significant infrastructure improvements, Azure Boost, is now generally available. Azure Boost enables greater network and storage performance at scale, improves security, and reduces servicing impact by moving virtualization processes traditionally performed by the host servers, such as networking, storage and host management, onto purpose-built hardware and software optimized for these processes. This innovation allows Microsoft to achieve the fastest remote and local storage performances in the market today, with a remote storage performance of 12.5 Gbps (gigabits per second) throughput and 650K IOPS (input/output operations per second) and a local storage performance of 17.3 Gbps throughput and 3.8M IOPS.
  • ND MI300 v5 virtual machines with AMD chips optimized for generative AI workloads: The ND MI300 v5 virtual machines are designed to accelerate the processing of AI workloads for high-range AI model training and generative inferencing, and will feature AMD’s latest GPU, the AMD Instinct MI300X.
  • NC H100 v5 virtual machines with the latest NVIDIA GPUs: The new NC H100 v5 Virtual Machine (VM) Series, in preview, is built on the latest NVL variant of the NVIDIA Hopper 100 (H100), which will offer greater memory per GPU. The new VM series will provide customers with greater performance, reliability and efficiency for mid-range AI training and generative AI inferencing. By maintaining more memory per GPU in the VM, customers increase data processing efficiency and enhance overall workload performance.

Azure Migrate

  • Azure Migrate, the service used to migrate to and modernize in Azure, is introducing discovery, business case analysis and assessment support for new workloads. This allows customers to analyze their configuration and compatibility for new use cases so they can determine appropriately sized Azure instances at optimal cost and without blockers.
  • Specific features in preview include Spring apps assessment, business case with management costs, business case and assessment with security, and Windows and SQL ESU in business case. Web apps assessment is generally available.

Azure IoT Operations

Azure IoT Operations is a new addition to the Azure IoT portfolio that will offer a unified, end-to-end Microsoft solution that digitally transforms physical operations seamlessly from the cloud to the edge.

That unified approach consists of the following:

  • Management plane: One control plane to secure and govern assets and workloads across cloud to edge with Azure Arc.
  • Application development: Consistently build and deploy apps anywhere, in the cloud or at the edge.
  • Cloud-to-edge data plane: Seamless integration at the data level from asset to cloud and back again.
  • Common infrastructure: Customers can connect investments in the cloud with their on-premises resources.

Learn more about Accelerating Industrial Transformation with Azure IoT Operations

Azure Management and Operations

Azure Chaos Studio

Azure Chaos Studio, now generally available, provides a fully managed experimentation platform for discovering challenging issues through experiment templates, dynamic targets and a more guided user interface.

Azure AI Services

Azure AI Studio

  • Microsoft is launching the preview of its unified AI platform, Azure AI Studio, which will empower all organizations and professional developers to innovate and shape the future with AI.

Azure AI Vision

  • Liveness functionality and Vision SDK: Liveness functionality will help prevent face recognition spoofing attacks and conforms to ISO 30107-3 PAD Level 2. Vision SDK for Face will enable developers to easily add face recognition and liveness to mobile applications. Both features are in preview.
  • Image Analysis 4.0: This API introduces cutting-edge Image Analysis models, encompassing image captioning, OCR, object detection and more, all accessible through a single, synchronous API endpoint. Notably, the enhanced OCR model boasts improved accuracy for both typed and handwritten text in images. Image Analysis 4.0 is generally available.
  • Florence foundation model: Trained with billions of text-image pairs and integrated as cost-effective, production-ready computer vision services in Azure AI Vision, this improved feature enables developers to create cutting-edge, market-ready, responsible computer vision applications across various industries. Florence foundation model is generally available.

Azure OpenAI Service

  • DALL·E 3: Imagine an AI model that can generate images from text descriptions. DALL·E 3 is a remarkable AI model that does just that. Users describe an image, and DALL·E 3 will be able to create it. DALL·E 3 is in preview.
  • GPT-3.5 Turbo model with a 16k token prompt length and GPT-4 Turbo: The latest models in Azure OpenAI Service will enable customers to extend prompt length and bring even more control and efficiency to their generative AI applications. Both models will be available in preview at the end of November 2023.
  • GPT-4 Turbo with Vision (GPT-4V): When integrated with Azure AI Vision, GPT-4V will enhance experiences by allowing the inclusion of images or videos along with text for generating text output, benefiting from Azure AI Vision enhancement like video analysis. GPT-4V will be in preview by the end of 2023.
  • GPT-4 updates: Azure OpenAI Service has also rolled out updates to GPT-4, including the ability for fine-tuning. Fine-tuning will allow organizations to customize the AI model to better suit their specific needs. It’s akin to tailoring a suit to fit perfectly, but in the world of AI. Updates to GPT-4 are in preview.

Azure AI Video Indexer

  • Video-to-text summary: Users will be able to extract the essence of video content and generate concise and informative text summaries. The advanced algorithm segments videos into coherent chapters, leveraging visual, audio and text cues to create sections that are easily accommodated in large language model (LLM) prompt windows. Each section contains essential content, including transcripts, audio events and visual elements. This is ideal for creating video recaps, training materials or knowledge-sharing.
  • Efficient Video Content Search: Users will be able to transform video content into a searchable format using LLMs and Video Indexer’s insights. By converting video insights into LLM-friendly prompts, the main highlights are accessible for effective searching. Scene segmentation, audio events and visual details further enhance content division, allowing users to swiftly locate specific topics, moments or details within extensive video.

This year’s Ignite was packed with lots of new announcements and features that I can’t wait to start using in my applications.

Enjoy!

Click here to read the Microsoft Ignite 2023 Book of News!

AI, Azure, Events, Learning

Microsoft Build 2023 Book of News

What is the Book of News? The Microsoft Build 2023 Book of News is your guide to the key news items that are announced at Build 2023.

As expected, there is a lot of focus on Azure and AI, followed by Microsoft 365, Security, Windows, and Edge & Bing. This year, the Book of News is interactive instead of being a PDF.

Some of my favourite announcements

Azure Cloud Native and Application Platform

Azure API Management

  • Azure API Center: A new service that will enable organizations to centralize and manage their portfolio of APIs, regardless of type, life cycle or deployment location. This update is in preview. Learn more about updates to Azure API Center.
  • WebSocket API passthrough: Allows users to manage, protect, observe and expose WebSocket APIs running in container environments with the API Management self-hosted gateway container. This is now generally available. Learn more about WebSocket API passthrough.
  • Self-hosted gateway support for Azure Active Directory (Azure AD) tokens: Users will be able to secure communication between the self-hosted gateway to Azure to download configuration using Azure AD tokens. This allows customers to avoid manually refreshing a gateway token that expires every 30 days. This update is generally available.

Azure Event Grid

  • Leverage HTTP to enable “pull” delivery of discrete events to provide more flexible consumption patterns at high scale.
  • Enable publish-subscribe via the MQTT protocol, enabling bidirectional communication at scale between Internet of Things (IoT) devices and cloud-based services.
  • Enable routing of MQTT data to other Azure services and third-party services for further data analytics and storage.

These updates are now in preview. Learn more about public preview of MQTT protocol and pull message delivery in Azure Event Grid.

Azure Functions

  • Deploy containerized Azure Functions in an Azure Container Apps environment to quickly build event-driven, cloud-native apps leveraging built-in Dapr integrations for distributed, microservice-based serverless apps.
  • Maximize developer velocity using Azure Functions integrated programming model, write code using preferred programming language or framework that Azure Functions supports and get the built-in service integrations with triggers and bindings for a first-class, event-driven, cloud-native experience.
  • Run Azure Functions alongside other microservices, APIs, websites, workflows or any containerized app using an Azure Container Apps environment built for robust serverless scale, microservices and fully managed infrastructure.

Azure Kubernetes Service

  • Long-term support is now generally available, starting with Kubernetes 1.27. Once enabled, this provides a two-year support window for a specific version of Kubernetes. Kubernetes delivers new releases every three to four months to keep up with the pace of innovation in the cloud-native world. To give enterprises more control over their environment, long-term support for Kubernetes enables customers to stay on the same release for two years – twice as long as what’s possible today. This is a long-awaited development in the cloud-native, open-source community for customers who need the additional option for a longer upgrade runway with continued support and security updates.
  • Transactable Kubernetes apps, now generally available, allow AKS customers to explore a vibrant ecosystem of first- and third-party Kubernetes-ready solutions from Azure Marketplace, and purchase and securely deploy them on AKS with easy click-through deployments. Conveniently integrated with Azure billing, these solutions are ready to use, taking advantage of all the benefits of running on a cloud-native platform like AKS.
  • Confidential containers in AKS, now in preview, are a first-party offering that will allow teams to run standard, unmodified containers aligned with the Kata Confidential Containers open-source project and achieve zero-trust operator deployments on AKS. Each pod runs in a trusted execution environment (TEE) and is assigned its own memory encryption key, providing hardware-based confidentiality and integrity protections. These containers can still be integrated with the typical services used by apps running on AKS, such as monitoring and logging, underscoring Microsoft’s focus on enterprise readiness for these workloads. Learn more about confidential containers in AKS.
  • The multi-cluster update in Azure Kubernetes Fleet Manager (Fleet), in preview, will enable multi-cluster and at-scale scenarios for AKS clusters. The new multi-cluster update feature gives teams the ability to orchestrate planned updates across multiple clusters for a consistent environment.

Azure Communication Services

  • A new set of application programming interfaces (APIs), generally available next month, will help developers build server-based, intelligent calling workflows into their apps and simplify the delivery of personalized customer engagement with additional AI capabilities from Azure Cognitive Services (a brief sketch follows this list).
  • Call automation interoperability with Microsoft Teams will be in preview next month for businesses that want to bring experts who use Teams into existing customer service calls. From daily appointment bookings and order updates to complex customer outreach for marketing and customer service, call automation with Azure Communication Services is changing the landscape of customer engagement.
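
As a rough sketch of what those server-side calling workflows look like in code, the snippet below answers an incoming call with the azure-communication-callautomation Python SDK. This is my assumption of the relevant client and method; the connection string, callback URL and event wiring are placeholders, and in practice the incoming-call context arrives via an Event Grid event.

```python
# Rough sketch: answer an incoming call with the Call Automation SDK
# (azure-communication-callautomation). Connection string and callback URL are
# placeholders; the incoming-call context is delivered by an Event Grid event.
from azure.communication.callautomation import CallAutomationClient

client = CallAutomationClient.from_connection_string("<acs-connection-string>")

def handle_incoming_call(incoming_call_context: str) -> None:
    # Answer the call and direct subsequent call events to our webhook,
    # where the calling workflow (play prompts, recognize input, etc.) is driven.
    client.answer_call(
        incoming_call_context=incoming_call_context,
        callback_url="https://contoso.example.com/api/call-events",  # placeholder webhook
    )
```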

Azure Data Platform

Microsoft Fabric

  • Microsoft Fabric, now in preview, delivers an integrated and simplified experience for all analytics workloads and users on an enterprise-grade data foundation. It brings together Power BI, Data Factory and the next generation of Synapse in a unified software as a service (SaaS) offering to give customers a price-effective and easy-to-manage modern analytics solution for the era of AI. Fabric has experiences for all workloads and data professionals in one place – including data integration, data engineering, data warehousing, data science, real-time analytics, applied observability and business intelligence – to increase productivity like never before.
  • To further help organizations accelerate value creation with their data, Microsoft is integrating Copilot in Microsoft Fabric, in preview soon. Copilot brings natural language and a chat experience to generate code and queries, create AI plugins using a low/no-code experience, enable custom Q&A, tailor semantics and components within the plugin, and deploy to Microsoft Teams, Power BI and the web. With AI-driven insights, customers can focus on telling the right data story and let Copilot do the heavy lifting.
  • Organizational data is hosted on Microsoft’s unified foundation, OneLake, which provides a single source of truth and reduces the need to extract, move or replicate data, helping eliminate rogue data sprawl. Fabric also enables persistent data governance and a single capacity pricing model that scales with growth, and it’s open at every layer with no proprietary lock-ins. Deep integrations with Microsoft 365, Teams and AI Copilot experiences accelerate and scale data value creation for everyone. From data professionals to non-technical business users, Fabric has role-tailored experiences to empower everyone to unlock more value from data.

Azure Cosmos DB

  • Burst capacity: Developers can achieve better performance and productivity with burst capacity, which allows customers to utilize the idle throughput capacity of their database or container to handle traffic spikes. Databases using standard provisioned throughput with burst capacity enabled will be able to maintain performance during short bursts when requests exceed the throughput limit. This gives customers a cushion if they’re under-provisioned and allows them to experience fewer rate-limited requests. This update is generally available.
  • Hierarchical partition keys: More efficient partitioning strategies and improved performance are made possible by hierarchical partition keys, which enable up to three partition keys to be used instead of one. This removes the performance trade-offs that developers often face when having to choose a single partition key and enables more optimal data distribution and high scale. This is generally available (see the sketch after this list).
  • Materialized Views for Azure Cosmos DB for NoSQL: With Materialized Views, now in preview, users will be able to create and maintain secondary views of their data in containers that serve queries that would be too expensive to run against the source container. Materialized Views make it easy to keep data synchronized between the two containers.
  • Azure Cosmos DB “All versions and deletes” change feed mode: Developers will be able to get a full view of changes to items occurring within the continuous backup retention period of their account, saving time and reducing app complexity. This is in preview.
  • .NET and Java SDKs Telemetry + App Insights: Monitoring apps will be easier with this update, now in preview. The Azure Cosmos DB .NET and Java SDKs support distributed tracing to help developers easily monitor their apps and troubleshoot issues, thereby improving performance and developer productivity.
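
To make the hierarchical partition key idea concrete, here is a brief sketch using the azure-cosmos Python SDK (assuming a 4.x version with multi-hash support). The endpoint, key, database, container and key paths are placeholders I picked for illustration.

```python
# Brief sketch: create a container with a hierarchical (multi-hash) partition key
# of up to three levels using the azure-cosmos Python SDK. Endpoint, key, database
# and path names are placeholders.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(url="https://<account>.documents.azure.com:443/", credential="<key>")
database = client.create_database_if_not_exists("sales")

container = database.create_container_if_not_exists(
    id="orders",
    # MultiHash distributes data by the combination of keys, avoiding the
    # single-partition-key trade-off described in the bullet above.
    partition_key=PartitionKey(path=["/tenantId", "/customerId", "/orderId"], kind="MultiHash"),
)

container.create_item({
    "id": "1",
    "tenantId": "contoso",
    "customerId": "c42",
    "orderId": "o1001",
    "total": 129.99,
})
```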

Windows Platform

  • After listening to developer feedback, Microsoft created a home for developers on Windows with a renewed focus on productivity and performance across all stages of the development lifecycle. These features, now in preview, include:
  • Dev Home will allow users to quickly set up their machines, connect to GitHub and monitor and manage workflows in one central location. Dev Home is open source and extensible, allowing users to enhance their experience with a customizable dashboard and the tools they need to be successful. Users can also add GitHub widgets to track projects and system widgets to track CPU and GPU performance.
  • Windows Package Manager now includes WinGet configuration, which handles the setup requirements for an ideal development environment on a Windows machine, reducing device setup time from days to hours. Developers no longer need to worry about searching for the right versions of software, packages, tools or frameworks to download, or which settings to apply: WinGet configuration reduces this manual and error-prone process to a single command driven by a WinGet configuration file.
  • Dev Drive is a new type of storage volume designed to give developers a file system that meets their needs for both performance and security. It is based on the Resilient File System (ReFS) and, combined with a new performance mode capability in Microsoft Defender Antivirus, provides up to a 30% improvement in build times for file input/output (I/O) scenarios over the in-market version of Windows 11. The new performance mode is more secure for developer workloads than folder or process exclusions, providing a solution that balances security with performance.
  • Windows Terminal is getting smarter with GitHub Copilot X. Users of GitHub Copilot will be able to take advantage of natural language AI both inline and in an experimental chat experience to recommend commands, explain errors and take actions within the Terminal app. Microsoft is also experimenting with GitHub Copilot-powered AI in other developer tools like WinDBG to help developers complete tasks with less toil.

Click here to read the Microsoft Build 2023 Book of News!

Enjoy!

AzureThis week on Azure Friday

Azure Confidential Computing with Confidential VMs and AKS nodes | This week on Azure Friday

In this episode of Azure Friday, Amar Gowda joins Scott Hanselman to show how Azure Confidential Computing protects data in use and helps you achieve data security and data privacy goals within a managed cloud environment. Confidential VMs protect VM and container workloads with memory encryption and code integrity. Attestation lets you remotely verify that the entire VM is running in a hardware-based Trusted Execution Environment (TEE).
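
The episode’s demo uses the Azure CLI; as a rough Python-flavoured sketch of what makes a VM “confidential”, the snippet below builds the relevant security settings with the azure-mgmt-compute model classes (my assumption of the model names). The rest of the VM definition (image, NIC, OS profile) is omitted.

```python
# Rough sketch of the settings that mark a VM as "confidential" with azure-mgmt-compute.
# Not the episode's demo; the remaining VM properties are omitted for brevity.
from azure.mgmt.compute.models import (
    SecurityProfile, UefiSettings, OSDisk, ManagedDiskParameters, VMDiskSecurityProfile,
)

# Confidential VM: hardware-based TEE with secure boot and a virtual TPM enabled.
security_profile = SecurityProfile(
    security_type="ConfidentialVM",
    uefi_settings=UefiSettings(secure_boot_enabled=True, v_tpm_enabled=True),
)

# Encrypt the VM guest state so data in use stays protected on the OS disk.
os_disk = OSDisk(
    create_option="FromImage",
    managed_disk=ManagedDiskParameters(
        security_profile=VMDiskSecurityProfile(security_encryption_type="VMGuestStateOnly")
    ),
)

# These objects are then included in the VirtualMachine definition passed to
# ComputeManagementClient.virtual_machines.begin_create_or_update(...).
```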

Chapters

  • 00:00 – Introduction
  • 01:20 – Azure Confidential Computing
  • 04:00 – Creating a confidential VM from CLI demo
  • 06:28 – Attestation sample client to verify hardware
  • 12:00 – Extensions and CVM on AKS support
  • 13:09 – AKS demo
  • 20:00 – Wrap-up

Source: Azure Friday

Resources