Earlier this week, Microsoft released TypeScript 6.0. This is a major milestone for the language, not because of what it adds, but because it represents the final major version built on the existing JavaScript-based codebase. Starting with TypeScript 7.0, the language is heading into a new era.
A Release Designed for Transition
According to Microsoft’s announcement, TypeScript 6.0 is primarily focused on preparing developers for the upcoming architectural shift. Beginning with version 7.0, the TypeScript team will:
Rewrite the compiler and language tooling in Go
Deliver native performance improvements
Introduce shared-memory multithreading
Move away from the legacy JavaScript implementation entirely
This makes 6.0 less of a feature-driven release and more of a bridge to the future.
What’s New in TypeScript 6.0
While transitional in nature, the release still includes several meaningful updates:
Updated DOM types to align with the latest web standards
Improved inference for contextually sensitive functions
Support for subpath imports, enabling cleaner module resolution (see the sketch after this list)
A new migration-assist flag to help developers prepare for the 6.0 to 7.0 upgrade path
These improvements aim to smooth the road ahead as the ecosystem prepares for the Go-based compiler.
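To make the subpath imports item concrete, here's a minimal sketch (the alias, paths, and module names are hypothetical, not from Microsoft's announcement). Subpath imports are declared in package.json and give you stable, package-internal aliases instead of long relative paths:

{
  "name": "my-app",
  "imports": {
    "#utils/*": "./src/utils/*.ts"
  }
}

Then, anywhere in the project:

import { formatDate } from "#utils/date";

With this release, the compiler can resolve these aliases natively, so editor navigation and type checking follow them like ordinary imports.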
Deprecations
Microsoft notes that several features are now deprecated in 6.0 and will be fully removed in TypeScript 7.0. These changes reflect the evolving JavaScript ecosystem and the need to modernize the language’s foundations. Developers can still use deprecated features in 6.0, but they should expect migration work before adopting 7.0.
In my Running and Building Azure Functions with Modern .NET talk this week at the Mississauga .NET User Group, one of the topics that consistently gets a reaction from the audience is Central Package Management (CPM). Once I show people what it does, the reaction is almost always the same: “I didn’t know this existed — I need to go add this to all my solutions.”
This post is the written companion to that section of the talk. If you’ve ever dealt with the pain of keeping NuGet package versions in sync across a large solution, CPM is going to be immediately relevant to you.
What is Central Package Management?
Central Package Management is a built-in .NET feature that lets you define all NuGet package versions in a single place — a Directory.Packages.props file at the solution root — rather than scattering Version attributes across individual .csproj files.
Here’s a simple example of what that file looks like:
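A minimal sketch of the idea (the package names and versions are illustrative, not from the original post):

<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <PackageVersion Include="Newtonsoft.Json" Version="13.0.3" />
    <PackageVersion Include="Serilog" Version="4.0.0" />
  </ItemGroup>
</Project>

Each project then references packages without a Version attribute:

<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" />
</ItemGroup>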
The version is automatically resolved from the central file. Clean, simple, and immediately obvious what version you’re on when you look at the central file.
Why This Matters
In a solution with many projects, keeping package versions in sync manually is error-prone. You end up with Project A on Newtonsoft.Json 13.0.1 and Project B on 13.0.3 without anyone really noticing until a subtle runtime difference bites you. CPM solves this by making version drift impossible — there’s one place to look and one place to change.
Benefits at a glance:
Single source of truth for all package versions in the solution
Eliminates version drift across projects
Cleaner .csproj files — no version attributes cluttering your package references
Supports both direct and transitive dependency version control
Manually Migrate an Existing Solution to CPM
If you already have a solution with a bunch of projects, migrating to CPM is straightforward but can be tedious to do by hand across many .csproj files. Here’s the approach:
Step 1: Create Directory.Packages.props
At the root of your solution, create a Directory.Packages.props file with ManagePackageVersionsCentrally set to true and consolidate all your package versions as <PackageVersion> entries:
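A sketch of what the consolidated file might look like (the packages and versions are placeholders; use whatever your projects actually reference):

<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <PackageVersion Include="Microsoft.Azure.Functions.Worker" Version="2.0.0" />
    <PackageVersion Include="Microsoft.Azure.Functions.Worker.Sdk" Version="2.0.0" />
    <PackageVersion Include="Newtonsoft.Json" Version="13.0.3" />
  </ItemGroup>
</Project>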
Step 2: Remove Version Attributes from Project Files
Strip the Version attribute from every <PackageReference> in your .csproj files. If you have many projects, use the CentralisedPackageConverter CLI tool (covered below) rather than doing this by hand. For a single reference, the change looks like this:
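<!-- Before: version specified inline in the .csproj (package name illustrative) -->
<PackageReference Include="Newtonsoft.Json" Version="13.0.3" />

<!-- After: the version now comes from Directory.Packages.props -->
<PackageReference Include="Newtonsoft.Json" />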
Step 3: Validate and Build
Build your solution and confirm everything resolves correctly. Pay attention to any packages that may have conflicting versions across projects — those will need a conscious decision about which version to standardize on.
This is great, but there must be a better way. Let’s take a look at a community tool that automates this process.
Using the CentralisedPackageConverter CLI Tool
The best way to handle the migration for an existing solution is the CentralisedPackageConverter CLI tool, which automates the tedious parts:
Generate a Directory.Packages.props with all discovered versions
Remove version attributes from individual project files
I highly recommend using this over a manual migration. It’s fast and reduces the chance of missing something.
Let’s try this out.
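A sketch of the flow, assuming the install and command names from the tool's README (verify against the project page, as these may change):

dotnet tool install --global CentralisedPackageConverter
central-pkg-converter .

The first command installs the converter as a global .NET tool; the second runs it against the current solution folder, generating the central file and rewriting the project files in place.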
We now have a Directory.Packages.props with all discovered versions:
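Something along these lines, with an entry for every package the tool discovered (names and versions below are illustrative):

<Project>
  <PropertyGroup>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
  </PropertyGroup>
  <ItemGroup>
    <PackageVersion Include="Microsoft.Azure.Functions.Worker" Version="2.0.0" />
    <PackageVersion Include="Newtonsoft.Json" Version="13.0.3" />
  </ItemGroup>
</Project>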
And if we look in our project files, the versions are removed:
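For example (illustrative names again):

<ItemGroup>
  <PackageReference Include="Microsoft.Azure.Functions.Worker" />
  <PackageReference Include="Newtonsoft.Json" />
</ItemGroup>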
Advanced Features
Once you’re on CPM, there are a few additional capabilities worth knowing about.
Overriding Versions per Project
There are situations where a specific project needs a different version of a package than the rest of the solution — say, a legacy integration that can’t move to the latest version yet. CPM handles this with VersionOverride in the individual project file:
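A minimal sketch (the package name and versions are illustrative; VersionOverride itself is the real CPM attribute):

<ItemGroup>
  <!-- This project stays on an older version while the rest of the solution uses the central one -->
  <PackageReference Include="Newtonsoft.Json" VersionOverride="12.0.3" />
</ItemGroup>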
Use this sparingly. Its presence in a project file is a signal that something needs attention.
Different Versions per Target Framework
If you have a multi-targeted project, you can conditionally apply different versions by target framework using standard MSBuild conditions within Directory.Packages.props:
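For example (the package name is illustrative; the Condition attribute is standard MSBuild):

<ItemGroup Condition="'$(TargetFramework)' == 'net8.0'">
  <PackageVersion Include="Some.Library" Version="8.0.0" />
</ItemGroup>
<ItemGroup Condition="'$(TargetFramework)' == 'net9.0'">
  <PackageVersion Include="Some.Library" Version="9.0.0" />
</ItemGroup>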
Pinning Transitive Dependencies
This one quietly solves a very real problem. Version drift caused by transitive dependencies (packages your packages depend on) is easy to miss and can cause subtle compatibility issues. CPM supports pinning transitive dependencies centrally, so you stay in control of the full dependency graph, not just the packages you reference directly.
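Enabling it is a single property in Directory.Packages.props:

<PropertyGroup>
  <CentralPackageTransitivePinningEnabled>true</CentralPackageTransitivePinningEnabled>
</PropertyGroup>

With this set, any <PackageVersion> entry also pins that package wherever it appears transitively in the graph.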
Summary
Central Package Management is one of those features that makes you wonder how you managed without it once you adopt it. If you’re running a multi-project .NET solution, this is a straightforward improvement with immediate payoff — consistent package versions, cleaner project files, and a single place to make dependency updates.
In my Running and Building Azure Functions with Modern .NET talk this week at the Mississauga .NET User Group, I closed out the .NET tooling section with a quick look at the new .slnx solution file format. It’s one of those changes that doesn’t get a lot of attention but makes day-to-day .NET development noticeably more pleasant — especially if you’ve ever dealt with a gnarly merge conflict in a .sln file.
This post walks through what .slnx is, why it’s better than the traditional .sln format, and how to migrate.
What’s Wrong with .sln?
If you’ve worked with Visual Studio solutions for any length of time, you’ve almost certainly encountered the pain points with .sln files. They’ve been around since the early 2000s and they work — but they come with a set of frustrations that have never really been addressed:
They’re nearly unreadable — the format uses GUIDs and magic identifiers that aren’t intuitive at all
Merging is painful — a .sln file being touched by two developers at the same time is a merge conflict waiting to happen
Tooling struggles with them — custom scripts or CI tooling that needs to parse a .sln file often resorts to fragile string manipulation
Here’s a snippet of a traditional .sln to illustrate:
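The snippet below is representative rather than taken from a real solution (actual files carry even more GUIDs and global sections):

Microsoft Visual Studio Solution File, Format Version 12.00
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "MyApp", "MyApp\MyApp.csproj", "{3B2C1A7E-1111-2222-3333-444455556666}"
EndProject
Global
    GlobalSection(SolutionConfigurationPlatforms) = preSolution
        Debug|Any CPU = Debug|Any CPU
        Release|Any CPU = Release|Any CPU
    EndGlobalSection
EndGlobal

Now compare that with the same solution expressed in the new .slnx format (again a minimal, representative example):

<Solution>
  <Project Path="MyApp/MyApp.csproj" />
</Solution>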
That’s it. No GUIDs. No magic identifiers. No indecipherable global sections. Just a clean XML file that clearly describes what’s in the solution.
Why It’s Better
Better readability — the XML structure is immediately understandable. Any developer can open a .slnx file and know exactly what’s in the solution without reverse-engineering the format.
Simplified merging — because it’s XML with simple, meaningful elements rather than a blob of GUIDs and platform strings, merge conflicts in .slnx files are much easier to resolve. In most cases, merging two developers’ changes to the solution file becomes a trivial diff.
Easier parsing — if you write build scripts, CI automation, or any tooling that needs to know what projects are in a solution, parsing a .slnx file is now just standard XML parsing. Reliable and straightforward.
Converting from .sln to .slnx
The .NET CLI makes the migration trivially easy. From your solution folder, run:
dotnet sln migrate
From Visual Studio 2022
With the solution open, go to File > Save Solution As… and select the XML Solution File format.
Before You Migrate
A few things to check before running the migration:
Commit your existing .sln file to source control before converting — gives you a clean rollback point if anything looks off.
Make sure your entire team is on a compatible toolchain — .NET SDK 9.0.200 or later, and a recent version of Visual Studio 2022 or Visual Studio 2026. Anyone still on an older setup won’t be able to open the solution until they update.
If you have CI/CD pipelines that parse or manipulate the .sln file directly (not uncommon in larger teams), update those to handle .slnx before you switch over.
Should You Migrate?
My honest take is yes, especially for new solutions. For existing solutions, there’s no urgency — your .sln files aren’t going anywhere — but if you’re already bumping into merge conflict pain or your solution file is getting unwieldy, the migration is trivially easy, and the payoff is immediate.
For new projects, I’d start with .slnx from day one. In fact, if you’ve moved to Visual Studio 2026, it’s now the default option. It’s the better format, it’s supported by all current tooling, and you’ll thank yourself later.
Summary
The .slnx solution file format is a small change with a noticeably positive impact on day-to-day .NET development. XML-based, human-readable, merge-friendly, and trivially easy to adopt — there’s very little reason not to switch. Combined with Central Package Management and the new Azure Functions FunctionsApplication builder, these three improvements together represent a meaningfully cleaner modern .NET development experience.
José Simões’ new book, Embedded Systems with nanoFramework, is a milestone for anyone who’s ever wanted to bring the power and comfort of C# into the world of microcontrollers. What I love about this work is how it breaks down the traditional barriers of embedded development—complex toolchains, steep learning curves, and hardware‑specific code—and replaces them with a modern, flexible, developer‑friendly approach.
At its core, the book shows how the .NET nanoFramework lets you build IoT and embedded solutions quickly, cleanly, and affordably. You can prototype in hours, adapt to customer needs on the fly, and move across hardware platforms such as the ESP32 with minimal friction.
José brings deep experience as the founder of the nanoFramework and a multi‑year Microsoft MVP, and it shows.
For a deeper dive on the book, check out Sander’s post, where he goes into more depth.
Last week, on December 4th, I presented at the Metro Toronto Azure Community meetup, discussing Microsoft’s announcements for Azure IoT Operations (AIO) at Ignite 2025. It was a fantastic evening, with fellow MVPs Cliff Agius, Sander Van De Velde, Pete Gallagher, and José Simões also taking the stage with their own sessions on various Azure IoT topics.
IoT is a hobby interest of mine, so I genuinely enjoy keeping an eye on what’s happening in this space. When the Ignite announcements dropped, I was already deep in the details, which made putting the session together a lot of fun. This post is the written companion to that talk — a handy reference if you attended and want to revisit anything, or a full walkthrough if you missed it.
What is Azure IoT Operations?
If you’re new to Azure IoT Operations, let me give you a quick grounding before we jump into the announcements. AIO is AI-ready infrastructure for intelligent, adaptive operations. I describe it as more than a data pipeline — it serves as the foundation for integrating AI into the physical world. It enables systems that can perceive, reason, and act, which is precisely what modern industrial environments need to drive real operational efficiency.
What makes AIO stand out:
Built on Arc-enabled Kubernetes, ensuring a consistent management plane whether you’re on-premises, at the edge, or in the cloud
Unifies OT and IT data across distributed sites — effectively breaking down those frustrating silos between operational and business systems
Provides a repeatable, scalable platform that you can deploy across sites without starting from scratch each time
Extends familiar Azure management concepts to physical locations, which is significant for teams that are already acquainted with Azure.
Ignite 2025 Announcements at a Glance
The Ignite 2025 announcements for Azure IoT Operations are centered on three significant themes:
New edge-to-cloud orchestration capabilities
Tighter integration with Microsoft Fabric and Foundry
AI-driven observability and governance tools
Let’s explore each of the specific features that were announced.
Wasm-Powered Data Graphs
Azure IoT Operations now supports WebAssembly (Wasm)-powered data graphs, delivering fast, modular analytics right at the edge — eliminating the need to round-trip data to the cloud to get a decision back.
Wasm’s lightweight, sandboxed execution model is a natural fit for edge environments where compute is constrained, and every millisecond of latency matters. The modular nature of data graphs allows you to compose them from reusable pieces and deploy them consistently across diverse hardware profiles. For industrial scenarios requiring near real-time responses, this represents a significant advancement.
Expanded Connector Support
This release expands the connector library significantly. The newly supported connectors include:
OPC UA: Industrial automation and SCADA systems
ONVIF: IP-based physical security cameras and devices
REST/HTTP: General-purpose web API integration
Server-Sent Events (SSE): Real-time event streaming from HTTP sources
Direct MQTT: Lightweight pub/sub messaging for IoT devices
This expanded set is a big deal for organizations that need to bridge industrial OT environments with modern IT systems without building custom middleware for every integration.
Data Flows Now Support OpenTelemetry
This is one of those updates that might not make headlines, but practitioners will appreciate it immediately. AIO data flows now include native OpenTelemetry (OTel) endpoint support.
OpenTelemetry has become the de facto standard for distributed tracing, metrics, and logging across the industry. Having AIO speak OpenTelemetry natively means you can route telemetry from edge devices directly into whatever observability platform you’re already using — Azure Monitor, Grafana, Datadog, you name it — without any additional transformation layers. Cleaner pipelines, less glue code.
Device Support in Azure Device Registry
Azure Device Registry (ADR) got a meaningful upgrade here: devices are now treated as first-class resources within ADR namespaces.
In practice, this means:
You can logically isolate devices within namespaces — critical for multi-tenant or multi-site deployments where you need clear boundaries
RBAC can be applied at scale, so the right teams get the right level of access to the right devices without ad hoc workarounds
Device management now aligns with the same resource model used everywhere else in Azure, which makes governance much more consistent
Automatic Device and Asset Discovery
If you’ve ever had to manually provision devices across a large factory floor, you know how painful it can be. This announcement addresses that head on. AIO now includes Akri-powered automatic discovery and onboarding:
Continuously detects devices and industrial assets that appear on the network
Automatically provisions and onboards newly discovered devices
Gets telemetry flowing with minimal manual setup
For large-scale deployments, this can dramatically compress rollout timelines and free up your team from repetitive provisioning work. It’s the kind of operational improvement that compounds over time.
Microsoft Named a Leader in the 2025 Gartner® Magic Quadrant
I want to close the announcements on a high note. At Ignite 2025, Microsoft shared that it had been named a Leader in the 2025 Gartner® Magic Quadrant for Global Industrial IoT Platforms. As someone who works closely in this space, I think this recognition is well-deserved and reflects how much AIO has matured as a platform over the past couple of years.
Summary
Azure IoT Operations is moving fast, and the Ignite 2025 announcements show that Microsoft is serious about making it the go-to platform for intelligent, AI-driven operations at the edge. From Wasm-powered analytics and a broader connector library, to native OTel support and automated device discovery, there’s something here for nearly every team working in the industrial IoT space. I’m excited to see what comes next.
The Microsoft AI Skills Fest is a 50-day learning event, running from April 8 until May 28, 2025. It’s all about leveling up AI skills with tailored content for tech pros, business managers, students, and public sector workers. Highlights include:
AI Agents for Tech Pros – Dive into Azure AI Foundry & GitHub Copilot to create and fine-tune AI-driven tools.
AI for Business Managers – Learn practical AI strategies that boost workplace efficiency; the track leads to a professional certificate.
Students & AI Literacy – Minecraft Education’s Fantastic Fairgrounds introduces AI concepts in an engaging way.
AI in the Public Sector – Explore responsible AI, security, and decision-making tools to improve government services.
Skill Challenges – Compete with global learners in AI challenges.
Certification Discounts – Massive exam discounts & a chance to grab one of 50,000 free vouchers.
Training Sessions – Exclusive workshops from Microsoft Training Services Partners.
Festival Guide – A map to explore AI zones and experiences.
I received an exciting email from Microsoft this month: I have been re-awarded, for the 7th year, the 2024–2025 Microsoft Most Valuable Professional (MVP) award in Azure (Cloud Native). Receiving the Microsoft MVP award is both a humbling and exciting experience. It means you’re a member of a select group of just over 3,000 MVPs from around the world. Still, I like to think of it as doing something I’m passionate about with other like-minded individuals, having fun, and always having something new to learn and share with the community.
“The Microsoft MVP Award is an annual award that recognizes exceptional technology community leaders worldwide who actively share their high-quality, real-world expertise with users and Microsoft. All of us at Microsoft recognize and appreciate Callon’s extraordinary contributions and want to take this opportunity to share our appreciation with you.” – The Microsoft Most Valuable Professional (MVP) Award Team Microsoft Corporation
If you’re interested in learning about the Microsoft MVP program and seeing what it takes to become a Microsoft MVP, or how to get awarded, I encourage you to take a look at the Microsoft MVP website and also the following article on “How to become a Microsoft MVP” where they explain some of the details of the program.
To wrap up this post, I would like to congratulate all the other newly awarded or renewed Microsoft MVPs around the world! You truly are an amazing community, and I’m humbled and honoured to be part of this group.
This week, Microsoft announced the public preview of geo-replication for Azure Event Hubs. Geo-replication enhances Microsoft Azure data availability and geo-disaster recovery capabilities by enabling the replication of Event Hubs data payloads across different Azure regions.
With geo-replication, your client applications continue to interact with the primary namespace. Customers can designate a secondary region, choose replication consistency (synchronous or asynchronous), and set replication lag for the data. The service handles the replication between primary and secondary regions. If a primary change is needed (for maintenance or failover), the secondary can be promoted to primary, seamlessly servicing all client requests without altering any configurations (connection strings, authentication, etc.). The former primary then becomes the secondary, ensuring synchronization between both regions.
In summary, geo-replication is designed to provide you with the following benefits:
High availability: You can ensure that your data is always accessible and durable, even in the event of a regional outage or disruption. You can also reduce the impact of planned maintenance events by switching to the secondary region before the primary region undergoes any updates or changes.
Disaster recovery: You can recover your data quickly and seamlessly in case of a disaster that affects your primary region. You can initiate a failover to the secondary region and resume your data streaming operations with minimal downtime and data loss.
Regional compliance: You can meet the regulatory and compliance requirements of your industry or region by replicating your data to a secondary region that complies with the same or similar standards as your primary region. You can also leverage the geo-redundancy of your data to support your business continuity and resilience plans.
How to get started with Azure Event Hubs Geo-replication?