This week Microsoft is kicking off its Ignite 2018 conference in Orlando, Florida, taking place September 24-29, 2018. If you’re like me and not there in person, you can still watch all the keynotes and sessions live online, or catch them later on-demand. Some of the expected big highlights will be around AI, Azure, IT, Microsoft 365, data and so much more.
From high-level strategy and deep product dives to hands-on labs and immersive experiences—the insights and connections you’ll gain at Microsoft Ignite are great for your company and your career.
Join the conversation and follow along online with the Twitter hashtag #MSIgnite
Microsoft Ignite 2018
What is a Cloud Developer Advocate? They’re a global group of passionate developers who advocate for developers and help solve problems with the cloud. Jeremy Likness, himself a Cloud Developer Advocate, wrote up a great post on what a Cloud Developer Advocate is that is well worth reading. These folks are here to help and support you, so don’t be shy about reaching out and connecting with them. I’ve learned so much from them and have had the pleasure of meeting and speaking with them in person at a few events, like the Microsoft Tech Summit, Microsoft Build and the Global Azure Summit.
To see a list of the current Cloud Developer Advocates and learn how you can reach out and connect with them, please head over to the Cloud Developer Advocates page.
Azure Advocates on Twitter
Cloud Developer Advocates Website
What is a Cloud Developer Advocate?
Thinking about getting started with AI can be daunting. Thankfully, there is a free e-book, A Developer’s Guide to Building AI Applications, available to help get you started.
Artificial intelligence (AI) is accelerating the digital transformation for every industry, with examples spanning manufacturing, retail, finance, healthcare, and many others. At this rate, every industry will be able to use AI to amplify human ingenuity. In this e-book, Anand Raman and Wee Hyong Tok from Microsoft provide a comprehensive roadmap for developers to build their first AI-infused application.
This e-book provides an easy introduction to the tooling, infrastructure, and services provided by the Microsoft AI Platform for creating powerful, intelligent applications. With this e-book you will learn the key ingredients needed to develop an intelligent chatbot. In addition, you will:
- Understand how the intersection of cloud, data, and AI is enabling organizations to build intelligent systems.
- Learn the tools, infrastructure, and services available as part of the Microsoft AI Platform for developing AI applications.
- Teach the Conference Buddy application new AI skills, using pre-built AI capabilities such as vision, translation, and speech.
- Learn about the Open Neural Network Exchange.
Download your copy now.
The Insider Dev Tour is coming to Toronto on June 25, 2018, and will be held at the Microsoft Canada office in Mississauga, Ontario. This is a full-day event where you can come and learn about Machine Learning (ML), Modern Desktop Apps, Fluent Design, Artificial Intelligence, Progressive Web Apps (PWA), Microsoft Graph, Teams, Mixed Reality, Extending Office 365, and so much more! For the Toronto event, register for free at https://www.insiderdevtour.com/Toronto?ocid2=spark. For other locations, see http://aka.ms/idevtour.
The Insider Dev Tour is for developers interested in building Microsoft 365 experiences today using the latest developer technologies, as well as for those who want a peek into the future. If you can read code, this is for you, whether you are a beginner, expert, student, or hobbyist developer.
The tour is a great opportunity to connect directly with leads and engineers from Microsoft (Redmond), as well as regional industry leads and Microsoft Developer MVPs. If you missed out on attending the Microsoft Build 2018 conference then this is a great opportunity to follow up on some of that same content.
Register today for free at https://www.insiderdevtour.com/Toronto?ocid2=spark.
Enjoy and I hope to see you there.
Today’s keynote by Joe Belfiore focused on Multi-sense + Multi-device for Microsoft 365, which spans Windows, Office and EMS. Highlights included:
- Fluent Design System updates.
- UWP XAML Islands, which lets you incorporate UWP into WinForms, WPF and Win32 applications. This also means you can start to bring in the Fluent Design System into these UI frameworks.
- Windows UI Library, which delivers native platform controls as NuGet packages instead of being tied to the OS version. This will work on the Windows 10 Anniversary Update and newer.
- .NET Core 3.0, which will support side-by-side runtimes, along with support for WinForms, WPF and UWP.
- MSIX, which is dubbed the best technology for installing applications on Windows. This inherits the Universal Windows Platform (UWP) features, works across Enterprise or Store distributions, and supports all Windows applications.
- Windows SDK Insider Preview – https://aka.ms/winsdk
- A new developer revenue-sharing model. Developers will get 85% of revenue when their app is discovered through the Microsoft Store, and 95% when they bring customers directly to their app in the Microsoft Store themselves.
- Microsoft Launcher on Android will support Timeline for cross-device application launching. On iOS this will be supported through Microsoft Edge.
- A new “Your Phone” experience coming soon to Windows 10 that enables you to see your connected phone’s text messages, photos and notifications, and interact with them without having to pick up your phone. Really neat experience. Now if only it supported Windows 10 Mobile.
- Microsoft Sets was officially shown, demonstrating an easier way to organize your work and get back to where you left off when ready. That means no longer needing 25+ tabs open in Chrome or Edge. Nice!
- Adaptive Cards is being added to Microsoft 365, which will enable developers to create rich interactive content within conversations. They demonstrated a GitHub Adaptive Card for Outlook (365) where you could comment and close an issue. Another example shown was paying for your invoice from an email.
- There was a lot of buzz for Microsoft Graph, which is core to the Microsoft 365 platform. Microsoft Graph helps developers connect the dots between people, schedules, conversations, and content within the Microsoft cloud.
- Cortana and Alexa are starting to speak to one another. Sometime in the future you will be able to access your Alexa device through Windows 10, and likewise speak to Cortana on an Amazon Echo.
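Of the items above, Adaptive Cards is the easiest to get a feel for, since a card is just a declarative JSON payload that the host application renders with its own styling. As a rough illustration (the schema and element types come from the Adaptive Cards format; the issue content is made up), a card that lets you comment on and close an issue could look like:

```json
{
  "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
  "type": "AdaptiveCard",
  "version": "1.0",
  "body": [
    { "type": "TextBlock", "text": "Issue: Fix sign-in crash", "weight": "bolder" },
    { "type": "TextBlock", "text": "Opened by octocat", "isSubtle": true },
    { "type": "Input.Text", "id": "comment", "placeholder": "Add a comment", "isMultiline": true }
  ],
  "actions": [
    { "type": "Action.Submit", "title": "Comment and close" }
  ]
}
```

Because the host (Outlook, in the GitHub demo) owns the rendering, one card definition can work across different Microsoft 365 surfaces.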
Modernizing applications for our multi-sense, multi-device world
Microsoft 365 empowers developers to build intelligent apps for where and how the world works
This is my first time attending the annual Microsoft Build conference, taking place in Seattle, WA. I have to tell you that so far I’m not disappointed. Here are some of the highlights from today’s events:
- Azure is becoming the world’s computer: Azure | Azure Stack | Azure IoT Edge | Azure Sphere.
- The Azure IoT Edge runtime, which runs on Windows or Linux, is now being open sourced.
- Microsoft showed off Cortana and Alexa integration which was pretty cool.
- New Azure AI infrastructure announced: Project Brainwave, which delivers real-time AI on cloud and edge devices.
- Announced Project Kinect for Azure, an Azure AI-enabled edge device.
- Visual Studio Live Share is now generally available. This provides real-time collaborative development, shared debugging, independent views and works across Visual Studio and Visual Studio Code (Windows, Mac and Linux).
- Azure Event Grid is getting new improvements like dead lettering (DLQ) and custom retry policies. Event Grid is also adding new event publishers for Azure Media Services and Azure Container Registry, and new event handlers for Storage Queue and Relay Hybrid Connections. Finally Azure Event Grid is providing an alternative form of endpoint validation. Event Grid provides reliable event delivery at massive scale (millions of events per second), and it eliminates long polling and hammer polling, and the associated costs of latency.
- Azure Cosmos DB had some interesting updates, like the new multi-master write support. It also provides API support for MongoDB, SQL, Table Storage, Gremlin (graph), Spark, and Cassandra.
- Azure Search now integrates Azure Cognitive Services to provide built-in enrichment of content using AI models, and it enables immersive search experiences over any data.
- The Fluent Design System, which Microsoft first debuted at Build 2017, is expanding beyond Universal Windows Platform (UWP) apps and will be available for Windows Forms, WPF and native Win32 applications.
- Windows Timeline is coming to iOS and Android.
- Azure Functions updates: Durable Functions reaches general availability, and Azure Functions now leverages the App Service Diagnostics.
- .NET Core 3.0 and .NET Framework 4.8 were announced, and .NET Core 3.0 is coming to desktop development (awesome!).
- Visual Studio 2017 version 15.7 and the next update version 15.8 preview 1 were released.
- Visual Studio App Center integration with GitHub.
- Visual Studio IntelliCode was announced, which brings the next generation of developer productivity through AI-assisted development.
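The Durable Functions GA announcement is the easiest of these to show in code. Here’s a minimal function-chaining sketch, assuming the Microsoft.Azure.WebJobs.Extensions.DurableTask package (the function names are illustrative, not from any official sample):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class HelloSequence
{
    // The orchestrator replays deterministically; each awaited activity
    // call is checkpointed by the Durable Task framework, so long-running
    // workflows survive process recycles.
    [FunctionName("HelloSequence")]
    public static async Task<List<string>> Run(
        [OrchestrationTrigger] DurableOrchestrationContext context)
    {
        var outputs = new List<string>
        {
            await context.CallActivityAsync<string>("SayHello", "Tokyo"),
            await context.CallActivityAsync<string>("SayHello", "Seattle")
        };
        return outputs;
    }

    // A plain activity function; this is where the real work would happen.
    [FunctionName("SayHello")]
    public static string SayHello([ActivityTrigger] string name) => $"Hello, {name}!";
}
```

The activities run in order, and the orchestrator’s state between them is managed for you, which is the feature that reached general availability.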
This already feels like a lot but really it’s just scratching the surface. I’m looking forward to what is announced today in the keynote followed by more technical workshops and sessions.
In part 1 of this series I showed you how to create a Face API subscription in Microsoft Cognitive Services and then use the Face API library to detect faces in an image. In this post we’ll expand on the previous one to include face verification. Let’s get started.
Picking up where we left off, we want to detect the most prominent face in an image, and then use that detected face to verify whether a face in a second image belongs to the same person.
1. I refactored the code in the BrowsePhoto method to return the selected image. This method is then used by both the identification and verification image flows.
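Since part 1 was a C#/UWP sample, the refactored BrowsePhoto helper might look something like the following sketch; the exact signature and picker settings are my assumptions, not the post’s original code:

```csharp
using System.Threading.Tasks;
using Windows.Storage;
using Windows.Storage.Pickers;

// Lives in the page's code-behind; both the identify and verify
// flows call this to let the user pick an image file.
private async Task<StorageFile> BrowsePhoto()
{
    var picker = new FileOpenPicker
    {
        ViewMode = PickerViewMode.Thumbnail,
        SuggestedStartLocation = PickerLocationId.PicturesLibrary
    };
    picker.FileTypeFilter.Add(".jpg");
    picker.FileTypeFilter.Add(".png");

    // Returns null if the user cancels the picker.
    return await picker.PickSingleFileAsync();
}
```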
2. I refactored the UI to show two different image files, which means there are now two click events: one to identify the person in the first image, and one to use that identification to verify it’s the same person when we load up another image. Both of these events can be seen here:
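Here is a hedged sketch of what those two handlers might look like, assuming the FaceServiceClient (faceClient) set up in part 1; the handler, control, and helper names are my assumptions:

```csharp
using System;
using System.IO;
using System.Linq;
using Windows.UI.Xaml;

// The most prominent face from the first image, kept for verification.
private Guid? identifiedFaceId;

private async void IdentifyPhotoButton_Click(object sender, RoutedEventArgs e)
{
    var file = await BrowsePhoto();
    if (file == null) return;

    using (var stream = await file.OpenStreamForReadAsync())
    {
        // Detect faces and remember the first (most prominent) one.
        var faces = await faceClient.DetectAsync(stream);
        identifiedFaceId = faces.FirstOrDefault()?.FaceId;
    }
}

private async void VerifyPhotoButton_Click(object sender, RoutedEventArgs e)
{
    if (identifiedFaceId == null) return;

    var file = await BrowsePhoto();
    if (file == null) return;

    using (var stream = await file.OpenStreamForReadAsync())
    {
        var faces = await faceClient.DetectAsync(stream);
        if (faces.Length > 0)
        {
            // VerifyFacesAsync is a hypothetical helper wrapping VerifyAsync.
            await VerifyFacesAsync(identifiedFaceId.Value, faces[0].FaceId);
        }
    }
}
```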
3. Finally, we use the Face API VerifyAsync method to check two faces and determine whether they belong to the same person.
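The verification step itself is a single call. A minimal sketch, assuming the Microsoft.ProjectOxford.Face client from part 1 (the helper name and the result display are my assumptions):

```csharp
using System;
using System.Threading.Tasks;

// VerifyAsync compares two previously detected face ids and returns
// whether they match plus a 0.0-1.0 confidence score.
private async Task VerifyFacesAsync(Guid firstFaceId, Guid secondFaceId)
{
    var result = await faceClient.VerifyAsync(firstFaceId, secondFaceId);

    ResultText.Text = result.IsIdentical
        ? $"Same person ({result.Confidence:P0} certain)"
        : $"Different people ({result.Confidence:P0} certain)";
}
```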
4. Now let’s run the application across a few images and see how well it performs with two images of me from different years. In the first result I used an image from 10+ years ago, and the Face API came back 66% certain it’s the same person.
How about something more recent? In this next test run the Face API is 75% certain it’s the same person.
As you can see, I’m able to use the Face API from Microsoft Cognitive Services to not only detect but also verify identity. The Face API provides other methods for grouping people together and training it to recognize specific people through its identification method. The Face API has also recently been updated to support large groups of people (1,000 in the free tier and over 1,000,000 in the paid tier).
Face API Documentation