Category: Development

AI, Azure, Development

Using the Face API from Microsoft Cognitive Services (part 2) – Face Verification

In part 1 of this series I showed you how to create a Face API subscription from Microsoft Cognitive Services and then use the Face API library to detect faces in an image. In this post we’ll expand on the previous post to include face verification. Let’s get started.

Picking up where we left off, we will detect the most prominent face in an image and then verify it against a face detected in a second image to see if the two belong to the same person.

1. I refactored the BrowsePhoto method to return the image that was selected. This method is now used by both the identification and verification image processes.

2. I refactored the UI to show two different image files, which means there are now two click events: one to identify the person in the first image, and another to verify it’s the same person when we load a second image. Both of these event handlers can be seen here:

image
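Since the code itself is only in the screenshot above (and in the sample on GitHub), here is a rough sketch of what the refactored BrowsePhoto helper and the two click handlers might look like. The member, control, and file names are my own, not necessarily those in the sample, and the client calls follow the FaceServiceClient-style library used in this series.

```csharp
// Sketch only: names are illustrative, not necessarily those in the sample code.
using System;
using System.IO;
using System.Linq;
using System.Windows;
using Microsoft.ProjectOxford.Face;          // assumption: namespace for the FaceServiceClient-era client
using Microsoft.ProjectOxford.Face.Contract;

public partial class MainWindow : Window
{
    // Created with your subscription key and endpoint, as shown in part 1.
    private readonly FaceServiceClient faceServiceClient = new FaceServiceClient(
        "YOUR-SUBSCRIPTION-KEY-GOES-HERE",
        "https://westus.api.cognitive.microsoft.com/face/v1.0"); // example endpoint

    private Guid? identificationFaceId;
    private Guid? verificationFaceId;

    // Refactored helper: shows a file picker and returns the selected image path (null if cancelled).
    private string BrowsePhoto()
    {
        var dialog = new Microsoft.Win32.OpenFileDialog { Filter = "Image files|*.jpg;*.jpeg;*.png" };
        return dialog.ShowDialog() == true ? dialog.FileName : null;
    }

    // Click event for the identification image.
    private async void IdentifyPhotoButton_Click(object sender, RoutedEventArgs e)
    {
        string imagePath = BrowsePhoto();
        if (imagePath == null) return;

        using (var imageStream = File.OpenRead(imagePath))
        {
            // The most prominent face is typically first in the returned array.
            Face[] faces = await faceServiceClient.DetectAsync(imageStream);
            identificationFaceId = faces.FirstOrDefault()?.FaceId;
        }
    }

    // Click event for the verification image.
    private async void VerifyPhotoButton_Click(object sender, RoutedEventArgs e)
    {
        string imagePath = BrowsePhoto();
        if (imagePath == null) return;

        using (var imageStream = File.OpenRead(imagePath))
        {
            Face[] faces = await faceServiceClient.DetectAsync(imageStream);
            verificationFaceId = faces.FirstOrDefault()?.FaceId;
        }
        // The VerifyAsync call in step 3 compares identificationFaceId and verificationFaceId.
    }
}
```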

3. Finally, we use the Face API VerifyAsync method to compare the two faces and determine whether they belong to the same person.

image
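The verification code itself is in the screenshot above; a minimal sketch of the VerifyAsync call, assuming the two face IDs captured in step 2 are available, could look like this:

```csharp
// Sketch only: assumes identificationFaceId and verificationFaceId were stored by the click events above.
if (identificationFaceId.HasValue && verificationFaceId.HasValue)
{
    VerifyResult result = await faceServiceClient.VerifyAsync(
        identificationFaceId.Value, verificationFaceId.Value);

    // IsIdentical is the API's yes/no answer; Confidence is a value between 0 and 1.
    MessageBox.Show(result.IsIdentical
        ? $"Same person ({result.Confidence:P0} confidence)"
        : $"Different people ({result.Confidence:P0} confidence)");
}
```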

4. Now let’s run the application across a few images and see how well it performs with two images of me from different years. In the first result I’m using an image from more than 10 years ago, and the Face API comes back 66% certain it’s the same person.

image

How about something more recent? In this next test run the Face API is 75% certain it’s the same person.

image

Wrap up

As you can see, I’m able to use the Face API from Microsoft Cognitive Services to not only detect faces but also verify identity. The Face API provides other methods that can be used for grouping people together and for training it to recognize specific people through its identification method. The Face API has also recently been updated to support large groups of people (1,000 in the free tier and over 1,000,000 in the paid tier).

Enjoy!

References

Sample Code

Face API Documentation

AI, Azure, Development

Using the Face API from Microsoft Cognitive Services (part 1) – Face Detection

Earlier this month I wrote about giving your applications a more human side with Microsoft Cognitive Services, which provides a number of APIs that you can start using immediately in your applications. Today I’ll dive into the vision APIs and show you how you can leverage the Face API to detect faces in your images.

What is the Face API?

The Face API provides facial recognition, emotion recognition, and face location within an image. There are five main areas for this API:

– Face detection
– Face verification
– Find similar faces
– Face grouping
– Face identification

Potential uses for this technology include facial login, photo tagging, and home monitoring. You can also use it for attribute detection to determine age, gender, whether the person has facial hair such as a beard, and whether they are wearing a hat or glasses. The API can also be used to determine whether two faces belong to the same person, identify previously tagged people, and find similar-looking faces in a collection.

So let’s get started with creating a Face API resource and then a small application to detect faces. In the next post I’ll extend this example to do face verification to determine if it’s the same person.

Step 1 – Requirements

To get started with Microsoft Cognitive Services and specifically the Face API you will need to have an Azure Subscription. If you don’t have one you can get a free trial subscription which includes $250 of credits to be used for any Azure services.

You will also need to have Visual Studio 2017 installed, which you can download for free.

Step 2 – Subscribe to the Face API

1. Log in to the Azure portal and click on the Create a resource link in the top left corner. From here select AI + Cognitive Services and then select Face API as shown here:

image

2. Give your Face API a name, then select your subscription, location, and resource group, and select the F0 Free tier for pricing:

image

3. After a few seconds your Face API subscription will be created and ready for you to start using. At this point you will need to get two items, your subscription key and your endpoint location.

The endpoint URL is shown on the Overview section and your subscription keys are located under Keys in the Resource Management section as shown here:

image

Now that we have the subscription key and endpoint let’s create our application.

Step 3 – Create new Application and reference the Face API

1. Open Visual Studio and from the File menu, click New and then Project. From here you can select any type of application, but I’m going to create a new WPF application in C#. This code will also work in a Xamarin.Forms project if you want to try this out for mobile.

image

2. Go to the Solution Explorer pane in Visual Studio, right click your project and then click Manage NuGet Packages.

3. Click on the Include prerelease checkbox and then search for Microsoft.Azure.CognitiveServices.Vision.Face. You might be wondering why these APIs are still in preview. The Cognitive Services packages were previously named Microsoft.ProjectOxford.* and are being moved over to Microsoft.Azure.CognitiveServices.*. Once that migration is complete they should come out of prerelease, and the new packages are what you should use from then on.

image

4. Now let’s go to the code and configure the Face API client library.

Step 4 – Configure the Face API Client Library

1. Open up your MainWindow.xaml.cs file and declare a new FaceServiceClient instance as shown here:

image

2. Insert your Face API subscription key and endpoint. Replace “YOUR-SUBSCRIPTION-KEY-GOES-HERE” with your subscription key from step 2. Do the same for the second parameter which is your endpoint URL.
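The declaration from the screenshot boils down to something like the following sketch. The endpoint shown is only an example (use the URL from your own resource’s Overview blade), and the exact namespace depends on which package version you installed:

```csharp
using System.Windows;
using Microsoft.ProjectOxford.Face; // assumption: namespace for the FaceServiceClient type used in this post

public partial class MainWindow : Window
{
    // Replace the key with your subscription key and the URL with your endpoint from step 2.
    private readonly FaceServiceClient faceServiceClient = new FaceServiceClient(
        "YOUR-SUBSCRIPTION-KEY-GOES-HERE",
        "https://westus.api.cognitive.microsoft.com/face/v1.0");

    public MainWindow()
    {
        InitializeComponent();
    }
}
```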

Step 5 – Upload images, detect faces, and show facial attributes

I won’t walk through the entire code, as you can review it in my GitHub repository. Instead, in this step I’ll show you how I used the Face API to detect faces, draw a square around each detected face, and finally show the facial attributes when the mouse hovers over a detected face.

It’s worth mentioning that the maximum size of the image to upload is 4 MB.

image

As highlighted above, you take a photo and upload it to the Face API, which returns an array of detected faces. The largest face in the image is usually returned first in the array. With the DetectAsync method you have the option to pass in an IEnumerable of FaceAttributeType values; just declare a list of the attributes you want back in the results, like so:

image
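The call in the screenshot looks roughly like the sketch below; the attribute list and the selectedImagePath variable are illustrative rather than copied from the sample:

```csharp
// Request only the attributes you care about; these values are illustrative.
var requiredFaceAttributes = new List<FaceAttributeType>
{
    FaceAttributeType.Age,
    FaceAttributeType.Gender,
    FaceAttributeType.Smile,
    FaceAttributeType.FacialHair,
    FaceAttributeType.Glasses
};

using (var imageStream = File.OpenRead(selectedImagePath))
{
    // Remember the 4 MB upload limit mentioned above.
    Face[] faces = await faceServiceClient.DetectAsync(
        imageStream,
        returnFaceId: true,
        returnFaceLandmarks: false,
        returnFaceAttributes: requiredFaceAttributes);
}
```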

The second highlighted block shows where we store the facial attributes returned for each face. The GetFaceDescription method is used when you mouse over a detected face and want to show the attributes that were returned from the Face API:

image
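A simplified version of what a GetFaceDescription helper could look like is shown here; the exact attributes and formatting in the sample may differ:

```csharp
// Sketch only: builds a readable description from the attributes requested in DetectAsync.
// Face comes from the Face API contract types; StringBuilder is System.Text.
private static string GetFaceDescription(Face face)
{
    var sb = new StringBuilder();
    sb.AppendLine($"Age: {face.FaceAttributes.Age}");
    sb.AppendLine($"Gender: {face.FaceAttributes.Gender}");
    sb.AppendLine($"Smile: {face.FaceAttributes.Smile:P0}");
    sb.AppendLine($"Glasses: {face.FaceAttributes.Glasses}");
    sb.AppendLine($"Beard: {face.FaceAttributes.FacialHair.Beard:P0}");
    sb.AppendLine($"Moustache: {face.FaceAttributes.FacialHair.Moustache:P0}");
    return sb.ToString();
}
```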

Now let’s run our application and try detecting faces in an image containing one or more faces. After a few seconds the API returns the results. As you can see, we’re drawing blue squares around the males and pink around the females, and when you hover your mouse over one of the faces the description of all the facial attributes returned by the API is displayed.

image

Wrap up

As you can see, it’s very easy to add AI to your application with Microsoft Cognitive Services. Today I showed you how you can leverage the Face API for face detection.

Enjoy!

Resources

Sample Code

Face API Documentation

Development

Updates to the New Project Dialog in Visual Studio 2017

With the release of Visual Studio 2017 Update 6 (version 15.6.x), you might have noticed an update to the New Project dialog that moves the .NET Framework version selector down to the bottom, below the solution name. This makes the New Project dialog cleaner and shows you which version of the .NET Framework will be used when creating your project.

image

The framework selector disappears when you select non-.NET Framework project types such as ASP.NET Core, UWP, etc.

Enjoy!

Development

Getting Started with Application Insights for ASP.NET Core

In my previous posts I gave a quick Introduction to Application Insights and then I showed you how to Disable Application Insights from your app. In this post I’ll walk you through creating an ASP.NET Core application and then configuring it with Application Insights. Let’s get started.

Configuring your app for Application Insights

Start by creating a new ASP.NET Core application (this also applies to non-Core ASP.NET applications). Once the application is created, right-click on the project, look for Configure Application Insights… in the context menu, and then click on it.

image

You will see that the SDK has already been added to your application. Next click on the Start Free button to start using Application Insights.

image

You will need an existing Azure subscription. If you don’t already have one you can create one for free and start with a $250 credit for 30 days, plus access to popular services for 12 months and over 25 services that are always free. Once you have your Azure subscription, log in with your Microsoft account, then select your subscription and a resource. These can easily be changed later if need be.

You will now have access to the free plan, which comes with 1 GB/month of data included and 90 days of data retention. Click on the Register button to finish the configuration:

image

Now that Application Insights is configured for your application you have access to a wealth of information with the click of a button.

sshot-372
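Everything above is done for you by the Visual Studio wizard, so there is nothing you have to write by hand. For reference, a configured ASP.NET Core 2.0 project of that era typically ends up with the Microsoft.ApplicationInsights.AspNetCore package installed, the instrumentation key stored in appsettings.json, and a call similar to the following in Program.cs (a hedged sketch, not an exact copy of what the wizard generates):

```csharp
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            // Enables Application Insights; the instrumentation key is read
            // from the ApplicationInsights section of appsettings.json.
            .UseApplicationInsights()
            .UseStartup<Startup>()
            .Build();
}
```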

Accessing the Application Insights Telemetry from Visual Studio

You can search your Application Insights results from either the Azure portal or from within Visual Studio. To use Visual Studio, go to the View menu, select Other Windows, and then Application Insights Search. You will then get a view of the telemetry for the last 24 hours, as shown below from a sample API I have. From here you can filter the telemetry and drill down into specific events.

image

Another nice feature is that Application Insights telemetry data including any exceptions that have been captured will show up in the CodeLens information as shown here:

image

There is a lot of value in using Application Insights in any of your applications. I hope you take a look and try it out for yourself.

Enjoy!

References

https://azure.microsoft.com/en-us/services/application-insights/

https://docs.microsoft.com/en-us/azure/application-insights/app-insights-asp-net-core

Development

How to Disable Azure Application Insights in ASP.NET Core

In my previous post I showed you how easy it is to get started with an Introduction to Application Insights for your ASP.NET Core application. However, what if you don’t want Application Insights? You might notice in your Output pane when running your app that it’s still partially enabled for you out of the box. I’ll walk you through what I mean by it being partially enabled and then how you can go about hiding it until such time as you decide to fully turn it on. Let’s get started.

Start off by creating a new ASP.NET Core application (see below) and then immediately run it.

image

You will then notice that you will see the following statements in your Output pane:

Application Insights Telemetry (unconfigured): {"name":"Microsoft.ApplicationInsights.Dev.Message","time":"2018-03-24T03:39:26.5327026Z","tags":{"ai.application.ver":"1.0.0.0","ai.operation.parentId":"|80d77757-4707b4b80d71a9b3.","ai.internal.sdkVersion":"aspnet5c:2.1.1","ai.operation.id":"80d77757-4707b4b80d71a9b3","ai.internal.nodeName":"LT2206","ai.location.ip":"127.0.0.1","ai.cloud.roleInstance":"LT2206","ai.operation.name":"GET Values/Get","ai.user.id":"6RWa2"},"data":{"baseType":"MessageData","baseData":{"ver":2,"message":"Executed action WebApplication5.Controllers.ValuesController.Get (WebApplication5) in 205.1085ms","severityLevel":"Information","properties":{"DeveloperMode":"true","{OriginalFormat}":"Executed action {ActionName} in {ElapsedMilliseconds}ms","ActionName":"WebApplication5.Controllers.ValuesController.Get (WebApplication5)","AspNetCoreEnvironment":"Development","ElapsedMilliseconds":"205.1085","CategoryName":"Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker"}}}}

image

You might be wondering why it is doing this and how you can disable it.

The easiest way to disable Application Insights, without going through the process of ripping it out, is to set TelemetryConfiguration.Active.DisableTelemetry to true. What I recommend is adding a static method to your Startup.cs file and calling it from your Configure method, like so:

image
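The code in the screenshot isn’t reproduced here; a minimal sketch of the approach described above (the method name is my own) would be:

```csharp
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;

public class Startup
{
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        DisableApplicationInsights();

        // ... the rest of your pipeline configuration ...
        app.UseMvc();
    }

    // Stops all Application Insights telemetry from being collected and sent.
    private static void DisableApplicationInsights()
    {
        TelemetryConfiguration.Active.DisableTelemetry = true;
    }
}
```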

Now when you run your application and look in the Output pane you will no longer see any statement pertaining to Application Insights.

image

I see a great deal of value in keeping Application Insights and using it in all your applications, so if you need to disable it, consider doing so only when running in debug mode by putting a conditional attribute on the method, as shown below.
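For example, the Conditional attribute means the call to the method is only compiled into Debug builds, so telemetry flows normally in Release builds (a sketch building on the method above):

```csharp
using System.Diagnostics;
using Microsoft.ApplicationInsights.Extensibility;

// Calls to this method are only compiled when the DEBUG symbol is defined,
// so Application Insights stays fully enabled in Release builds.
[Conditional("DEBUG")]
private static void DisableApplicationInsights()
{
    TelemetryConfiguration.Active.DisableTelemetry = true;
}
```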

Enjoy!

References

https://github.com/aspnet/Home/issues/2051

Development

Microsoft Adaptive Cards 1.0 is now Available

image

During the Windows Developer Day 2018 keynote, Kevin Gallo talked about Adaptive Cards and how they can be used to provide a flexible way to present your content and your data.

Adaptive Cards gives you the tools to create cards that scale across any engagement surface.

image

Adaptive Cards was first announced at the Microsoft Build 2017 conference, and has now come out of preview and is generally available as a 1.0 release. When the Windows 10 Spring Creators Update is released, you will be able to use Adaptive Cards in the Windows Timeline, along with other experiences like bots, Skype, notifications, Teams, and more.

Enjoy!

References

Windows Developer Day 2018 Keynote

http://adaptivecards.io/

https://github.com/Microsoft/AdaptiveCards/releases/tag/v1.0

Development

Visual Studio 2017 (15.6) has new Update Experience

After updating to Visual Studio 2017 (15.6) earlier today, I noticed a minor update (15.6.1) is out, and with it a new update experience, as shown here. The updated dialog shows the current version, the update version, and a link to the release notes:

image

image

This update (15.6.1) only takes a couple minutes to apply.

Enjoy!