Adventures in PowerApps and Power Automate - Part 3


In part one of my Adventures in PowerApps and Power Automate blog I showcased the sign-up PowerApp.
In Part 2: Adventures in PowerApps and Power Automate I showed you the Flow, or Power Automate as it is now called, that saves the records into Azure Blob storage.
Now in part three I am ready to unveil the second PowerApp that will be showcased at Priority Bicycles during the National Retail Federation conference. This second app has a lot more going on. So in this blog I will just focus on the facial recognition portion, then in a future blog we will look at the other features.
This app will be prominently displayed on the new Surface Hub 2S when you arrive at Priority Bicycles. While running a PowerApp on a Surface Hub may not be the most practical application, it sure is cool. If you have not seen the new Surface Hub 2S, you should check it out.
The initial page of this app is much simpler by design. It is similar to the sign-up app in that it contains the same camera and image controls. However, the camera and image controls are much larger because I don't need to take up real estate for the contact information. (After all, I theoretically already know who you are, because you signed up in the sign-up app at the booth or on the bus.) [Also, it does not do leg or feet recognition - HA!]
The first page of the app is quite simple to look at. You click a Get Started button to take the picture, then click the Initiate Facial Recognition button to kick off the whole thing. The camera and image controls, again, are the same size and overlap each other. You can learn more about these controls in the first part of this blog series.
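For reference, taking the picture can be as simple as one line in the Get Started button's OnSelect. This is only a sketch: I am assuming the camera control is named Camera1, and that ImagePreview (the value passed into the flow in the function further down) is a variable holding the captured photo.
Set(ImagePreview, Camera1.Photo)
The image control's Image property can then point at ImagePreview so the person sees the photo they just took.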
Where the magic really happens is in the OnSelect of the Initiate Facial Recognition button. I wish I could make the button play audio in a Star Trek/sci-fi voice, because that would make this even more awesome.
Here is a rundown, in plain words, of what the function does:
  1. Set a variable called gloResult to whatever the flow sends back. This first line was a little tricky: we are passing the image that was taken in the app into the flow, along with a string called "PowerAppImage".
  2. Next, we set another variable called gloNextScreen. If the facial recognition is successful, we want the AI Success Screen; otherwise, we want the AI Failure Screen.
  3. Next, we set another variable called gloResultContact. Here we search for the contact record matching the full name that the AI facial recognition passed back. We use this variable to welcome the person by name on the AI Success Screen.
  4. Next, we set yet another variable, gloResultCategory, for the contact's bike category. We use this variable to indicate on the AI Success Screen that the person is interested in a particular category of bikes, and also to set the default filter on the bike exploration screen.
  5. The next variable we set is the full name, called gloResultContactName. This is the variable we actually display on the success screen.
  6. The final step of the function is to navigate to the appropriate screen. We do this by calling the Navigate() function and passing in the variable we set in step 2.
Here is the actual function in the button's OnSelect property:
Set(gloResult, 'NRF-Sign-In-AI-Facial-Recognition'.Run("PowerAppImage", ImagePreview));
Set(
    gloNextScreen,
    If(
        gloResult.ismatch,
        AISuccessScreen,
        AIFailureScreen
    )
);
Set(
    gloResultContact,
    First(Search(Contacts, gloResult.name, "fullname"))
);
Set(
    gloResultCategory,
    gloResultContact.'Bike category preference'
);
Set(
    gloResultContactName,
    gloResultContact.'Full Name'
);
Navigate(
    gloNextScreen,
    ScreenTransition.Fade
);
The AI Success Screen is also very simple. It simply displays the person's name and the category of bikes they selected in the first app. The Next button navigates to a new screen where they can start to explore the bikes. I will cover that screen, and the amazing IoT magic that is happening there, in my next blog.
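To give a concrete picture, the welcome message on the AI Success Screen can be a single label whose Text property stitches the two variables together. The exact wording here is illustrative, not the actual text from the app:
"Welcome, " & gloResultContactName & "! We see you are interested in our " & gloResultCategory & " bikes."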
The failure screen is also very simple. It could use a bit more design work to create an experience that recaptures the person's data when we cannot find them, but for simplicity's sake we just say, sorry, we could not find you. Keep in mind that facial recognition is not perfect. Although we have done some testing with different lighting, faces, beards, glasses, hats, and so on, it is by no means foolproof. One thing to note is that the images you capture for facial recognition really need to be a close-up of just the face. They cannot be full body shots or contain multiple people. We also found that the Surface Hub 2S camera requires you to get awkwardly close to capture an image good enough for the API to recognize you, so we are actually casting from a Surface Pro tablet to the Hub and capturing the image with the tablet.
In the next part of this blog series, we will explore the Power Automate flow that is behind the facial recognition.
