PowerApp Portal: Change your Profile Photo using The Portal WebAPI

Disclaimer: The Web API is still in preview, don’t use in production just yet! More info here.

The new Portal Web API opens so many doors to customize the portal beyond the usual entity forms, web forms, entity lists, etc.

You remember that profile photo container on the portal's profile pages? Almost every client I worked with thought it was a working feature, but it is not. We ended up hiding this container using CSS to remove that confusion.

With the introduction of the Web API, we can now make it work in a supported way.

On the contact entity, there is a field called entityimage; this field is a byte array that holds the picture of the contact. We have two cases to handle: on page load, we need to read the image from the logged-in contact record, and we need an upload button that allows portal users to change their photos. We will end up with something like this:

Before starting, we need to set up the portal for the Web API and expose the contact entity properly.

  1. Make sure you have entity permissions set up that allow reading and updating the contact entity. This should be the default for the logged-in user, since they can already modify their profile data.
  2. To enable the contact entity for the Web API, you need to create the following site settings:
    • Site setting name: webapi/contact/enabled, value: true
    • Site setting name: webapi/contact/fields, value: * (I put * to expose all fields, but if you only want to access the entityimage field, just list the comma-separated logical names of the fields you want exposed instead). If you have a hard time finding field names, use the Power Portal Web API Helper plugin in XrmToolBox to enable/disable fields and entities for the portal.
  3. Make sure that your portal is 9.2.6.41 or later.

There is a piece of code that you need for all of your Web API calls. Normally, I prefer to keep it in a web template or in a web file that gets loaded at the website level. For our scenario, I will include it as part of the solution, so don't worry about copying it now; it will be part of the final code. Basically, this piece of code takes your API request, adds an authentication header, and sends an AJAX request to Dataverse. Nothing fancy.

(function (webapi, $) {
    function safeAjax(ajaxOptions) {
        var deferredAjax = $.Deferred();

        shell.getTokenDeferred().done(function (token) {
            // add the anti-forgery token header for AJAX
            if (!ajaxOptions.headers) {
                $.extend(ajaxOptions, {
                    headers: {
                        "__RequestVerificationToken": token
                    }
                });
            } else {
                ajaxOptions.headers["__RequestVerificationToken"] = token;
            }
            $.ajax(ajaxOptions)
                .done(function (data, textStatus, jqXHR) {
                    validateLoginSession(data, textStatus, jqXHR, deferredAjax.resolve);
                }).fail(deferredAjax.reject); // AJAX
        }).fail(function () {
            deferredAjax.rejectWith(this, arguments); // on token failure pass the token AJAX and args
        });

        return deferredAjax.promise();
    }
    webapi.safeAjax = safeAjax;
})(window.webapi = window.webapi || {}, jQuery);

Moving on. As we said before, we need to tackle the page load case and the upload button case.

For the page load, we need to query the entityimage field using FetchXML. The following script retrieves the field and assigns its value to the entityImageBytes liquid variable.

 
 <!--Get the user profile photo (entityimage)-->
 {% fetchxml contactImageQuery %}
     <fetch version="1.0" output-format="xml-platform" mapping="logical" distinct="false">
        <entity name="contact" >
            <attribute name="entityimage" />
            <attribute name="contactid" />
            <filter type="and">
            <condition attribute="contactid" operator="eq" uitype="contact" value="{{ user.id }}" />
            </filter>
        </entity>
    </fetch>
{% endfetchxml %}

{% assign entityImageBytes = contactImageQuery.results.entities[0].entityimage %}

Next, we need to convert these bytes to a format that the HTML img tag understands; in this case, we need a base64 string. The two JavaScript functions below do exactly that. Thanks to Franco Musso's blog, where I borrowed part of the format-conversion logic, which saved me some googling time 🙂

The loadProfileImage function finds the image tag that resides on the profile page. I had to do some extra styling on the div around the image, but that part is up to you. Then, the byte array we got through FetchXML is joined into a long comma-separated string that gets fed to the bytesStringToBase64 function, which encodes that byte sequence into base64. Once we have the base64 string, we assign it to the src attribute of the img tag. At that point, the profile picture should be populated if an image exists.

function loadProfileImage() {
    // Select the profile photo and its container, style them as needed
    var profileImgWell = $(".well").css("padding", "1px");
    var profileImg = $(".well a img")[0];
    $(profileImg).css("width", "100%");
    $(profileImg).css("height", "100%");

    // get the bytes returned by the FetchXML query and join them into a single string
    var bytesData = "{{ entityImageBytes | join: ',' }}";

    // convert the byte string to base64 and assign it to the src attribute of the profile image
    var base64String = bytesData ? bytesStringToBase64(bytesData) : null;
    if (base64String) {
        profileImg.src = 'data:image/jpeg;base64,' + base64String;
    }
}

function bytesStringToBase64(bytesString) {
    if (!bytesString) return null;
    var uarr = new Uint8Array(bytesString.split(',').map(function (x) {
        return parseInt(x, 10);
    }));
    return btoa(String.fromCharCode.apply(null, uarr));
}
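As a quick sanity check of the conversion logic, here is a standalone sketch you can run outside the portal (the two-byte input value is made up for illustration; btoa is a global in browsers and modern Node):

```javascript
// Standalone copy of the conversion used above: a comma-separated
// byte string -> Uint8Array -> binary string -> base64.
function bytesStringToBase64(bytesString) {
    if (!bytesString) return null;
    var uarr = new Uint8Array(bytesString.split(',').map(function (x) {
        return parseInt(x, 10);
    }));
    return btoa(String.fromCharCode.apply(null, uarr));
}

// "72,105" is the byte sequence for the ASCII text "Hi"
console.log(bytesStringToBase64("72,105")); // "SGk="
```

The real input is of course the much longer byte list that the liquid join filter renders into the page.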

Next, we need to let the logged-in user upload a photo. To do that, we will add some HTML for the upload button and the file selector. On the file input, you will notice that we call a function (convertImageFileToBase64String) that converts the file into a base64 string. Why? Because when we update the image in Dataverse, the JSON payload of the UPDATE request needs the image in base64 format.

<div style="display:inline-block;">
    <label>Update your profile photo</label>
    <input type="file" name="file" id="file" onchange="convertImageFileToBase64String(this)"/>
    <div id="uploadPhotoDiv" style="visibility:hidden">
        <button type="button" onclick="onUploadClicked()" id="uploadPhoto">Upload</button>
    </div>
</div>

The convertImageFileToBase64String function reads the file and converts it to a base64 string using the FileReader.readAsDataURL method. The result is stored in the image_file_base64 variable; we need it later when we issue the UPDATE request. You can be more professional than me and avoid the global variable by making the function return a promise instead of this sketchy approach 🙂

var image_file_base64;
function convertImageFileToBase64String(element) {
    var file = element.files[0];
    var reader = new FileReader();
    reader.onloadend = function () {
        image_file_base64 = reader.result;
        image_file_base64 = image_file_base64.substring(image_file_base64.indexOf(',') + 1);
        // reveal the upload button now that a file is selected
        $("#uploadPhotoDiv").css("visibility", "visible");
    }
    reader.readAsDataURL(file);
}

We only have one thing left: the upload button. This button should take the base64 image we just converted and send it to Dataverse using an UPDATE request. The image_file_base64 variable is already in the format we want; the only thing left is to call the AJAX wrapper mentioned in the beginning and provide it with the AJAX request options shown below. Notice that on success, I optimistically update the image without refreshing the page.

function onUploadClicked() {
    $("#uploadPhoto").text("Uploading...");
    webapi.safeAjax({
        type: "PATCH",
        url: "/_api/contacts({{user.id}})",
        contentType: "application/json",
        data: JSON.stringify({
            "entityimage": image_file_base64
        }),
        success: function (res) {
            $("img").first().attr('src', 'data:image/png;base64,' + image_file_base64);
            $("#uploadPhotoDiv").css("visibility", "hidden");
            $("#uploadPhoto").text("Upload");
        }
    });
}

Now on to the full code and how to use it in your portal in two steps.

 
 <!--Get the user profile photo (entityimage)-->
 {% fetchxml contactImageQuery %}
     <fetch version="1.0" output-format="xml-platform" mapping="logical" distinct="false">
        <entity name="contact" >
            <attribute name="entityimage" />
            <attribute name="contactid" />
            <filter type="and">
            <condition attribute="contactid" operator="eq" uitype="contact" value="{{ user.id }}" />
            </filter>
        </entity>
    </fetch>
{% endfetchxml %}

{% assign entityImageBytes = contactImageQuery.results.entities[0].entityimage %}
<div style="display:inline-block;">
    <label>Update your profile photo</label>
    <input type="file" name="file" id="file" onchange="convertImageFileToBase64String(this)"/>
    <div id="uploadPhotoDiv" style="visibility:hidden">
        <button type="button" onclick="onUploadClicked()" id="uploadPhoto">Upload</button>
    </div>
</div>
<script>
$(document).ready(function() {
    // Load the profile photo from dataverse on document ready
    loadProfileImage();
});

// ajax wrapper provided by Microsoft.
// ajax wrapper provided by Microsoft.
(function (webapi, $) {
    function safeAjax(ajaxOptions) {
        var deferredAjax = $.Deferred();

        shell.getTokenDeferred().done(function (token) {
            // add the anti-forgery token header for AJAX
            if (!ajaxOptions.headers) {
                $.extend(ajaxOptions, {
                    headers: {
                        "__RequestVerificationToken": token
                    }
                });
            } else {
                ajaxOptions.headers["__RequestVerificationToken"] = token;
            }
            $.ajax(ajaxOptions)
                .done(function (data, textStatus, jqXHR) {
                    validateLoginSession(data, textStatus, jqXHR, deferredAjax.resolve);
                }).fail(deferredAjax.reject); // AJAX
        }).fail(function () {
            deferredAjax.rejectWith(this, arguments); // on token failure pass the token AJAX and args
        });

        return deferredAjax.promise();
    }
    webapi.safeAjax = safeAjax;
})(window.webapi = window.webapi || {}, jQuery);

function onUploadClicked() {
    $("#uploadPhoto").text("Uploading...");
    webapi.safeAjax({
        type: "PATCH",
        url: "/_api/contacts({{user.id}})",
        contentType: "application/json",
        data: JSON.stringify({
            "entityimage": image_file_base64
        }),
        success: function (res) {
            $("img").first().attr('src', 'data:image/png;base64,' + image_file_base64);
            $("#uploadPhotoDiv").css("visibility", "hidden");
            $("#uploadPhoto").text("Upload");
        }
    });
}
function loadProfileImage() {
    // Select the profile photo and its container, style them as needed
    var profileImgWell = $(".well").css("padding", "1px");
    var profileImg = $(".well a img")[0];
    $(profileImg).css("width", "100%");
    $(profileImg).css("height", "100%");

    // get the bytes returned by the FetchXML query and join them into a single string
    var bytesData = "{{ entityImageBytes | join: ',' }}";

    // convert the byte string to base64 and assign it to the src attribute of the profile image
    var base64String = bytesData ? bytesStringToBase64(bytesData) : null;
    if (base64String) {
        profileImg.src = 'data:image/jpeg;base64,' + base64String;
    }
}

function bytesStringToBase64(bytesString) {
    if (!bytesString) return null;
    var uarr = new Uint8Array(bytesString.split(',').map(function (x) {
        return parseInt(x, 10);
    }));
    return btoa(String.fromCharCode.apply(null, uarr));
}


var image_file_base64;

function convertImageFileToBase64String(element) {

    var file = element.files[0];
    var reader = new FileReader();
    reader.onloadend = function() {
        image_file_base64 = reader.result;
        image_file_base64 = image_file_base64.substring(image_file_base64.indexOf(',') + 1);
        $("#uploadPhotoDiv").css("visibility","visible");
    }
    reader.readAsDataURL(file);
}

</script>

Let's assume you pasted the whole code into a web template called "Contact Photo Uploader". Open your profile page or a child page on the portal as an administrator and click the little edit icon on the page.

Click on the HTML source button in the dialog that appears

and paste this line: {% include "Contact Photo Uploader" %} and save the HTML snippet.

That's it! This will include the whole web template we just built on the page, and the result should be something like this (without my photo, of course 🙂). Note that the upload button will only appear once you select a file.

Syncing Contact Attributes to B2C Users for PowerApp Portals with Cloud Flows

Note: The original article goes through the process from a Logic Apps perspective, but in my case I needed to do it in a Flow. Thanks to the original author.

If you have ever used Power Apps Portals with the most common authentication mechanism (Azure AD B2C), then you may have come across this problem: portal users frequently change their email address or contact information (or, less frequently, their names) from within the portal, or backend users change these attributes directly in the model-driven app.

The problem is that the Azure AD B2C user directory doesn't respect those changes, meaning they don't flow back from Dataverse into B2C. The common solution has been custom code: we write a plugin that calls the Graph API to update the Azure AD B2C directory. I will follow a similar approach but use a Cloud Flow instead of a plugin. This makes future changes easier, and we can add some cool things like retry logic if the request fails, email notifications to the administrator, etc.

Assumptions:

  1. I assume that you have a portal ready and using Azure AD B2C as its authentication mechanism. If you don’t, check out this very new feature on how to set that up using a wizard.
  2. I also assume that you have administrative privileges to add and configure an application registration in the B2C tenant.

The Solution:

We will build a daemon flow that runs on update of the contact entity (a portal user is a contact). This Flow will execute an HTTP request against the Graph API to update the corresponding user record in B2C with the new values from Dataverse. Before building the flow, we need a way for it to authenticate; the best way to do that is to create an app registration that takes care of authentication and use it in our flow.

Part 1: Creating the App Registration

Since we don't have a handy connector for the Graph API that does what we want, we need to issue HTTP requests to the API ourselves. To issue these requests, we need to authenticate with the Graph API. Instead of hardcoding a username and password inside the flow, we will use the application registration approach.

Navigate to portal.azure.com and open the B2C directory that you use to authenticate the portal (it is important that you are in the B2C directory and not your default one). Make note of the B2C directory name, as we will need it later. Search for the Azure AD B2C service and open it.

  1. Under App Registrations, click New registration.

2. Give your app a name and make sure to select the last account-type option, since we are already using user flows to authenticate portal users, who will mostly be from outside our organization. Click Register when done.

3. After creating the Application, take note of the Client ID and Tenant ID that appear in the application registration overview page, we will need them later.

4. We need to create a secret for this app, as the flow needs it to authorize its HTTP requests. Click Certificates & secrets in the left pane and, under Client secrets, click New client secret. Give the secret a name and an expiry date and click Add. When the secret is added, you have one chance to copy its value; it will be hidden once you navigate away from this page. So, by now, we have the B2C directory name, Client ID, Tenant ID, and secret value saved on the side.

5. This app is almost ready; we just need to broaden its permissions so it can manage the aspects of the B2C tenant needed to solve our problem. Click API permissions in the left pane of the previous image and then click Add a permission.

From the dialog on the right, choose Application permissions, because our Flow will be a background process, and add the following permissions (you can search by permission name):

User.ManageIdentities.All, User.ReadWrite.All, Directory.Read.All

When you have added these permissions, make sure to click "Grant admin consent for YOUR-TENANT-NAME". Now we are all set; let's move to the Flow side.

Part 2: Creating the Flow

The flow is really simple: use the current environment trigger, configure it to run on update, and scope it to the organization. The variables will hold the values we collected previously; you should have everything handy by now except for the Auth Audience URI, which should be https://graph.microsoft.com. Since we only want this flow to work for contacts who are actually portal users, I check the User Name field on the contact entity. In Dataverse, the contact record has a field called User Name which holds a GUID representing the contact's Azure AD B2C ID.

Now, for the HTTP action, make sure the method is PATCH, since we will only be updating pieces of the user record. For the URI, use https://graph.microsoft.com/v1.0/users/{Contact Record User Name}.

For the headers, we need the Authorization header and the Content-Type header. The Authorization value is derived from the secret we created in the previous steps. The Content-Type is hard-coded as application/json.

We are not done yet with authentication, if you click on advanced settings at the bottom, you need to configure the authentication as shown below using the same variables we collected before.

Now the last part: the body of the PATCH request. In my case, I'm interested in Email (mail), First Name (givenName), and Last Name (surname), because those are the only fields I mapped when I configured B2C authentication for my portal. This means that if a contact changes their name or email, the corresponding B2C user will get those changes within a few seconds as well. For more info on the schema of the JSON body, and if you are interested in updating other fields, refer to this documentation.
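For reference, the shape of the request body I use looks roughly like this (a sketch; the curly-brace placeholders stand for dynamic content pulled from the Dataverse trigger, and the property names are the Graph API's user properties):

```json
{
  "givenName": "{First Name from the contact record}",
  "surname": "{Last Name from the contact record}",
  "mail": "{Email from the contact record}"
}
```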

That's it! We are now good to go. Go update a portal user's email, first name, or last name in the portal profile page or directly in the model-driven app. In a few moments, the changes should reflect in the B2C directory. Here is a very short demo of how this works:

Power Portal Web API Helper – An XrmToolBox Plugin

The Web API for portals is a new feature that finally lets you issue Web API calls from your portal to CDS. This post doesn't go into the details of the feature; everything you want to know can be found here. The summary is that when you want to enable an entity for the Web API on the portal, you need to configure a few site settings and some entity permissions, and then call the Web API using some JavaScript code. This plugin is super simple: it just helps you enable/disable an entity and any of its attributes for the portal Web API feature. In addition, it provides simple JavaScript snippets that you can use as a starting point in your project.

How to use this plugin?

  1. Install the plugin from the Tool Library in XrmToolBox.
  2. Open the plugin and connect to your organization. The plugin should look like this

3. Click the Load Entities button in the command bar, and the system and custom entities will be listed for you. You can search for your entity name in the filter box. Notice that a website selector appears in the command bar; you need to select the target website if you have more than one in your target organization. Enabling an entity on Website A will only create the proper site settings for Website A; if you change the website, you need to enable the entity again, as new site settings need to be created.

Note: Microsoft Docs clearly says that this feature should be used with data entities such as contact, account, case, and custom entities. It shouldn't be used on configuration entities such as adx_website or other system entities that store configuration data. Unfortunately, to my knowledge there is no fully deterministic way to filter out every entity that shouldn't be used with the Web API, so if you know of one, please tell me so I can fix the filtering logic. The current logic is to exclude the list provided by Microsoft, which consists mostly of adx_ entities. I also made some judgment calls on other entities that store configuration rather than business data.

4. Once you select an entity, you can either enable or disable it and you also can select the fields you want to expose. When you are done, just click Save changes from the command bar and all the site settings will be created/updated for you.

5. If you are not very involved in JavaScript, and to help you get started quickly, clicking on Generate Snippets in the command bar will provide you with Create/Update/Delete snippets in addition to the wrapper ajax function that you use to issue the web api calls.

6. As an additional small feature, the JSON object in the Create and Update snippets will contain the selected attributes with default values. If you don't see some of these attributes in your output JSON string, it is most likely because the attribute is not available for Create or Update; the plugin checks for those conditions. For example, when creating an account record you can provide the accountid in the Create call, but not in the Update call, as accountid is not updatable.

That's it: a simple plugin that will save you some time when looking up attribute logical names and when you want to enable entities for the portal Web API feature.

Project Link on GitHub: https://github.com/ZaarourOmar/PowerPortalWebAPIHelper/issues

Azure API Management and Dynamics 365 Web API

When you have SaaS systems and custom systems all over the place in your organization, there is a need for unification. The more systems you accumulate over time, the less standardization you have among them. Other factors include different technologies and different architectural styles. If those systems expose any kind of API that is needed internally (by your developers) or externally (by your customers), then giving those APIs a consistent look and feel, along with a set of unified policies, becomes important.

Azure API Management (APIM) is one solution to this problem. APIM is an Azure resource that you can provision and have it sit between your API consumers and the APIs exposed by your systems.

Azure API Management

My focus here is on one example: the D365 Web API. The first question that comes to mind is: why expose the D365 API through APIM when the API is already modern and well documented? Here are a few reasons:

  1. The Common Data Service has API limits. If you have systems that read data from your Dynamics 365 instance through APIM, you can cache the results and save on API calls. APIM comes equipped with a built-in cache, but if you need a bigger cache you can attach an external cache system like Redis.
  2. Because of the imposed CDS per-user limits, with APIM you can limit the number of API calls made by your consumer systems. As of this writing, Microsoft provides 25,000, 50,000, or 100,000 API calls per application or non-interactive user, based on your licensing model (see details here).
  3. With APIM, you can set up authentication to your D365 Web API using an Azure AD application instead of a licensed user. Callers of the API don't need a user record or security role in D365, which eases user management. This ties back to number 2 above: these API calls don't count against a user quota because we are not authenticating as a licensed user. The type of user we will use is called an Application User, and it doesn't need a license.
  4. If the systems that talk to the D365 Web API expect results in XML, you can transform the JSON output of the D365 API into XML. Other transformations are also supported by APIM.
  5. APIM provides many other policies that you can put in place between your consumers and the D365 API; to mention a few, you can change headers, add data to the request/response, etc.
  6. The D365 API is extensive, and it is time consuming for its consumers to learn it quickly. In APIM, you can expose endpoints only for what your API consumers actually need.
  7. You get analytics on who is calling your APIs and how many requests hit each endpoint.
  8. You can package your API endpoints into products (groups of APIs that serve a defined purpose), and you can provide your consumers with subscription keys to track who is calling which endpoint.

To this point, we haven’t done any real work. In summary, here is what we want to do:

  1. Provision an APIM instance.
  2. Create a simple API in APIM that calls our D365 Web API.
  3. Setup the Authentication between APIM and D365 Web API using Azure AD and without consuming a Dynamics licence.
  4. Add a send-request policy for token generation to implicitly obtain a token and send it to D365 API.
  5. Call the APIs from APIM.

Provisioning an APIM Instance in Azure

Azure APIM comes in different pricing tiers. In this blog, I opted for the Developer tier; the steps below should work on all other tiers, including the Consumption tier. Head to Create a resource in Azure, search for API Management, and create it as below. The name needs to be globally unique. With the Developer tier, expect a wait of at least 30 minutes for the resource to provision; if you want much faster provisioning, select the Consumption tier. Once you provision the APIM instance, it will be accessible at https://{Name of your APIM resource}.azure-api.net/, and this URL will serve as the base URL for all the APIs behind this APIM resource.

Create A simple APIM API that Calls D365 API

We can create APIs in APIM in different ways. If your API has an OpenAPI file (previously called Swagger), you can import it and APIM will create the operations for you. Unfortunately, with the Dynamics API we don't have that luxury, and we need to create the API manually, starting from a blank API.

In your provisioned instance, click APIs on the left navigation, and click on Blank API.

In the dialog that appears, fill in a friendly name for your API and add the base URL of the Dynamics API. If you have many APIs that you want to hide behind an APIM resource, give each API a URL suffix; in my case, I call it d365api. So now, calling https://{your crm org}.crm3.dynamics.com/api/data/v9.1/ is equivalent to calling your APIM instance at https://apimd365.azure-api.net/d365api

Once the API is created, we need to add operations to it. Assume that the ESRI maps team at your organization wants to draw all of your contacts and accounts on a map and needs their data. You can easily expose two operations like this:

https://apimd365.azure-api.net/d365api/contacts

https://apimd365.azure-api.net/d365api/accounts

The way to do this is by adding an operation as shown below; you need to specify the path of the operation. Because Dynamics 365 exposes a REST API, you can get all the contacts by appending "contacts" to the base URL, and the same goes for accounts. Of course, if you want to return only specific data about those contacts or accounts, you can pass OData filters in the query part of the operation being created.
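For example, a consumer could call the contacts operation with OData query options appended. This is a sketch; fullname, emailaddress1, and statecode are standard contact attributes, so adjust them to what your consumers actually need:

```
GET https://apimd365.azure-api.net/d365api/contacts?$select=fullname,emailaddress1&$filter=statecode eq 0
```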

At this point, we have a fully non-functional API :). The reason is that Dynamics 365 still doesn't know about APIM and doesn't trust any request that comes from it. Next, let's see how we can implement server-to-server authentication without any intervention from the API caller.

Set the Authentication between APIM and D365

Note: Application and Client are used interchangeably in Azure AD terminology

APIM is very flexible from the API authentication/authorization point of view. It allows you to add OAuth 2.0 or OpenID Connect authorization servers as part of its configuration to get access tokens that you can later supply to your API calls. As you may know, the D365 API uses OAuth 2.0 to authenticate: you call an authorization endpoint that provides you with an access token, then pass this token in a header called "Authorization" with every call you make to the D365 API. Here we have two options: let the APIM consumers be responsible for calling the authorization endpoint, getting the token, and providing it as a header, or make their life easy and abstract the authorization away from them completely. I prefer the latter option because it is less of a headache for the end users of your APIM.

In the case we are tackling now, my Dynamics 365 and my APIM are in the same tenant, meaning they both operate under the same Azure Active Directory. To let APIM authenticate with the Dynamics API, we need to create an app registration in Azure AD, give it permission to access Dynamics 365, and let the Dynamics 365 organization know about it. This app registration is like a user identity that APIM uses to authenticate with the D365 API.

To create an Application Registration in Azure AD, click on App Registration, New Registration.

Give the app registration a friendly name and select Web as its type. In the redirect URL, fill in any value, as it is not important here; for example, use https://localhost

Take a note of the application ID and the tenant ID as we will need them later.

To complete the credentials, we need to generate a secret that behaves like the password for the app registration. In a production environment, try to store this secret in a key vault, but for now just create a secret, set its expiry, and copy its value as well. (This is your only chance to copy the secret; after that it will be masked forever. If you lose it, you need to generate a new one.)

Our app registration is created, but two things are missing: the permission to access the Dynamics APIs, and letting the Dynamics system know about this app registration. To grant the permission, click API permissions, Add a permission, and choose Dynamics CRM.

Select the user_impersonation permission and click Add permissions.

To make Dynamics 365 aware of this app registration, you need to create what is called an Application User (you need to be a Dynamics administrator to do this). The application user is the Dynamics 365 representative that talks to the app registration. Navigate to Settings > Security > Users, switch to the Application Users view, and click New User. You only need the Application ID of the app registration that you collected earlier. Fill in the other required fields and save; if everything is successful, the Application ID URI and Object ID should auto-populate. Give this user a security role that grants access only to the data needed by your API consumers and nothing more. For our example to work, grant at least read access on the account and contact entities.

The last piece of information we need from Azure AD is the token generation URL; this is the URL that APIM will call to get a token used to authorize requests to the D365 API. In the overview section of either Azure AD or your App Registration, click on Endpoints in the command bar and copy the token URL.

By now, you should have the application ID, tenant ID, secret value and the token generation URL.
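For clarity, here is what that token call looks like when sketched in plain JavaScript. The IDs, secret and org name below are placeholder values, not real ones, and in our setup this request will actually be issued by an APIM policy rather than by hand:

```javascript
// Sketch of the client-credentials token call that our APIM policy will
// automate. All IDs, the secret and the org name are placeholder values.
function buildTokenRequest(tenantId, clientId, clientSecret, resourceUrl) {
  return {
    url: "https://login.microsoftonline.com/" + tenantId + "/oauth2/token",
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    // grant_type=client_credentials means app-only authentication:
    // no interactive user sign-in is involved.
    body: [
      "client_id=" + encodeURIComponent(clientId),
      "resource=" + encodeURIComponent(resourceUrl),
      "client_secret=" + encodeURIComponent(clientSecret),
      "grant_type=client_credentials"
    ].join("&")
  };
}

const req = buildTokenRequest(
  "11111111-2222-3333-4444-555555555555", // tenant ID (placeholder)
  "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee", // application (client) ID (placeholder)
  "my-secret-value",                      // secret value (placeholder)
  "https://myorg.crm3.dynamics.com/"      // org URL (placeholder)
);
// The JSON response of this POST contains the access_token that goes into
// the Authorization header as "Bearer {access_token}".
```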

Add a send-request Policy For Token Generation

To make things easy for the APIM consumers, we implicitly authenticate with the D365 API using the App Registration and Azure AD. One of the most amazing features of APIM is the ability to do almost anything between receiving a call and forwarding it to the back-end service (the D365 API). In our case, we need to issue a call to the token generation URL with the proper App Registration credentials, get a token and insert it into a header called “Authorization”.

Click on APIs, select the API we created above, click on All operations, and in the Inbound processing designer, click the little edit icon. The reason we do this on All operations is that D365 requires a token on every operation, so we set it up once here and it applies to each individual operation.

By default, your inbound processing looks like this, which means: do nothing on inbound calls, nothing at the back-end and nothing with the outbound results. In our case, we want to add a request to the inbound stage that gets us a token.

Policies in APIM are very flexible and you can do many things with them to manipulate the call pipeline. In our case, we want to send a request to the token URL, and that's done with a send-request policy. You will need most of the values you collected in the App Registration step. Notice that the send-request policy stores the result in a variable called bearerToken, and just after it, a set-header policy creates a header called “Authorization” and populates it with the value of the access token. (Note: the send-request returns a bearerToken response object that, among other things, contains the access token that needs to go into the authorization header; that explains the casting logic you see in the set-header policy.)

  <inbound>
        <base />
        <send-request mode="new" response-variable-name="bearerToken" timeout="20" ignore-error="true">
            <set-url>https://login.microsoftonline.com/{your tenant guid here}/oauth2/token</set-url>
            <set-method>POST</set-method>
            <set-header name="Content-Type" exists-action="override">
                <value>application/x-www-form-urlencoded</value>
            </set-header>
            <set-body>@{
              return "client_id={your application (client) id guid}&resource=https://{your org name}.crm3.dynamics.com/&client_secret={your secret value}&grant_type=client_credentials";
            }</set-body>
        </send-request>
        <set-header name="Authorization" exists-action="override">
            <value>@("Bearer " + (String)((IResponse)context.Variables["bearerToken"]).Body.As<JObject>()["access_token"])</value>
        </set-header>
    </inbound>

Call The API from APIM

Now is when the hard work “should” pay off. Open a browser tab (since we are doing a GET request, a browser is enough; otherwise, use Postman) and paste this URL:

https://apimd365.azure-api.net/d365api/contacts

If you see the list of contacts returned to you, then you have done everything right. If you don't, check the returned error; most of the time it is security related, and reviewing all the values in the app registration and the send-request policy is a good place to start debugging.

Subscription Keys and Products

APIM comes equipped with two powerful features: Products and Subscriptions. Products are a way to group specific APIs together to manage them as a unit. Subscriptions are basically a method of tracking API consumers by asking them to provide a subscription key with each request. You can control API limits and analyse usage by subscription key; rate limiting and throttling policies are very common in APIM (more on that here). This adds a simple layer of security and tracking capability to your APIM. For example, our call to get contacts will look something like this:

https://apimd365.azure-api.net/d365api/contacts?subscription-key={some key}

Each subscription has a primary and a secondary key that you can provide to your APIM consumers; if they are compromised, you can regenerate them. Also, subscription keys can be passed as a header if you don't want to expose them in the URL.
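As a sketch, a consumer script could attach the key as a header like this. The endpoint URL and key are placeholders; Ocp-Apim-Subscription-Key is APIM's default header name for subscription keys:

```javascript
// Sketch: passing the subscription key as a header instead of in the URL.
// The endpoint and key are placeholders; Ocp-Apim-Subscription-Key is the
// default header name APIM expects for subscription keys.
function buildApimRequest(endpoint, subscriptionKey) {
  return {
    url: endpoint,
    options: {
      method: "GET",
      headers: { "Ocp-Apim-Subscription-Key": subscriptionKey }
    }
  };
}

const contactsCall = buildApimRequest(
  "https://apimd365.azure-api.net/d365api/contacts",
  "{some key}" // placeholder subscription key
);
// A consumer would then issue the request, e.g.:
// fetch(contactsCall.url, contactsCall.options).then(r => r.json());
```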

Summary

We have created an APIM resource that sits between your D365 API callers and the API itself. APIM provides a lot of control over what happens to a request over its lifetime. We showed the importance of authenticating/authorizing using an app registration and an application user, and how this approach saves us from consuming the licensed users' CDS API limits. We also saw how we can use policies to politely hijack the request, issue a call to the token endpoint and set the authorization header with its response. This post is in no way an extensive post about APIM; to learn more about its wide feature set, visit the official documentation here.

Embed Power BI Visuals in Power App Portals for External Customers

Exposing a report or a dashboard tile from Power BI on a page in your Power App Portal is a supported feature. Not only do you get read-only visuals on your page, you also get a good amount of power like natural language Q&A, exporting data, drilling, filtering, slicing, etc.

Power App Portal now has a liquid tag called powerbi; with this simple tag we can embed reports and dashboards in a single line of liquid anywhere on the portal. To get the powerbi liquid tag working, you need to enable Power BI Visualization from the portal admin center (you need to be a global admin).

The powerbi tag accepts an authentication_type, a path parameter for a report or a dashboard, a roles parameter for row-level security roles and a tileid for a specific dashboard tile if needed. Previously, the authentication type allowed only AAD (Azure Active Directory) and Anonymous values. In the AAD case, only users in the company AAD who have the report shared with them can view it (a sign-in may be required to load the report); this is a good fit for an employee portal but hardly for any other portal type, or when your customers are not in your AAD. In the Anonymous case, you need to make your report fully public in the Power BI service, which means no authentication whatsoever and can be a no-no for some organizations. In the October 2019 release, a new authentication type called powerbiembedded became available; with it, we can easily support viewing a Power BI report on the portal by letting the Power App portal itself authenticate with the Power BI service on the user's behalf.

{% powerbi authentication_type:"" path:"" roles:"" tileid:"" %}

The very first step to enable this mode of authentication is by going to the portal admin center and enable Power BI Embedded Service.

You will be asked to select the workspace(s) that include the reports you wish to expose. Move the workspace to the selected workspaces and click Enable. This may trigger a portal restart that can take a few minutes.

Up until now, the Power BI service and the Power App Portal don't trust each other. To enable the portal to authenticate with the Power BI service, we need to make AAD aware of that. We create a security group in AAD and add the portal application to it as a member; this allows us to tell the Power BI service that anyone in this group is allowed to see what's inside the workspace. To do this, log in to https://portal.azure.com and open AAD.

Select Groups

Select New Group

Fill in the group details and click Create

Open the recently created group and click on Members, Add Members.

Search for the portal application and add it as a member. To save time, you can paste the Application ID found under the Portal Admin center in the new member search box and it should filter that portal application only.

Now, we need to tell Power BI that this security group is trusted. To do this, sign in to https://powerbi.microsoft.com and from the gear icon, select Admin Portal.

Under Tenant settings->Developer Settings, add the group you created in the previous step. Any member in this group will have access to Power BI APIs. Click Apply when done.

Our work is done here; what we need now is to expose a visual on the portal. Create a web page with a custom page template and let the page template point to a custom web template. To show a working example, I created a sample web template called PowerBI that contains the powerbi liquid tag.

Notice that the authentication_type is now powerbiembedded and the path points to a report in my Power BI service. Note the roles parameter here: I assigned it a dummy role I created in my report (see https://docs.microsoft.com/en-us/power-bi/service-admin-rls), as it seems at least one role has to be passed in the liquid tag for the visualization to work. A really good use of the roles parameter is to have the roles in your Power BI report match the names of your web roles on the Power Portal; then you can control who sees what data based on their web role.
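A filled-in version of the tag might look like this (the workspace/report GUIDs in the path and the role name are placeholder values for illustration, not from a real environment):

```
{% powerbi authentication_type:"powerbiembedded" path:"https://app.powerbi.com/groups/00000000-0000-0000-0000-000000000001/reports/00000000-0000-0000-0000-000000000002" roles:"DummyRole" %}
```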

The final result? A report on the Power Portal, with the portal taking care of authentication with the Power BI service.

Canvas App Performance Quick Wins

Canvas Apps are relatively easy to implement once you become familiar with their overall structure and their light -but many- formulas. What you don't want is to make your boss love them, then hate them when they are too slow in production. Let's go over some very simple ways to make your current and next Canvas App faster and more responsive and, most importantly, make your users happy. Keep in mind that performance is a huge subject and this is just the tip of the iceberg.

Caching

Say you have a region list in your CDS and you want to populate it in a drop-down on a few screens in your app. You may use the Filter formula like this:

Filter(RegionList, Status = "Active")

and set this as the data source for your drop-down. This is all good until you need this drop-down on many screens, which causes multiple network calls to CDS. This means longer load times and more requests to CDS, which are now counted against your limits.

A quick fix is to load all of your data that doesn't change much (like regions, departments, etc.) once at the start of the app and store it in collections. For regions, you can write something like this in the OnStart of the app:

Collect(collectRegionList, Filter(RegionList, Status = "Active"))

Now, point your drop-downs to the collectRegionList collection, and no repeated loading of data happens.

Concurrency

If you need to load regions, departments, account types and other data at the start of the app, you can write something like this in the app OnStart:

Collect(collectRegionList, Filter(RegionList, Status = "Active"));
Collect(collectDepartmentList, Filter(DepartmentList, Status = "Active"));
Collect(collectAccountTypesList, Filter(AccountTypesList, Status = "Active"))

This will cache your data, which is good, but the load time of your app will be long because the statements run one after the other. Say each line takes 2 seconds to complete; you end up waiting 6 seconds for the 3 statements to execute. A better way is to use the super easy Canvas App concurrency capability, which will take only a bit more than 2 seconds, since the calls run in parallel.

Concurrent(
    Collect(collectRegionList, Filter(RegionList, Status = "Active")),
    Collect(collectDepartmentList, Filter(DepartmentList, Status = "Active")),
    Collect(collectAccountTypesList, Filter(AccountTypesList, Status = "Active"))
)

Delegation

This is an important concept. When a canvas app executes a function on a data source, who does the processing of the records? It depends on the function and the data source. For example, if you use the Filter function with a SharePoint data source and ask for all active records in a set of 10,000, the Canvas App sends the condition to SharePoint and SharePoint returns only the records that are active. The Canvas App doesn't loop through all the records to see which is active and which is not. If your users are on a mobile network, this makes a huge difference in performance. In this case, we say the Filter function is delegable with SharePoint data sources.

On the other hand, some functions are not delegable. Take Search with the same SharePoint data source: when you search for something, the canvas app receives all the records from SharePoint and searches each record locally. Here we say the Search function is not delegable with SharePoint data sources.

So when you choose which function to use, also consider which data source you are using, as both determine how delegable your functions are. Below is a summary of common functions and popular data sources with information about their delegability.

https://docs.microsoft.com/en-us/learn/modules/work-with-data-source-limits-powerapps-canvas-app/2-functions-predicates-data-source
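The idea can be illustrated outside Canvas Apps too: a delegable call pushes the predicate to the server (much like an OData $filter query option), while a non-delegable one downloads everything and filters locally. A rough JavaScript sketch, where the URL and records are made up for illustration:

```javascript
// Delegable style: the predicate travels to the server as a query option,
// so only matching records come back over the wire.
function buildDelegatedQuery(baseUrl, field, value) {
  return baseUrl + "?$filter=" + encodeURIComponent(field + " eq '" + value + "'");
}

// Non-delegable style: every record is downloaded first, then filtered
// locally on the device.
function filterLocally(allRecords, field, value) {
  return allRecords.filter(function (r) { return r[field] === value; });
}

const url = buildDelegatedQuery("https://example.org/api/items", "status", "Active");

// Pretend these three records were all pulled down from the data source.
const downloaded = [
  { name: "A", status: "Active" },
  { name: "B", status: "Inactive" },
  { name: "C", status: "Active" }
];
const active = filterLocally(downloaded, "status", "Active"); // 2 records survive
```

On a slow mobile network, the difference between shipping one predicate versus shipping 10,000 records is exactly what makes delegation matter.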

Refresh? No thanks.

I used to do this a lot with Windows Millennium (a long time ago). While Windows was dying doing something for me, I would hit refresh all the time on my desktop; I think I was helping it die faster. Fast forward to now: don't use refresh unless you absolutely need it. In many cases, Canvas Apps do the refresh for you; if that's the case, don't double the work by calling the Refresh function again.

Use Power Platform with On-Premise SQL Databases

The Power Platform is all built around data. Luckily, this data can reside anywhere, thanks to the Connections feature provided by the platform. If you want your apps to interact with data from a local database hosted on a server somewhere, this is how you do it. This has a lot of potential, from master data management to regulatory compliance and more. I will go through a simple example, starting from creating the database to building a Canvas App around the data.

Step 1: Create the database and a table (if you don’t have one already). In your server, create a database or use an existing one. In this case, I created a DB called Master Accounts.

To simulate CRUD operations later, I created a table called Accounts in this database. Make sure to specify a primary key, or the Canvas App will be read-only, without the ability to add or delete records.

Step 2: Create a user that can access this database. This is achieved by creating a “login” under Security on your server and mapping this login to the created database.

Step 3: To give the Power Platform the ability to interact with your local database, you need to install the On-Premises Data Gateway. Notice that this gateway can be configured to work with all the Power Platform apps or only Power BI; of course, we need the former option. After you are done, the gateway interface will look like the following image.

Step 4: Sign in with your Power Platform Admin Account:

When done, this is what you should get:

Step 5: Connect to the database from the Power Platform. Visit make.powerapps.com and on the left pane, select Data, then Connections. Select New Connection from the command bar.

In the new Connection wizard, select the type of connection to be SQL Server and you should see a window similar to the following:

Of course, you can authenticate in different ways, but I chose the SQL Server Authentication mechanism. Plug in the values we created in the first two steps, and make sure the correct gateway is selected.

If all is well, you should see a new connection in the list, mine looks like this:

Now let’s go and create a canvas app from data and see the magic!

Step 6: Create a Canvas App starting from SQL Server Data.

Select the “Accounts” table we created before and hit Connect:

And now you should end up with a Canvas app that can perform CRUD operations on the local database. It will require some redesign though :)

The nice thing is that this connection is a Power Platform Connection now so you can use it with Flow!

This capability opens a lot of doors for organizations that are hesitant to move their data up into the cloud, so go experiment with it and use it!

Power Platform and Change Management

Let’s face it, switching users from their Excel sheets or Access databases toward one monolithic Dynamics 365 application can be a hard change management process if you have many users to convince. Sometimes, even upper management can’t force that change, depending on the type of organization.

With the new Power Platform capabilities, change management seems to be getting easier and easier, because now we have options that we didn’t have before (or did have, but that are now improved). Once the organization decides that this is the platform to go with, here are some options that make it easier to convince the user base to switch.

A simple approach that can be used right away is the model-driven apps capability of dividing your application into verticals. If you have one huge application with many entities, create multiple apps used by different business units or groups of users. Each business unit or group should only see what they need to see; this reduces the probability of users getting lost in the application and the amount of training they need. It also reduces the error rate, because their options are limited to what they actually need.

With model-driven apps, in addition to limiting which entities a user can see, you can also limit which forms, views, charts, dashboards and business process flows they can see. So when you have an entity (like the Case) that is used by multiple groups, each group can see their own forms, views and charts without being overwhelmed by everything else. I won’t call this a security layer, but a way of organizing components.


If model-driven apps are not enough, then Canvas Apps come to the rescue. Canvas Apps are new, and their concept is new. Unlike model-driven apps, which seem intuitive to someone who knows the previous versions of Dynamics, Canvas Apps require a shift in the design mentality. Now we are not talking about a single application that can do many things, but about an application surrounded by many little helper applications that all feed the same data layer (the Common Data Model). So when you create data using a Canvas App, it is possible to view it from Dynamics and vice versa.

The introduction of Canvas Apps adds a new question to the design process: “Should we implement this module in Dynamics or using a Canvas App?“. This question is becoming an important one because it doesn’t only affect the application architecture but also the user onboarding experience, training time, error rate and user confidence.

Canvas apps are great when there is a user or group of users who perform a limited set of functionalities that can be separated away. Take the example of a service call center agent who just answers calls, logs a ticket and tries to solve or escalate it. You don’t need to train this agent on the whole almighty Dynamics for Customer Service, but only on the screen or two of the Canvas App that she and her team have access to. Keep in mind that Canvas Apps can also handle more complicated use cases.

So to make the change management process easier, you don’t need to take users away from their Excel sheet into an application that is 100 times its size, but into an application that is almost the same size as their Excel sheet. Success is almost guaranteed in this case.

Using the Calendar Control View in the Unified Interface

Often, we get asked to show records in a calendar view. I personally used the JavaScript-based Full Calendar many times in the past to do that. If your requirement is just showing the records on a calendar with basic functionality then the Calendar control in the unified interface might be your answer.

In the classic interface, we used to have a calendar control on the entity that only works in the Phone and Tablet Layouts. This control basically allows us to view the records on a calendar instead of just showing them in a list.

Moving to the unified interface, the “Web” option is now available. To test that, I created a dummy event entity with Start date, End date and Description fields.

A custom Entity with Start date, end date and description fields.

Then, from the Controls section on the new entity (use the classic interface designer, as this is not available yet in the new designer), add a Calendar control, enable it for Web and bind the start, end and description fields to the fields we just created. Note that the description field is what shows on the calendar; you can either bind it to the name of the record or to a custom description field if you want to show more information. Save and publish your changes.

Add the calendar control and bind the values

Now when you go to view the events, instead of the classical view, you will see a nice calendar view.

The calendar control shows instead of the classical view.

If you’d like to go back to the normal list view, you can do that from the top right corner.

Business Rules for PowerApps Portals – v1

When it comes to customizing Dynamics 365, I don’t care how we do it; I care about enabling customers to use the system easily after it gets delivered to them. This of course means that if we can get things done with OOB configuration and customization wizards, then that is the way to go; writing code is the last option. One example is the use of Business Rules instead of client-side scripting: for simple to medium needs, a business rule can save us (and the customer) from nasty JavaScript code and enable them to change it later without worry.

The same problem applies to the Portals side of Dynamics. I’ve never worked on a portal project where the OOB features satisfied the client’s needs. This means any small change, like hiding a field or a section, needs to be backed by some JavaScript that lives inside the Entity Form or the Web Form Step. Even though the needed JavaScript can be simple, not everyone is comfortable writing it, especially if the Dynamics admin is not a technical person, and honestly, they shouldn’t need to know JavaScript.

I thought of a configuration-based solution that I call Portal Business Rules. This solution doesn’t have a fancy designer like the Business Rules on Dynamics forms, but it is configuration based and capable of producing/modifying JavaScript without the need to write it yourself. It covers many of the common functionalities a project needs. That being said, and similar to how client-side scripting is still needed on the Dynamics side even with Business Rules, complex needs will still require JavaScript on the portal; the good news is that this complex JavaScript can coexist with my proposed solution.

The current functionality of the solution is limited to:

  1. Each rule is governed by a single IF/ELSE condition.
  2. The rule works with Entity Forms and Web Form Steps.
  3. Each rule can have an unlimited number of actions. Actions include Show/Hide Fields, Disable/Enable Fields, Make Fields Required/Not Required, Set Field Value, Prevent Past Date and Prevent Future Date (for datetime fields), Show/Hide Sections and Show/Hide Tabs.
  4. A rule will parse the XML of the related form or tab and suggest the fields/sections/tabs to be used in the rule logic.
  5. For some field types (option sets and two option fields), a suggested-value table shows up for ease of use. So instead of figuring out the integer value of an option set field, the values are listed for the user to select from.
  6. The ability to use “In” and “Not In” operators. For example, you can say an option set value is in “2^3^4”, which means if the option set is any of these 3 values, the condition holds true.
  7. You can see the generated JavaScript directly in a special tab.
  8. The generated JavaScript for all the rules gets injected into the Entity Form or Web Form Step Custom JavaScript field, decorated with special comments to make it clear that it was generated by the solution and not by hand.
  9. When a rule is deleted or drafted, its logic gets removed automatically from the corresponding entity form or web form step.
  10. Basic error handling is included, so when an operand has the wrong value format, an error shows up telling the user to fix it.
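To make the “In” operator in item 6 concrete, the generated check might look roughly like this (a sketch of the idea only; the solution’s actual generated code may differ):

```javascript
// Sketch of an "In" operator check: the rule stores the allowed values as a
// caret-separated string like "2^3^4" and tests the field value against it.
function valueIsIn(fieldValue, caretSeparatedList) {
  const allowed = caretSeparatedList.split("^");
  return allowed.indexOf(String(fieldValue)) !== -1;
}

// An option set value of 3 matches the list "2^3^4"; 5 does not.
const hit = valueIsIn(3, "2^3^4");  // true
const miss = valueIsIn(5, "2^3^4"); // false
```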

Here is a quick video showing the installation steps:

Here is a simple rule creation demo that shows/hides a tab based on a two option set value:

Another demo of multi action rule, where the Job Title field is shown and becomes required if the Company Name field is populated:

Another demo of how an option set is used in a rule. How error handling works if the operand value is of wrong format.

And finally, the “In” Operator is one of the advanced operators. Here is an example of how we can populate a field if the condition falls into one of a predetermined list of values:

Of course, there are many other features that you may want to check out if you install the solution: manipulating section visibility, field states (enabled and disabled) and many more.

Many will notice that we can only have one condition in a single rule for now. I’m currently thinking about the best way to associate more conditions with a rule, with either AND or OR logical operators between them, similar to how Dynamics 365 Business Rules behave.

To be fair, the best solution to this problem is not my proposed one; it is to make the Business Rules that currently exist for Dynamics forms work on portal forms as well. That needs to be done by Microsoft itself, as there is not much visibility into the Business Rules engine for us developers. From what I can tell, the business rules in Dynamics seem to be built on Windows Workflow Foundation (judging by their XAML).

In summary, the problem I’m trying to solve is reducing the need for code further, similar to how Business Rules reduced the need for client-side scripting on the Dynamics 365 side. If code is still needed, my solution and custom code can live together.

Please refer to my repository on Github for installation steps. Feedback is really appreciated.

NOTE: For the JavaScript functions that I call in the back-end, I use this existing library on GitHub, developed by Aung Khaing.

Update October 16, 2019

While searching, I found out that a company called North52 has a similar solution that was done before; they inject JavaScript the same way I do, but of course with a nicer interface :). My solution provides a bit more functionality though. Here is the link.