Syncing Contact Attributes to B2C Users for PowerApp Portals with Cloud Flows

Note: The original article goes through this process from a Logic Apps perspective, but in my case I needed to do it in a cloud flow. Thanks to the original author.

If you have ever used Power Apps Portals with the most common authentication mechanism (Azure AD B2C), you may have come across this problem: portal users change their email address or contact information (or, less frequently, their names) from within the portal, or backend users change these attributes directly in the model-driven app.

The problem is that the Azure AD B2C user directory doesn't pick up those changes: they don't flow back from Dataverse into B2C. The common solution has been custom code: a plugin that calls the Graph API to update the Azure AD B2C directory. I will follow a similar approach, but using a cloud flow instead of a plugin. This makes future changes easier and lets us add some cool things like retry logic if the request fails, an email notification to the administrator, and so on.

Assumptions:

  1. I assume that you have a portal ready that uses Azure AD B2C as its authentication mechanism. If you don’t, check out this very new feature on how to set that up using a wizard.
  2. I also assume that you have administrative privileges to add and configure an application registration in the B2C tenant.

The Solution:

We will build a daemon flow that runs on update of the contact entity (a portal user is a contact). The flow will execute an HTTP request against the Graph API to update the corresponding user record in B2C with the new values from Dataverse. Before building the flow, we need a way to authenticate it; the best way to do that is to create an app registration that takes care of authentication and to use that app registration in our flow.

Part 1: Creating the App Registration

Since we don’t have a handy connector for the Graph API that does what we want, we need to issue HTTP requests to the API ourselves. To issue these requests, we need to authenticate with the Graph API. Instead of hardcoding a username and password inside the flow, we will use the application registration approach.

Navigate to portal.azure.com and open the B2C directory that you use to authenticate the portal (it is important that you are in the B2C directory and not your default one). Make note of the B2C directory name, as we will need it later. Then search for the Azure AD B2C service and open it.

  1. Under App Registrations, click New registration.

  2. Give your app a name and make sure to select the last supported account type option (the one for authenticating users with user flows), since we are already using user flows to authenticate portal users who will mostly be from outside our organization. Click Register when done.

  3. After creating the application, take note of the Client ID and Tenant ID shown on the app registration Overview page; we will need them later.

  4. We need to create a secret for this app, as the flow needs it for the HTTP request authorization. Click Certificates & Secrets in the left pane and, under Client secrets, click New client secret. Give the secret a name and an expiry date and click Add. Once the secret is added, you have one chance to copy its value, as it will be hidden once you navigate away from this page. So, by now, we should have the B2C directory name, Client ID, Tenant ID, and secret values saved on the side.

  5. This app is almost ready; we just need to fatten its permissions to make it capable of managing the aspects of the B2C tenant that will let us solve our problem. Click API permissions in the left pane, then click Add a permission.

From the dialog on the right, choose Application permissions, because our flow will be a background process, and select the following permissions (you can search by permission name):

User.ManageIdentities.All, User.ReadWrite.All, Directory.Read.All

After adding these permissions, make sure to click “Grant admin consent for YOUR-TENANT-NAME”. Now we are ready; let’s move to the Flow side.

Part 2: Creating the Flow

The flow is really simple. Use the current environment trigger, configure it to run on Update, and scope it to Organization. Then initialize variables to hold the values we collected previously; you should have everything handy by now except for the Auth Audience URI, which should be https://graph.microsoft.com. Since we only want this flow to run for contacts who are actually portal users, I check the User Name field on the contact entity. In Dataverse, the contact record has a field called User Name which holds a GUID representing its Azure AD B2C object ID.

Now, for the HTTP action, make sure that the method is PATCH, since we will only be updating pieces of the user record. For the URI, you need to use https://graph.microsoft.com/v1.0/users/{Contact Record User Name}.

For the headers, we need the Authorization header and the Content-Type header. The Authorization value is built from the app registration secret we created in the previous steps, and the Content-Type is hard-coded as application/json.

We are not done with authentication yet: click on the advanced settings at the bottom of the HTTP action and configure Active Directory OAuth authentication using the same values we collected before (tenant, audience, client ID, and secret).

Now the last part, the body of the PATCH request. In my case, I’m interested in the Email (mail) , First Name (givenname) and last name (surName) because those are the only fields I mapped when I configured the B2C authentication for my portal. This means that if the contact changes their name or email, the corresponding B2C user will get those changes in few seconds as well. For more info on the schema of the JSON you see above in the body field and if you are interested in updating other fields, refer to this documentation.

That’s it! We are now good to go. Update a portal user’s email, first name, or last name on the portal profile page or directly in the model-driven app, and in a few moments the changes should be reflected in the B2C directory. Here is a very short demo of how this works:


Power Portal Web API Helper – An XrmToolBox Plugin

Web API for Portals is a new feature that finally lets you issue Web API calls from your portal to the CDS. This post doesn’t go into the details of the feature; everything you want to know can be found here. The summary is that when you want to enable an entity for the Web API on the portal, you need to configure a few site settings (for example, Webapi/contact/enabled and Webapi/contact/fields for the contact entity) and some entity permissions, and then call the Web API using some JavaScript code. This plugin is super simple: it just helps you enable or disable an entity, and any attribute of that entity, for the Web API feature on the portal. In addition, it provides you with simple JavaScript snippets that you can use as a starting point in your project.

How to use this plugin?

  1. Install the plugin from the Tool Library in XrmToolBox.
  2. Open the plugin and connect to your organization.

  3. Click the Load Entities button in the command bar, and the system and custom entities will be listed for you. You can search for your entity name in the filter box. A website selector also appears in the command bar; you need to select the target website if you have more than one in your target organization. Enabling an entity on Website A only creates the proper site settings for Website A; if you change the website, you need to enable the entity again, as new site settings need to be created.

Note: Microsoft Docs clearly says that this feature should be used with data entities such as contact, account, case, and custom entities. It shouldn’t be used on configuration entities such as adx_website or other system entities that store configuration data. Unfortunately, to my knowledge there is no fully deterministic way to filter out every entity that shouldn’t be used with the Web API, so if you know of one, please tell me so that I can fix the filtering logic. The current logic excludes the list provided by Microsoft, which consists mostly of the adx_ entities, plus some judgment calls I made on other entities that store configuration data rather than business data.

  4. Once you select an entity, you can either enable or disable it, and you can also select the fields you want to expose. When you are done, just click Save Changes in the command bar and all the site settings will be created/updated for you.

  5. If you are not very comfortable with JavaScript, clicking Generate Snippets in the command bar will give you Create/Update/Delete snippets, plus the wrapper AJAX function used to issue the Web API calls, to help you get started quickly.

  6. As an additional small feature, the JSON object used in the Create and Update snippets will include the selected attributes with default values. If you don’t see some of these attributes in your output JSON string, it is most likely because the attribute is not available for Create or Update; the plugin checks for those conditions. For example, when creating an account record you can provide the accountid in the Create call, but not in the Update call, since accountid is not updatable.

That’s it: a simple plugin that will save you some time when looking up attribute logical names and when you want to enable entities for the portal Web API feature.

Project Link on GitHub: https://github.com/ZaarourOmar/PowerPortalWebAPIHelper/issues

Embed Power BI Visuals in Power App Portals for External Customers

Exposing a report or a dashboard tile from Power BI on a page in your Power Apps portal is a supported feature. Not only will you get read-only visuals on your page, but you will also get a good amount of power like natural language Q&A, exporting data, drilling, filtering, slicing, etc.

Power Apps portals now have a liquid tag called powerbi; with this simple tag we can embed reports and dashboards in a single line of liquid anywhere on the portal. To get the powerbi liquid tag working, you need to enable Power BI visualization from the portal admin center (you need to be a global admin).

The powerbi tag accepts an authentication_type, a path parameter for a report or a dashboard, a roles parameter for row-level security roles, and a tileid for a specific dashboard tile if needed. Previously, the authentication type allowed only AAD (Azure Active Directory) and Anonymous values. In the AAD case, only users in the company AAD who have the report shared with them can view it (a sign-in may be required to load the report); this is a good fit for an employee portal, but hardly a good fit for any other portal type, or if your customers are not in your AAD. In the Anonymous case, you need to make your report fully public in the Power BI service, which means no authentication whatsoever and can be a no-no for some organizations. In the October 2019 release, a new authentication type called powerbiembedded became available; with it, we can easily support viewing a Power BI report on the portal by letting the portal itself authenticate with the Power BI service on the user’s behalf.

{% powerbi authentication_type:"" path:"" roles:"" tileid:"" %}

The very first step to enable this mode of authentication is to go to the portal admin center and enable the Power BI Embedded service.

You will be asked to select the workspace(s) that include the reports you wish to expose. Move the workspace to the Selected workspaces list and click Enable. This may trigger a portal restart that can take a few minutes.

Up until now, the Power BI service and the Power Apps portal don’t trust each other. To enable the portal to authenticate with the Power BI service, we need to make AAD aware of that: we create a security group in AAD and add the portal application to it as a member. This lets us tell the Power BI service that anyone in this group is allowed to see what’s inside the workspace. To do this, log in to https://portal.azure.com and open AAD.

Select Groups

Select New Group

Fill in the group details and click Create

Open the recently created group and click Members, then Add members.

Search for the portal application and add it as a member. To save time, you can paste the Application ID (found in the portal admin center) into the new member search box, and it should filter down to that portal application only.

Now, we need to tell Power BI that this security group is trusted. To do this, sign in to https://powerbi.microsoft.com and, from the gear icon, select Admin Portal.

Under Tenant settings->Developer Settings, add the group you created in the previous step. Any member in this group will have access to Power BI APIs. Click Apply when done.

Our work there is done; what we need now is to expose a visual on the portal. Create a web page with a custom page template, and let the page template point to a custom web template. To show a working example, I created a sample web template called PowerBI that contains the powerbi liquid tag.

Notice that the authentication_type is now powerbiembedded and the path points to a report in my Power BI service. Note the roles parameter: I assigned it to a dummy role I created in my report (see https://docs.microsoft.com/en-us/power-bi/service-admin-rls), as it seems at least one role has to be passed in the liquid tag for the visualization to work. A really good use of the roles parameter is to have the roles in your Power BI report match the names of your web roles on the portal; then you can control who sees what data based on their web role.

The final result? A report on the portal, with the portal taking care of authentication with the Power BI service.

Use Power Platform with On-Premise SQL Databases

The Power Platform is all built around data. Luckily, this data can reside anywhere, thanks to the Connections feature provided by the platform. If you want your apps to interact with data from a local database hosted on a server somewhere, this is how you do it. It has a lot of potential, from master data management to regulatory compliance and more. I will go through a simple example, starting from creating the database and moving on to building a Canvas App around the data.

Step 1: Create the database and a table (if you don’t have one already). On your server, create a database or use an existing one. In this case, I created a DB called Master Accounts.

To simulate CRUD operations later, I created a table called Accounts in this database with a simple CREATE TABLE script. Make sure to specify a primary key, or the generated Canvas App will be read-only, with no ability to add or delete records.

Step 2: Create a user that can access this database. This is achieved by creating a “login” under the Security folder of your server and mapping this login to the created database.

Step 3: To give the Power Platform the ability to interact with your local database, you need to install the On-premises data gateway. Note that this gateway can be configured to work with all the Power Platform apps or only with Power BI; of course, we need the former option. After you are done, the gateway interface should report that it is up and running.

Step 4: Sign in with your Power Platform admin account:

When done, this is what you should get:

Step 5: Connect to the database from the Power Platform. Visit make.powerapps.com and, in the left pane, select Data, then Connections. Select New Connection from the command bar.

In the new Connection wizard, select the type of connection to be SQL Server and you should see a window similar to the following:

Of course, you can authenticate in different ways, but I chose the SQL Server Authentication mechanism. Plug in the values we created in the first two steps and make sure that the correct gateway is selected.

If all is well, you should see a new connection in the list, mine looks like this:

Now let’s go and create a canvas app from data and see the magic!

Step 6: Create a Canvas App starting from SQL Server Data.

Select the “Accounts” table we created before and hit Connect:

And now you should end up with a Canvas app that can perform CRUD operations on the local database. It will require some redesign though. :)

The nice thing is that this is now a Power Platform connection, so you can use it with Flow!

This capability opens a lot of doors for organizations that are hesitant to move their data to the cloud, so go experiment with it and use it!

Power Platform and Change Management

Let’s face it: switching users from their Excel sheets or Access databases to one monolithic Dynamics 365 application can be a hard change management process if you have many users to convince. Sometimes, even upper management can’t force that change, depending on the type of organization.

With the new Power Platform capabilities, change management seems to be getting easier and easier, because we now have options that we didn’t have before (or that we had, but that have since improved). Once the organization decides that this is the platform to go with, here are some options that will make it easier to convince the user base to switch.

The simplest approach, which can be used right away, is the model-driven apps capability of dividing your application into verticals. If you have one huge application with many entities, create multiple apps that are used by different business units or groups of users. Each business unit or group should only see what they need to see; this reduces the probability of users getting lost in the application and the amount of training they need. It also reduces the error rate, because their options are limited to only what they need.

With model-driven apps, in addition to limiting which entities a user can see, you can also limit which forms, views, charts, dashboards, and business process flows they see. So when you have an entity (like Case) that is used by multiple groups, each group can see their own forms, views, and charts without being overwhelmed by everything else. I wouldn’t call this a security layer, but rather a way of organizing components.


If model-driven apps are not enough, then Canvas Apps come to the rescue. Canvas Apps, and the concept behind them, are new. Unlike model-driven apps, which seem intuitive to someone who knows the previous versions of Dynamics, Canvas Apps require a shift in design mentality. Now we are not talking about a single application that can do many things, but about an application plus many other little helper applications around it that all feed the same data layer (the Common Data Model). So when you create data using a Canvas App, it is possible to view it from Dynamics, and vice versa.

The introduction of Canvas Apps adds a new question to the design process: “Should we implement this module in Dynamics or as a Canvas App?” This question is becoming an important one because it affects not only the application architecture but also the user onboarding experience, training time, error rate, and user confidence.

Canvas Apps are great when there is a user or group of users with a limited set of functionalities that can be separated out. Take the example of a service call center agent who just answers calls, logs a ticket, and tries to solve or escalate it. You don’t need to train this agent on the whole almighty Dynamics for Customer Service, only on the screen or two of the Canvas App that she and her team have access to. Keep in mind that Canvas Apps can also handle more complicated use cases.

So to make the change management process easier, you don’t need to take users away from their Excel sheet into an application that is 100 times its size, but into an application that is almost the same size as their Excel sheet. Success is almost guaranteed in this case.

Using the Calendar Control View in the Unified Interface

Often, we get asked to show records in a calendar view. I have personally used the JavaScript-based FullCalendar many times in the past to do that. If your requirement is just to show records on a calendar with basic functionality, then the Calendar control in the Unified Interface might be your answer.

In the classic interface, we used to have a calendar control on the entity that only works in the Phone and Tablet Layouts. This control basically allows us to view the records on a calendar instead of just showing them in a list.

Moving to the unified interface, the “Web” option is now available. To test that, I created a dummy event entity with Start date, End date and Description fields.

A custom Entity with Start date, end date and description fields.

Then, from the Controls section on the new entity (use the classic interface designer, as this is not yet available in the new designer), add the Calendar control, enable it for Web, and bind the start, end, and description fields to the fields we just created above. Note that the description field is what shows on the calendar; you can bind it to the name of the record, or to a custom description field if you want to show more information. Save and publish your changes.

Add the calendar control and bind the values

Now when you go to view the events, instead of the classical view, you will see a nice calendar view.

The calendar control shows instead of the classical view.

If you’d like to go back to the normal list view, you can do that from the top right corner.

Business Rules for PowerApps Portals – v1

When it comes to customizing Dynamics 365, I don’t care how we do it; I care about enabling customers to use the system easily after it gets delivered to them. This of course means that if we can get things done with OOB configuration and customization wizards, then that is the way to go, and writing code is the last option. One example is the use of Business Rules instead of client-side scripting: for simple to medium needs, a business rule can save us (and the customer) from nasty JavaScript code and enable them to change it later without worry.

The same problem applies to the Portals side of Dynamics. I’ve never worked on a portal project where the OOB features satisfied the client’s needs. This means any small change, like hiding a field or a section, needs to be backed by some JavaScript that lives inside the Entity Form or Web Form Step. Even though the needed JavaScript can be simple, not everyone is comfortable writing it, especially if the Dynamics admin is not a technical person; honestly, they shouldn’t need to know JavaScript.

I thought of a configuration-based solution that I call Portal Business Rules. This solution doesn’t have a fancy designer like the Business Rules on Dynamics forms, but it is configuration based and capable of producing and modifying JavaScript without the need to write it yourself, and it covers many of the common functionalities a project needs. That said, similar to how client-side scripting is still needed on the Dynamics side even with Business Rules, complex needs will still require JavaScript on the portal; the good news is that such complex JavaScript can coexist with my proposed solution.

The current functionality of the solution is limited to:

  1. Each rule is governed by a single IF/ELSE condition.
  2. The rule works with Entity forms and Web form steps.
  3. Each rule can have an unlimited number of actions. Actions include Show/Hide Fields, Disable/Enable Fields, Make Fields Required/Not Required, Set Field Value, Prevent Past Date and Prevent Future Date (for datetime fields), Show/Hide Sections, and Show/Hide Tabs.
  4. A rule will parse the XML of the related form or tab and suggest the fields/sections/tabs to be used in the rule logic.
  5. For some field types (option sets and two-option fields), a suggested value table shows up for ease of use. So instead of figuring out the integer value of an option set field, the possible values are listed for the user to select from.
  6. The ability to use “In” and “Not In” operators. For example, you can check whether an option set value is in “2^3^4”, meaning that if the option set is any of these three values, the condition holds true.
  7. You can see the generated JavaScript directly in a special tab.
  8. The generated JavaScript for all the rules gets injected into the Entity Form or Web Form Step Custom JavaScript field, decorated with special comments to make it clear that it was generated by the solution and not written by hand.
  9. When a rule is deleted or drafted, its logic gets removed automatically from the corresponding entity form or web form step.
  10. Basic error handling is included, so that when an operand has the wrong value format, an error shows up telling the user to fix it.

Here is a quick video showing the installation steps:

Here is a simple rule creation demo that shows/hides a tab based on a Two Options field value:

Another demo of a multi-action rule, where the Job Title field is shown and becomes required when the Company Name field is populated:

Another demo of how an option set is used in a rule, and how error handling works when the operand value is in the wrong format:

And finally, the “In” operator is one of the more advanced operators. Here is an example of populating a field when the condition value falls within a predetermined list of values:

Of course, there are many other operations and features that you will want to check out if you install the solution: manipulating section visibility, field states (enabled and disabled), and many more.

Many will notice that a rule can only have a single condition for now. I’m currently thinking about the best way to associate additional conditions with a rule, combined with AND or OR logical operators, similar to how Dynamics 365 Business Rules behave.

To be fair, the best solution for this problem is not my proposed one, but rather making the Business Rules that currently exist for Dynamics forms work on portal forms as well. That needs to be done by Microsoft itself, as there is not much visibility into the Business Rules engine for us developers. Based on my knowledge, the business rules in Dynamics seem to be built using Windows Workflow Foundation (judging by their XAML).

In summary, the problem I’m trying to solve is further reducing the need for code, similar to how Business Rules reduced the need for client-side scripting on the Dynamics 365 side. If code is still needed, then my solution and custom code can still live together.

Please refer to my repository on GitHub for installation steps. Feedback is really appreciated.

NOTE: For the JavaScript functions that I call in the back end, I use this existing library on GitHub developed by Aung Khaing.

Update October 16, 2019

While searching, I found out that a company called North52 has a similar solution that predates mine; they inject JavaScript the same way I do, but of course with a nicer interface. :) My solution provides a bit more functionality. Here is the link.

Slim Solution, a Plugin for XrmToolbox

Recently, I was given many unmanaged Dynamics 365 solutions to maintain. The thing I hate about solutions is that they can become messy with a click of a button when you add an existing component. By messy I mean there is a lot in the solution that is not needed. If you add the Case entity, for example, many developers add the whole entity even though only one or two fields need to be modified; the rest of the information is confusing, and it is not very straightforward to clean up.

The problem gets worse when you want to build a managed solution out of the unmanaged one. The managed solution needs to be very clear about what it does to which parts of the system. If the managed solution changes one Case field and adds a new relationship, then those are the only changes that should exist in the solution.

While cleaning out the solutions manually by looking at their managed exports (you can tell whether a component has changed by looking at its managed XML export), I decided to write a very small XrmToolBox plugin to help me with that. I wrote the plugin some time ago, but it took a while to validate, as XrmToolBox has a new, lengthy validation process.

The basic idea of the plugin is that it checks all the managed entities added to the solution and finds which fields, forms, or views were either customized or added to each managed entity. Then it tells you which components need to be in the solution so that you can remove the rest.

The plugin is called SlimSolution and it is currently available for download in XrmToolBox. It still lacks many of the features I want, such as checking other component types, but I will be adding those in the near future.

An example of how you would use this plugin: you (or other developers) create a solution and you want to clean it of unwanted components. As an example, I created the solution below, which has:

  1. A custom unmanaged entity
  2. Three managed entities, to which I added all components and metadata.
  3. Some modified/added fields in the Account and KB Article entities; nothing was changed on the Agreement entity.

What I want is to clean up this solution by only keeping the managed entities that have been customized.

When you open the SlimSolution plugin, you first load the solutions and hit Check Solution. A summary appears on the right with details and suggestions about which changes need to stay in those managed entities. Of course, as I mentioned above, the plugin only checks forms, views, and fields for now, and it gives you the list of components that need to stay in the solution.

You can see that the unmanaged entity is not mentioned, because the plugin assumes unmanaged entities were created to be included in the solution (not always the case, but that is the assumption here). You will see information about what needs to stay from the Account and KB Article entities because they were modified. You don’t see anything for the Agreement entity, which means the whole entity can be removed from the solution.

In addition to the above, if the solution contains inactive processes/BPFs/dialogs, it will alert you to remove them from the solution. The code for this plugin is structured in a way that makes adding component validators an easy task, which I will do in the near future as I have some other validator ideas in mind.

Tips on Dynamics 365 Plugin Code validation for AppSource Submissions

Not long ago, I was involved in submitting a really complex application built on top of Dynamics 365 to Microsoft AppSource. The application contains a lot of plugins and code activities that perform some complex tasks and automation. The team faced some issues that I think are worth sharing to save you time if you are working on such a submission.

Microsoft provides us with tools such as the Solution Checker, which validates your solution including your plugin and web resource code. The problem is, that’s not all. When you submit an application to the AppSource team, it goes through rigorous manual and automated checks using tools that are not publicly available to us developers. If there are issues in your code, your submission will be rejected with an explanation of what to fix and a list of issues ordered by priority. To pass the submission, all critical and high priority issues need to be fixed (if you can convince the AppSource team that something needs to be done a certain way and can’t be done another way, they will usually make an exception).

After the first submission, the app got rejected with tons of things to modify or fix (even after running the Solution Checker on all the solutions). To be honest, the documents they sent were scary: 1000+ pages explaining the issues. After looking at the issue list, it turned out that 90% of the critical/high priority issues were related to writing thread-safe plugins. Luckily, the fix was very easy for those issues, but it cost us around two weeks to do another submission and get it verified again. The following are the most common critical issues.

Variables May Cause Threading Issues

A plugin in Dynamics is a simple class that implements the IPlugin interface and thus has, at a minimum, a single Execute method. Almost always, you need to create the organization service, the tracing service, the context, and maybe other objects. A bare-bones plugin that builds will look something like this:

using System;
using Microsoft.Xrm.Sdk;

public class SomePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        throw new NotImplementedException();
    }
}

A useful plugin will have extra objects created so that we can communicate with the Dynamics organization:

public class SomePlugin : IPlugin
{
    ITracingService tracingService = null;
    IPluginExecutionContext context = null;

    public void Execute(IServiceProvider serviceProvider)
    {
        // Obtain the tracing service.
        tracingService =
            (ITracingService)serviceProvider.GetService(typeof(ITracingService));

        // Obtain the execution context from the service provider.
        context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));
    }
}

Now what’s wrong with the above plugin code? In a normal .NET application this is a normal thing to do, but in a Dynamics plugin it is not. To understand why, we need to understand how plugins get executed on our behalf behind the scenes. After a plugin runs for the first time (because of some trigger), the plugin instance, and therefore its class-level variables, gets cached; the constructor is only executed once. This means that in the next run, the same tracing service and context “may” be shared with that run, and the same applies to any variable you define outside your function as a class-level member of your plugin class. Ultimately, this causes threading issues (multiple runs of the same plugin instance compete for the same cached variable), and you may end up with extremely difficult-to-debug errors and unexplained deadlocks. The fix is very simple: declare your variables locally inside the Execute method, so each run of the plugin works with its own set of local variables.

public class SomePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        ITracingService tracingService =
            (ITracingService)serviceProvider.GetService(typeof(ITracingService));

        IPluginExecutionContext context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));
    }
}

This also means that any helper function in your plugin should get what it needs from its parameters and not from class-level variables. If you have a function that needs the tracing service and it gets called from the Execute method, pass the tracing service that was created in Execute to that function instead of making it a class member.

public class SomePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        ITracingService tracingService =
            (ITracingService)serviceProvider.GetService(typeof(ITracingService));

        IPluginExecutionContext context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));

        // do work here
        HelperFunction(tracingService, 1, 2, "string");
    }

    private void HelperFunction(ITracingService tracingService, int param1, int param2, string param3)
    {
        // use the tracing service here
    }
}

On the other hand, anything that is read only (a configuration string, a constant number) is safe to keep as a class-level member.
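As a small illustration (the plugin and configuration names here are made up), the unsecure/secure configuration strings passed to the plugin constructor are a classic example of state that is safe at the class level, because it is assigned once and never changes:

using System;
using Microsoft.Xrm.Sdk;

public class ConfigAwarePlugin : IPlugin
{
    // Read-only state assigned in the constructor never changes between runs,
    // so sharing it across cached plugin instances is safe.
    private readonly string _unsecureConfig;

    public ConfigAwarePlugin(string unsecureConfig, string secureConfig)
    {
        _unsecureConfig = unsecureConfig;
    }

    public void Execute(IServiceProvider serviceProvider)
    {
        ITracingService tracingService =
            (ITracingService)serviceProvider.GetService(typeof(ITracingService));
        tracingService.Trace("Running with configuration: {0}", _unsecureConfig);
    }
}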

Plugins That Trigger on Any Change

This problem is more common. The filtering attributes of a plugin step are a way to limit when the plugin executes. Register the step with as few filtering attributes as possible, and don’t leave it triggering on every attribute. At the time I was involved in that submission, the Solution Checker wasn’t able to detect such a problem, but it may have improved by now.


Plugins That Update the Record Or Retrieve Attributes Again

This is also a common issue. When a plugin is triggered on an update of an entity record, it is a really bad idea to issue another update request against the same record. An example of this is the need to update fieldX based on the value of fieldY: when the plugin triggers on a fieldY change, you issue a service.Update(entity) with the new value of fieldX. This hurts the performance of the whole organization and, even worse, it can cause an infinite loop if the filtering attributes are not set properly. Another bad practice is to issue a retrieve query for the same record when pre-images and post-images could be used instead.
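For the fieldX/fieldY scenario above, one common alternative (sketched below with hypothetical field names) is to register the plugin in the pre-operation stage and set the value directly on the Target entity, so the change is saved as part of the same operation and no second update is needed:

using System;
using Microsoft.Xrm.Sdk;

public class SetFieldXPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        IPluginExecutionContext context = (IPluginExecutionContext)
            serviceProvider.GetService(typeof(IPluginExecutionContext));

        if (!context.InputParameters.Contains("Target"))
            return;

        Entity target = (Entity)context.InputParameters["Target"];
        if (target.Contains("new_fieldy"))
        {
            // Pre-operation: modifying the Target here is persisted with the original
            // update, so no service.Update call (and no extra plugin trigger) is needed.
            target["new_fieldx"] = target.GetAttributeValue<string>("new_fieldy")?.ToUpperInvariant();
        }
    }
}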

To be clear, sometimes there is no way around issuing another retrieve inside the plugin or sending a self-update request; we had some of those cases, and we were able to convince the AppSource team that our way was the only way.

Slow Plugins

As a general rule of thumb, your plugin should be slim: do one small thing and do it fast. Plugins have an upper limit on the time they can run, and your plugin should never get near that limit (not even half of it). When your plugin does exceed the time allocated for it, it is time to redesign it.

Conclusion

While these issues generally have simple fixes, they can cause slowness, unexplained errors, and a rejection from AppSource. Even if you are not submitting anything to AppSource, make sure you set some ground rules on how to write good plugins for the developers working on the same code base. More on plugin best practices can be found here.