[Automation] Things I Wish I Knew Before My First Power Automate Hyperautomation Implementation

Level of Difficulty: Beginner – Senior.

Any toolset comes with its own unique tricks and challenges. There is a ‘first time’ for everything, and it comes with a list of lessons learnt. By sharing these lists, we better prepare the community of developers who may be facing similar challenges. Here’s a summarised list of lessons learnt and things I wish I knew before my ‘first’ enterprise-level automation solution implementation with Power Automate flows.

Using Service Accounts

This is by far the most important lesson learnt. The implementation in question was initially investigated by a different team who used their individual/personal accounts to develop the flows. When members of that team left the organisation, the flows associated with their accounts were lost when their developer profiles were deleted, and the flows had to be rebuilt.

A best practice for automation development is the use of service accounts to avoid the situation described above. During the rebuild of the flows, the implementation of service accounts played a crucial role in the foundational layer of the architecture (including the design of reusable components) as well as in licensing. Believe it or not, developing on personal accounts can also affect how flows are licensed.

Default Resolution

The default resolution of an unattended Power Automate Desktop flow is 1024 x 768. When developing a solution using a different resolution, especially against systems like SAP, the number of field entries visible on certain screens differs between resolutions, which has a huge impact on the success of a flow. Debugging resolution issues is extremely difficult and can be avoided by correctly setting up remote desktop tools with the appropriate default resolution. There is an option to set the default resolution, although the resolution still needs to be configured appropriately for each developer/computer accessing the virtual machine for development purposes.
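As a quick sanity check before kicking off a resolution-sensitive run, something as small as the sketch below can save hours of debugging. It is only an illustration (a Python snippet using the Windows API rather than anything Power Automate-specific), but the idea is to fail fast when the session resolution does not match what the flow's UI selectors were built against.

```python
# Windows-only sanity check (illustrative): confirm the session resolution before
# starting a resolution-sensitive unattended run. 1024 x 768 is the default for
# unattended Power Automate Desktop sessions.
import ctypes

EXPECTED = (1024, 768)

user32 = ctypes.windll.user32
# SM_CXSCREEN (0) and SM_CYSCREEN (1) return the primary display's width and height.
actual = (user32.GetSystemMetrics(0), user32.GetSystemMetrics(1))

if actual != EXPECTED:
    raise SystemExit(
        f"Session resolution {actual} differs from expected {EXPECTED}; "
        "selectors tuned for one resolution may fail on screens like SAP."
    )
print(f"Resolution check passed: {actual}")
```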

Impact of API Limits

Since many actions make use of APIs, those actions are subject to whatever limits are imposed on the service that the API communicates with. For example, the OneDrive API has multiple limits, including file size, call frequency and more. The OneDrive actions in cloud flows inherit those limits. This is relatively well documented, but it does require consideration when designing the solution.
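To make the point concrete, here is a minimal sketch of how custom code called from a flow (for example, an Azure Function) might respect OneDrive/Microsoft Graph throttling instead of failing outright. The endpoint placeholder and retry counts are illustrative assumptions, not values from my implementation.

```python
import time
import requests

# Hypothetical Graph/OneDrive endpoint, for illustration only.
GRAPH_FILE_URL = "https://graph.microsoft.com/v1.0/me/drive/items/{item-id}/content"

def download_with_backoff(url: str, token: str, max_retries: int = 5) -> bytes:
    """Call a Graph/OneDrive endpoint and back off when throttled (HTTP 429)."""
    headers = {"Authorization": f"Bearer {token}"}
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers, timeout=30)
        if response.status_code == 429:
            # Respect the Retry-After header instead of hammering the API.
            wait = int(response.headers.get("Retry-After", 2 ** attempt))
            time.sleep(wait)
            continue
        response.raise_for_status()
        return response.content
    raise RuntimeError("Still throttled after retries; consider spreading the load.")
```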

Impact of Solution Design on Pricing

Microsoft have multiple licensing plans available for Power Automate solutions. The licensing options expand quite quickly with different add-ons available, some of which include (but are not limited to): AI Builder additional capacity, additional user licensing, additional API calls and more. There is a lot to take into consideration when designing a solution because each process step may have an effect on the pricing model that should be selected for the solution.

In order to license a solution appropriately, the limits and configuration need to be properly understood and taken into consideration; Microsoft document these limits and configuration items in their Power Platform licensing guidance. Microsoft predominantly offer two main license plans, per flow and per user, each with their own pros and cons. Per flow plans are more expensive than per user plans and license flows individually, regardless of the number of API calls used. Per user plans are cheaper, but they are subject to API call limits; on large solutions that need to scale, additional API calls may need to be purchased to avoid flows being throttled or disabled until the limit refreshes. A per user plan does, however, allow multiple flows to be licensed under a single user's license.
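A rough, back-of-the-envelope calculation like the one below helped frame the plan discussion for us. Note that the request limits and figures in this sketch are placeholders for illustration only; always use the numbers from Microsoft's current licensing and request-limit documentation.

```python
# Back-of-the-envelope plan comparison. All limits below are PLACEHOLDER values
# for illustration; use the figures from Microsoft's current Power Platform
# licensing and request-limit documentation.
DAILY_REQUEST_LIMIT_PER_USER = 40_000   # assumed per-user, per-24h request limit
DAILY_REQUEST_LIMIT_PER_FLOW = 250_000  # assumed per-flow, per-24h request limit

runs_per_day = 500      # expected flow runs per day (illustrative)
actions_per_run = 120   # average number of actions (API requests) per run (illustrative)

daily_requests = runs_per_day * actions_per_run
print(f"Estimated requests per 24h: {daily_requests:,}")

if daily_requests > DAILY_REQUEST_LIMIT_PER_USER:
    print("A single per-user plan would be throttled; "
          "consider a per-flow plan or additional API call add-ons.")
else:
    print("Fits within the assumed per-user limit.")
```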

Impact of Scope Changes on Pricing

It is important to understand the degree of scalability required during the design phase of the project. Scaling the solution and adding new scope later on may affect pricing due to the way that Microsoft's pricing works, as mentioned above. It is often difficult, bordering on impossible, to determine the full extent of scope scalability and change from the onset, which makes this a bit of a catch-22. It also makes the mentality of ‘just get it working now, we will make it better later’ very dangerous. Technical debt should be avoided as far as possible. The smallest change to an action can have a ricochet effect throughout the rest of the flow, with fields losing values without any clear reason as to why they ‘disappeared’.

Impact of Microsoft Upgrades on the Cloud Flow

Microsoft often release updates to actions within cloud flows, and if a release introduces bugs, those bugs have a direct impact on flows running in production. Debugging these issues can get tedious and often requires tickets to be logged with Microsoft. Since there is no ‘roll back updates’ option on actions from a developer perspective, the only way to mitigate the bugs is to implement a workaround where possible.

Importance of Exception Logging

When running a flow in unattended mode, it is difficult to keep track of which step in the process is being executed. The original approach to the solution development was to develop the outline of the flow first and add exception handling later, which was a horrible idea. The more steps that exist in a flow, the longer it takes to load. This means that when the flow fails and you need to debug it, you need to wait for the flow to load and then go search for exactly what failed. It's a lot less efficient than having a file that you can monitor to tell you exactly where the flow broke and why it failed. My approach has completely changed. Exception handling is now part of the solution backbone.
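The pattern itself is simple. In the actual solution it lives in the Desktop flow's error-handling actions, but the Python sketch below (with an assumed log path and format) shows the shape of it: every step writes a line before and after it runs, so a failure can be located from the log file without replaying the flow.

```python
import traceback
from datetime import datetime, timezone

LOG_FILE = r"C:\AutomationLogs\invoice_flow.log"  # assumed log location

def log_step(step: str, status: str, detail: str = "") -> None:
    """Append one line per step so a failure can be located without replaying the flow."""
    timestamp = datetime.now(timezone.utc).isoformat()
    with open(LOG_FILE, "a", encoding="utf-8") as handle:
        handle.write(f"{timestamp}\t{step}\t{status}\t{detail}\n")

def run_step(step: str, action) -> None:
    """Wrap each process step so both success and failure are recorded."""
    log_step(step, "STARTED")
    try:
        action()
        log_step(step, "SUCCEEDED")
    except Exception as error:
        log_step(step, "FAILED", f"{error} | {traceback.format_exc(limit=1)}")
        raise
```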

Importance of a Holistic Architecture

As mentioned in the sections above, the most difficult part of implementing the solution I was tasked with was the design-thinking component: trying to predict what changes would occur in the future so that they could be worked into the larger architecture. I was fortunate enough to be able to lean on my experience with other technologies, where we had implemented a few basic mechanisms to ‘future-proof’ the automation. By putting some of those basic components in place (like an Excel spreadsheet for manual input and overriding of data), the enhancements that popped up later on were much easier to implement and did not require a major amount of solution rework. If you can't get the holistic architecture into the frame on the first attempt, at least make sure that the basic components are reusable and dynamic enough to be expandable.
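To illustrate the override idea, here is a small pandas sketch of how manually captured values could take precedence over automatically extracted ones. The file names and the InvoiceNumber/column structure are assumptions for the example, not the actual workbook we used.

```python
import pandas as pd

# Hypothetical files and column names, purely to illustrate the override pattern.
system_data = pd.read_excel("extracted_invoice_data.xlsx")   # data produced by the flow
overrides = pd.read_excel("manual_overrides.xlsx")           # values captured by a human

# Overrides are keyed on the invoice number; any shared column in the override
# sheet replaces the automatically extracted value for that invoice.
merged = system_data.merge(overrides, on="InvoiceNumber",
                           how="left", suffixes=("", "_override"))
for column in overrides.columns:
    if column == "InvoiceNumber":
        continue
    override_column = f"{column}_override"
    if override_column not in merged.columns:
        continue  # column only exists in the override sheet; nothing to replace
    merged[column] = merged[override_column].combine_first(merged[column])
    merged = merged.drop(columns=override_column)

merged.to_excel("final_invoice_data.xlsx", index=False)
```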

Importance of Flow Optimisation

A huge driving factor for implementing automation is time saving, and since time is money, time saving impacts cost saving. Runtime is an important factor when calculating time saving, so any unpredictable hikes in runtime need to be investigated and mitigated as far as possible. As soon as an automation runs for longer than it would take to execute the process manually, questions are raised about the time saving. Even though human time is still saved, the impact on cost saving needs to be considered: runtimes influence the amount of infrastructure needed to run a solution, and by optimising the flow to decrease runtime, the pressure placed on the performance of a flow is eased. On the project I worked on, optimisation involved moving the bulk of the processing to an Azure Function. This had a huge impact on the process runtime: we managed to slice the runtime in half!
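For context, the sketch below shows the shape of an HTTP-triggered Azure Function (Python programming model) that a cloud flow could call through the HTTP action to offload heavy, per-record processing. The payload shape and the processing itself are illustrative assumptions; the point is that one function call replaces a long loop of flow actions.

```python
# __init__.py of an HTTP-triggered Azure Function (Python v1 programming model).
# The payload shape and processing are illustrative; the point is that the heavy
# per-record work happens here instead of in a loop of cloud-flow actions.
import json
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    records = req.get_json().get("records", [])

    # Do the bulk processing in one call rather than one flow action per record.
    processed = [
        {"id": record.get("id"), "total": sum(record.get("lineAmounts", []))}
        for record in records
    ]

    return func.HttpResponse(
        json.dumps({"processed": processed}),
        mimetype="application/json",
        status_code=200,
    )
```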

Ways to Optimise Power Automate Flows

There are many ways in which you can optimise Power Automate cloud and desktop flows, and there were a few things that we tried. A lot of it involved reworking and refining the logic, as well as moving the bulk of the processing into a more time- and cost-efficient mechanism. I've listed a few of the things I tried to optimise the Power Automate flows in this blog post.

Working with AI Builder

There are multiple options available when you are looking to train models on your documents. If you are using invoices, you could consider the prebuilt Invoice Processor model, or you could use Form Extractor to train a model on your own documents. We opted to use Form Extractor, which worked well, but our use case got rather complex quite quickly. We had invoices originating from multiple organisations, with each organisation prone to using different formats. Microsoft, rightly, advise that a model be created for each different format to improve accuracy.

It is also advised that, when training the models, you don't assign values that appear within a table to a singular field. Doing so can mess with the output; rather extract the table, as a table, and programmatically retrieve the singular field values. An example of this is when the total appears as part of the table, below the last invoice line.
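A hypothetical example of retrieving that total programmatically is shown below. It assumes the extracted table has already been reshaped into a list of row dictionaries, which is not the exact AI Builder output schema, so treat it as a sketch of the approach rather than working integration code.

```python
# Illustrative only: assumes the extracted line-item table has already been
# converted into a list of row dictionaries. The real AI Builder output schema
# differs, so map it into this shape first.
rows = [
    {"Description": "Widget A", "Quantity": "2", "Amount": "150.00"},
    {"Description": "Widget B", "Quantity": "1", "Amount": "75.00"},
    {"Description": "Total", "Quantity": "", "Amount": "225.00"},
]

def extract_total(table_rows: list[dict]) -> float:
    """Pull the 'Total' row out of the extracted table instead of training it as its own field."""
    for row in table_rows:
        if row.get("Description", "").strip().lower() == "total":
            return float(row["Amount"].replace(",", ""))
    # Fall back to summing the line amounts if no explicit total row exists.
    return sum(float(r["Amount"].replace(",", "")) for r in table_rows)

print(extract_total(rows))  # 225.0
```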

I hope that you can learn from some of the lessons I've learnt and improve your own journey. Please feel free to reach out or leave a comment below if there are any specific questions or suggestions you might have.

Published by Jacqui Muller

I am an application architect and part-time lecturer by profession who enjoys dabbling in software development, RPA, IoT, advanced analytics, data engineering and business intelligence. I am aspiring to complete a PhD degree in Computer Science within the next three years. My competencies include a high level of computer literacy as well as programming in various languages. I am passionate about my field of study and occupation as I believe it has the ability and potential to impact lives, both drastically and positively. I come packaged with an ambition to succeed and make the world a better place.
