[Automation] Basic Exception Handling in Power Automate Cloud Flows

Level of Difficulty: Beginner – Senior.

Power Automate is a powerful tool that works well for integration and for automating workflows. Citizen developers have started adopting Power Automate to automate some of their day-to-day activities. One of the most important parts of a traditional solution development strategy is error and exception handling.

Automation typically faces two kinds of errors/exceptions: application errors and business exceptions. A business exception refers to any scenario where the process does not meet the defined business rules that the automation was built around. An application error refers to any ‘unknown’ error (unrelated to business rules) that occurs during the execution of the solution.

Even as citizen development takes off, there are still many advantages to be realised by implementing exception handling. But how do we do it in Power Automate? Here’s how…

Create a Flow

Navigate to Power Automate Cloud and create a new, blank flow. As part of the exception handling strategy, a variable named varErrorOccurred will be used to store a boolean value indicating whether the flow ran successfully or an error occurred.

Scopes will be used to group the process steps separately from the steps to be executed when an error occurs. This mirrors the try…catch approach (which often includes a finally block as well). The try block contains all of the process steps that should be executed, the catch block contains the steps to be executed if an error occurs during any of the process steps, and the finally block contains any steps to be executed regardless of whether an exception occurred.
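If you’re more familiar with code than with flow designers, the pattern is the same as a classic try/catch/finally. Here’s a rough Python analogy (purely illustrative; the function names are made up and nothing here is executed by Power Automate):

# Analogy only: the Try, Catch and Finally scopes in the flow behave much like this.

def run_process_steps():
    """Stand-in for the actions inside the Try scope."""
    print("running the main process steps")

def handle_error():
    """Stand-in for the actions inside the Catch scope (e.g. alerting support)."""
    print("an error occurred - notifying support")

def clean_up():
    """Stand-in for the actions inside the Finally scope."""
    print("cleaning up, whether or not an error occurred")

error_occurred = False  # equivalent of the varErrorOccurred variable

try:
    run_process_steps()
except Exception:
    error_occurred = True
    handle_error()
finally:
    clean_up()
    print(f"flow failed: {error_occurred}")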

Setup Scopes for Exception Handling

Let’s start by adding three scopes to the flow and naming them Try, Catch and Finally:

Exception Handling in Power Automate Cloud

Next, the Catch scope should be set up to run when an error occurs in the Try scope. To do this, click on the three dots in the top-right corner of the Catch scope and select Configure Run After:

Then set up the Finally scope to run after every state (is successful, has failed, is skipped, has timed out) by following the same steps, this time for the Finally scope:

Now you can add your actions into the relevant scopes and test your workflow.

Note: You cannot yet initialise variables within a scope; variables need to be initialised at the top level of the flow, so ensure that you add any Initialize Variable actions before the Try scope.

[UiPath] Preparing for the UiPath Advanced RPA Developer (UiARD) Certification Exam

Level of Difficulty: Junior – Senior.

Since writing the UiPath Advanced RPA Developer (UiARD) Certification Exam, I have received many questions about and requests for assistance with preparing for the exam. So I’ve decided to detail an approach that worked for me.

Some background context – the first time I wrote the certification, over a year and a half ago, I failed the exam. I spent weeks studying for it and it still wasn’t enough. I made many notes about the theoretical concepts, which I used as a “cheat sheet” this time around. I redefined my study process and managed to get through everything in about two days. Here is the approach I followed:

Approach to preparing for the UiPath Advanced RPA Developer (UiARD) Certification Exam

UiPath did a great job in providing detail around what to expect from the exam. The resources they have provided through the exam outline PDF and the Academy Training were a huge help. And yet, it’s quite normal to want to start with the practice test and focus on what didn’t go well in it. I did that first too, and then I practised the test over and over and over again. I spent a lot of time stressing about whether I really knew the answers or whether I just remembered them from the previous attempt.

The approach outlined in the figure above helped me set some of that anxiety aside. Let’s break it down in more detail.

Theory Cheatsheet

My first step was to get the hardcore theory out of the way. I developed a cheat sheet for that. You can find that cheat sheet in this GitHub repo:

UiARD Exam Prep Theory Cheat sheet

The above cheat sheet contains the following:

  • State Machines: the states that exist within a state machine and the phases they consist of.
  • Selectors: full selectors compared to partial and dynamic selectors.
  • Input Methods: comparing default hardware operations, simulate (type and/or click) and send windows messages.
  • REFramework: a breakdown of using Orchestrator queues and single transactions, as well as the exception types used within the REFramework.
  • Queue Item Priority: the priority of items can be set based on deadline and actual priority; the cheat sheet explains how items are processed based on the different combinations of priority.
  • Classes and Objects: how code is invoked in UiPath relates to how classes and objects are used; the target object and target type differ based on use and declaration. This is explained in slightly more detail in the cheat sheet.
  • Version Control: the different methods of version control supported by UiPath (Git, TFS and SVN) each have different steps associated with them, which are detailed in the cheat sheet.
  • Logging Levels: the different levels of logging are placed in order of severity with a brief description of each.

This cheat sheet does not contain all of the required information for the exam, but it does contain the important chunks that had me scratching my head and stressing a little (I know I’m not alone in feeling that right before the exam). Going through these concepts before answering the practice exam questions will help you identify which areas you need to focus on more after the practice exam.

Practice Exam

Attempt the practice exam. Take your time, as you would in the real exam. Think about the question and the answers. Use the process of elimination. I found that eliminating incorrect answers based on my practical experience and theoretical understanding was the key to finding the right answer.

With a quick Google search, you’ll find a lot of sites selling exam dumps. I went through some of the sites and practised their questions. At first, I thought it was a great idea, until I encountered two different sites asking the same questions but with different answers. Which one do you believe? It is for that reason that I would advise against using any external dumps. I did find one or two of the actual exam questions in the dumps, but with incorrect answers. Trust your experience and your skillset. You can get the certification on your own merit!

UiPath Academy

UiPath created a learning path on the Academy Training specifically geared towards equipping you for the UiARD certification. All of the information you need to pass the exam is in that learning path. Watch the videos, practise the exercises and answer the “check your knowledge” questions – it’s a great way to test your understanding. The UiPath Academy says that the learning course is over 86 hours long. I did not do the practical exercises; I only watched the videos and answered the “check your knowledge” questions, and I got through the content in roughly 10-12 hours (granted, I have a few years of practical experience to draw understanding from).

Redo the Practice Exam

Once you’re better prepared through the Academy training, retake the practice exam with a fresh mind and a fresh understanding.

Nisarg Kadam has posted quite a few great videos on YouTube explaining how to prepare for the exam. This video of his actually takes you through the practice exam. I would advise watching it after redoing the practice exam, to fill any gaps in your understanding that still exist.

Once you’ve gone through all of the above, get some good rest and take on the final exam.

Final Exam

The Pearson VUE proctors are professional and very skilled. It was a great experience taking the test from home. I had to clear my desk space and make sure my internet connection was stable. Whether you write online or at a test centre, remember to take your time, work through the questions one by one, use the process of elimination and try not to overthink. Sometimes your first answer is the right one. Don’t go back and change your answers too many times – you may regret it.

Hopefully this approach will help you as much as it helped me. Feel free to reach out or drop a comment below.

[RPA] Publishing Test Cases From UiPath Studio to UiPath Test Manager

Level of Difficulty: Intermediate – Senior.

This post assumes that the background information and pre-work have already been done. The pre-work mainly refers to understanding the business problem, identifying the test cases and unit tests, and developing them.

A two-part series detailing the use of Test Driven Development (TDD) as an approach to development, specifically in the RPA context, was released on the UiPath Community blog.

Before Getting Started

Make sure that you have a testing robot available in your environment. You can assign a robot the Testing type, in which case it will only run test cases. If no testing robots are assigned within your environment, your test case executions will remain stuck in a ‘pending’ status.

Creating a Project in Test Manager on the UiPath Orchestrator

Navigate to the Test Manager in the orchestrator and click Create New Project.

Add the Name, Prefix and Description before hitting create.

Once the project has been created, you can select it either by clicking on the Select Project dropdown and selecting the project or by clicking on the project under All Projects.

Now that we’ve set up the Test Manager, we need to connect it to UiPath Studio.

Connecting UiPath Test Manager to UiPath Studio

You’ll need to start by getting the test manager base URL. You can retrieve the URL by navigating to Test Manager on the orchestrator and copying the URL selected below (make sure to exclude /projects):

Navigate back to UiPath Studio. Select Test Manager in the task bar and click on Test Manager Settings:

Now paste the URL and select the project you’ve created on the Test Manager.

Next, let’s prepare the test cases for publishing.

Setting Test Cases as Publishable

In order to get the test cases from UiPath Studio to the Test Manager on the orchestrator, the test cases would need to be Set as Publishable.

Now we can link the test cases to the Test Manager.

Each test case needs to be linked to the Test Manager which can be done by right clicking on the test case and selecting the option to Link to Test Manager:

Confirm the prepopulated information before selecting OK.

Don’t forget to Publish your test cases:

When you navigate to the Test Manager on the Automation Cloud, you should see the two new linked test cases.

You can open the test cases by clicking on them from within the Test Manager:

Now let’s try running a test case.

Running a Test Case

You can run a test case by navigating to the Orchestrator. Then select Testing and click on the Test Cases tab. Now click on the Run icon next to the test case you want to run:

You can group test cases together using a test set.

Creating a Test Set

To create a test set, navigate to the Orchestrator then select Testing and click on the Test Sets tab. Click New Test Set:

Now enter the Name and Environment, select the Project and hit Next.

Select the Test Cases that you want to group and click Create.

Now that a test set has been created, let’s see how you’d run a test set.

Running a Test Set

Similar to running test cases, you can click on the Run icon next to the set you would like to execute.

Now let’s monitor the test case progress.

Monitoring Test Case/Set Statuses

Under Test Execution, which can be found under the Testing tab in the Orchestrator, you can see the results of all test case and test set executions.

These results can also be visualised by navigating to the Dashboard under the Test Manager.

Did this help you? Feel free to reach out or drop a comment below.

[Automation] Starting a K2 Workflow from UiPath

Level of Difficulty: Beginner – Senior.

UiPath provides an activities package (UiPath.K2.Activities) containing an activity that starts a K2 workflow. It seems quite simple at first; the only input properties it expects are the service URL, username, password and workflow ID. It was ideal for what I was trying to achieve, until it wouldn’t load my workflows from the Service URL. My credentials were definitely correct as they worked with the API, so it had to be the Service URL. I tried multiple permutations with no success before deciding to pivot to the API instead. So I’m sharing my workaround with you in case you are facing the same challenges I did.

What do we need to do to start a K2 workflow from UiPath?

All of the information that you need to get the HTTP Request working in UiPath can be obtained from Swagger, if it is available to you. To compile your Swagger URL, you’ll need to get your base K2 address and append /api/workflow/v1/swagger/index to it:

https://k2.<organisation>.com/api/workflow/v1/swagger/index

If there are multiple versions of the API you’re working with, substitute v1 with the latest version. You will need to obtain the following in order to make the HTTP Request:

  • The request URL (including the workflow ID)
  • The request body, which might differ slightly from mine
  • Authorised username and password

Getting Started

First things first – let’s create a template text file that contains the body that we will be using for the HTTP Request:

Now, create a new UiPath project/sequence and make sure you install the UiPath.WebAPI.Activities package so that you are able to make HTTP requests:

Let’s put the sequence together using the following steps:

  1. Read the Template text file
  2. Replace the placeholder text with the actual values
  3. Compose the HTTP Request
  4. Print the status code

The sequence should look something like this:

Now let’s take a closer look at each step.

Read the Template text file

The template text file is stored in the same directory as the UiPath solution and is named ‘New Workflow Template.txt’:

Request Body Template Text File

Replace placeholder text with values

In this scenario, the text file contains the template text to be used as the request body to start the K2 workflow. The placeholders in this case are ‘<title>’ and ‘<value>’. These can be swapped out for the actual values, using the string Replace function, before the body is sent to start the workflow:

Replace placeholder text with values

Compose the HTTP Request

The most important part of getting an HTTP Request to work is making sure that the properties are correct. We need to set the following properties:

  • AcceptFormat = JSON
  • EndPoint = URL (the workflow API endpoint, with the workflow ID in the URL)
  • Method = POST
  • Body = NewWorkflowTemplate (the variable that stores the request body text)
  • BodyFormat = “application/json”
  • Password = Password (variable storing the password – credentials used to authenticate on Swagger)
  • Username = Username (variable storing the username – credentials used to authenticate on Swagger)
  • Headers = {Accept, In, String, “application/json”; Content-Type, In, String, “application/json”}:

HTTP Request Headers Assignment in UiPath

The output parameters may be set as well; in order to display the status code, it needs to be assigned to a variable. The properties of the HTTP request should be as follows:

HTTP request properties in UiPath
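If you’d like to sanity-check the request outside UiPath before wiring up the activity, here’s a minimal Python sketch of the same four steps. The endpoint path, placeholder values and credentials are assumptions based on my environment’s Swagger page, so confirm the exact URL and body schema against yours:

import requests

# All of these values are illustrative - take the real ones from your Swagger page.
base_url = "https://k2.<organisation>.com/api/workflow/v1"
workflow_id = "123"
username = "serviceAccount@<organisation>.com"
password = "********"

# 1. Read the template text file
with open("New Workflow Template.txt", "r", encoding="utf-8") as template_file:
    body = template_file.read()

# 2. Replace the placeholder text with the actual values
body = body.replace("<title>", "Started from UiPath workaround")
body = body.replace("<value>", "Some data field value")

# 3. Compose the HTTP request (basic auth, JSON in and out)
response = requests.post(
    f"{base_url}/workflows/{workflow_id}",
    data=body.encode("utf-8"),
    auth=(username, password),
    headers={"Accept": "application/json", "Content-Type": "application/json"},
)

# 4. Print the status code
print(response.status_code)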

If you’d like to speed things up a little and change the solution presented in this post to work for your own use case, feel free to fork this GitHub repository and make the changes applicable to your use case. If you get stuck, please feel free to reach out or leave a comment below.

[Automation] Things I wish I knew Before my First Power Automate Hyperautomation Implementation

Level of Difficulty: Beginner – Senior.

Any toolset comes with its own unique tricks and challenges. There is a ‘first time’ for everything, which comes with a list of lessons learnt. By sharing these lists, we better prepare the community of developers who may be facing similar challenges. Here’s a summarised list of lessons learnt and things I wish I knew before my ‘first’ enterprise-level automation solution implementation with Power Automate flows.

Using Service Accounts

This is by far the most important lesson learnt. The implementation in question was initially investigated by a different team who used their individual/personal accounts for the development of flows. When members of the development team left the organisation, the flows associated with their accounts were lost when the developer profiles were deleted, which resulted in the flows having to be rebuilt.

A best practice for automation development is the use of service accounts to avoid the situation described above. During the rebuild of the flows, the implementation of service accounts played a crucial role in the foundational layer of the architecture (including the design of reusable components) as well as licensing. Believe it or not, developing on personal accounts can have an impact on how flows are licensed.

Default Resolution

The default resolution of an unattended Power Automate Desktop flow is 1024 x 768. When developing a solution using a different resolution, especially when using systems like SAP, the number of field entries available on certain screens differs between resolutions, which has a huge impact on the success of a flow. Debugging resolution issues is extremely difficult and can be avoided by correctly setting up remote desktop tools with the appropriate default resolution. There is an option to set the default resolution, although the resolution would still need to be configured appropriately for each developer/computer accessing the virtual machine for development purposes.

Impact of API Limits

Since many actions make use of APIs, the actions are subject to whatever API limits are imposed on the service that the API is communicating with. For example, the OneDrive API has multiple limits, which include file size, call frequency and more. The OneDrive actions in cloud flows that use those APIs are subject to the same limits. This is relatively well documented, but it does require consideration when designing the solution.

Impact of Solution Design on Pricing

Microsoft have multiple licensing plans available for Power Automate solutions. The licensing options expand quite quickly with different add-ons available, some of which include (but are not limited to): AI Builder additional capacity, additional user licensing, additional API calls and many more. There is a lot to take into consideration when designing a solution because each process step may have an effect on the pricing model that should be selected for the solution.

In order to appropriately license a solution, the limits and configuration need to be properly understood and taken into consideration. These limits and config items are listed here. Microsoft predominantly offer two main licence plans: the per flow plan and the per user plan, each with their own pros and cons. Per flow plans are more expensive than per user plans and are used to license flows individually, regardless of the number of API calls used. Per user plans are cheaper than per flow plans; however, they are subject to API call limits. On large solutions that need to be scaled, additional API calls may need to be purchased to avoid flows being throttled or disabled until the limit refreshes. A per user plan does allow multiple flows to be licensed under that user’s plan.

Impact of Scope Changes on Pricing

It is important to understand the degree of scalability during the design phase of the project. Scaling the solution and adding new scope later on may affect pricing due to the way that Microsoft’s pricing works, as mentioned above. It is often difficult, bordering on impossible, to determine the entirety of scope scalability and changes from the onset, which makes this a bit of a catch-22. This makes the mentality of ‘just get it working now, we’ll make it better later’ very dangerous. Technical debt should be avoided as far as possible. The smallest change to an action can have a ricochet effect throughout the rest of the flow, with fields losing values without any clear reason as to why they ‘disappeared’.

Impact of Microsoft Upgrades on the Cloud Flow

Microsoft often release updates to actions within the cloud flows, and if a release creates bugs, those bugs have a direct impact on flows running in production. Debugging these issues can get tedious and often requires tickets to be logged with Microsoft. Since there is no ‘roll back updates’ option on actions from a developer perspective, the only way to mitigate the bugs is to implement a workaround where possible.

Importance of Exception Logging

When running a flow in unattended mode, it is difficult to keep track of which step in the process is being executed. The original approach to the solution development was to develop the outline of the flow first and add exception handling later, which was a horrible idea. The more steps that exist in a flow, the longer it takes to load. This means that when the flow fails and you need to debug it, you have to wait for the flow to load and then go searching for exactly what failed. It’s a lot less efficient than having a file that you can monitor to tell you exactly where the flow broke and why it failed. My approach has completely changed: exception handling is now part of the solution backbone.

Importance of a Holistic Architecture

As mentioned in the sections above, the most difficult part about implementing the solution I was tasked with was the design thinking component: trying to predict what changes would occur in the future so that they could be worked into the larger architecture. I was fortunate enough to be able to lean on my experience with other technologies, where we had implemented a few basic mechanisms to ‘future-proof’ the automation. By putting some of those basic components in place (like an Excel spreadsheet for manual input and overriding of data), the enhancements that popped up later on were much easier to implement and did not require a major amount of solution rework. If you can’t get the holistic architecture into the frame on the first attempt, at least make sure that the basic components are reusable and dynamic enough to be expandable.

Importance of Flow Optimisation

A huge driving factor for implementing automation is time saving, and since time is money, time saving impacts cost saving. Runtime is an important factor when calculating time saving. Any unpredictable hikes in runtime need to be reduced and mitigated as far as possible. As soon as an automation runs for longer than the time it would take for the process to be executed manually, questions are raised around time saving. Although human time is still saved, there’s also the impact on cost saving that needs to be considered. Runtimes can impact the amount of infrastructure needed to run a solution, and by optimising the flow to decrease runtime, the pressure placed on the performance of a flow is eased. The optimisation on the project I worked on involved moving the bulk of the processing to an Azure Function. This had a huge impact on the process runtime – we managed to slice the runtime in half!

Ways to Optimise Power Automate Flows

There are so many ways in which you can optimise Power Automate Cloud and Desktop flows. There were a few things that we tried. A lot of it involved reworking and refining the logic as well as moving the bulk of the processing into a more time and cost-efficient mechanism. I’ve listed a few things I tried to optimise the Power Automate flows in this blog post.

Working with AI Builder

There are multiple options available when you are looking to train models on your documents. If you are using invoices, you could consider using the prebuilt Invoice Processor model, or you could use Form Extractor to train on your own documents. We opted to use Form Extractor, which worked well, but our use case got rather complex quite quickly. We had multiple invoices that originated from different organisations, with each organisation prone to different formats. Microsoft, rightly, advise that a model be created for each different format to improve accuracy.

It is also advised that, when training the models, you don’t assign values that appear in a table to a singular field. It could mess with the output; rather extract the table as a table and programmatically retrieve the singular field values. An example of this is when the total appears as part of the table, below the last invoice line.

I hope that you can learn from some of the lessons I’ve learnt to improve your own journey. Please feel free to reach out or leave a comment below if there are any specific questions or suggestions you might have.

[Automation] Working with SQL transactions in UiPath

Level of Difficulty: Beginner – Senior.

When working with databases, the use of transactions is a very healthy habit to get into. It will save you from those update or delete statements without a where clause that cannot be undone (especially in prod… imagine…). UiPath provides the Start Transaction activity, which claims to do the same as a traditional transaction. In this post, we will compare the functionality of a transaction in SQL with the use of transactions in UiPath.

Basic Principle of Transactions in SQL

A transaction is traditionally a SQL concept that allows the developer to test the outcome of a query without applying any changes to the dataset or, at the very least, to undo any changes made to the dataset. A transaction is started using the BEGIN TRAN syntax and ended with either ROLLBACK TRAN (undoing any changes made to the data) or COMMIT TRAN (applying the changes made to the data).

This is often used in SQL to handle errors and exceptions. The idea behind this is that if a query or stored procedure contains any errors during execution, all changes will be rolled back. In a case where there were no errors encountered, the changes to the dataset(s) could be applied ‘safely’.

Consider a query that selects the current data, runs an update, selects the data again and then rolls the transaction back, followed by a final select. The first set of results shows the previous state of the data (before any changes were made), and the second set shows the results after the update statement has been applied. The rollback is executed after the second select statement, which means that the third set shows the current state of the data, with the results of the update statement rolled back. If you were to replace the ROLLBACK with a COMMIT, the second and third sets of data would look exactly the same.

So what happens when there’s a statement with a very clear error in it? Like, for example, updating a field with the value of 0 divided by 0 (which is undefined). Well, the query will error out and the update won’t be applied:

What happens when we add a query that is supposed to succeed, like inserting records:

Okay… So what if we have multiple statements and one fails? How do we undo all changes? Well… That’s where the exception handling comes in:

What would the results look like if we remove the query with the error? Well… The query would succeed and the changes that result from the queries executed within the transaction would be committed:

So how do we use this in UiPath?

UiPath Implementation of Transactions (Start Transaction)

As per the UiPath docs, the Start Transaction activity within the UiPath.Database.Activities package “connects to a database and features a Sequence which can perform multiple transactions with the database. If UseTransaction is set true, the contained operations are executed in a single transaction, and applied at the end if none of them failed. If UseTransaction is set false, all operations are committed individually. When this activity ends, the connection to the database is closed. This activity returns a DatabaseConnection variable.”

Let’s test this.

Start by creating a new UiPath process which, for the sake of this example, is named TransactionManagement. Ensure that the UiPath.Database.Activities package is installed:

Drag the Start Transaction activity over into the sequence:

Hit Configure Connection, then select Connection Wizard. The Connection Wizard will ask you what data source (or provider) you’d like to use – select Microsoft SQL Server (SqlClient). Add your server name, authentication details and database name.

Ensure that the provider has been selected and is System.Data.SqlClient:

Set the output of Database Connection to a variable; in this instance I named it databaseConnection:

The databaseConnection variable will be passed through to each of the queries that exist as part of your transaction:

When executed, you will receive the same divide-by-zero error as before:

If you don’t want the error to be thrown, set the ContinueOnError parameter to True:

If you execute the sequence, there won’t be an error thrown. So does this mean that, if an error occurs but isn’t thrown, the preceding actions will be rolled back? Let’s test this by adding the insert query:

If you execute this, no error will be raised but the insert statement will persist – the action will not be rolled back. So does this approach work? Not quite… So how can I get this working in UiPath?

Suggested Approach

A good design practice is to separate responsibilities. If SQL is responsible for manipulating data (which it should be), then the transaction should be executed within SQL and not necessarily in UiPath.

This means you would need to create the Stored Procedure in SQL Server:

CREATE PROC ExecuteUpdate AS

	BEGIN

		SET NOCOUNT ON;
	
		BEGIN TRAN

			BEGIN TRY
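				-- Everything inside this TRY block runs as part of the single transaction started above;
				-- if any statement raises an error, control jumps to the CATCH block and nothing is committed.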

				delete from AutoForm.ProjectCreation where ProjectName = 'Test'

				select top(5) AuthMethod from AutoForm.ProjectCreation 

				INSERT INTO [AutoForm].[ProjectCreation]
							([ProjectID]
							,[ProjectName]
							,[ProjectPath]
							,[Server]
							,[Database]
							,[Schema]
							,[Tables]
							,[AuthMethod]
							,[Username]
							,[Password]
							,[Created]
							,[Submitter]
							,[DateSubmitted])
						VALUES
							(newid()
							,'Test'
							,'Test'
							,'Test'
							,'Test'
							,'Test'
							,'Test'
							,'Test'
							,'Test'
							,'Test'
							,0
							,'Test'
							,getdate())

				select top(5) AuthMethod from AutoForm.ProjectCreation

				update AutoForm.ProjectCreation set AuthMethod = 0/0

				select top(5) AuthMethod from AutoForm.ProjectCreation

				COMMIT TRAN;

			END TRY 
			BEGIN CATCH
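				-- Any error raised in the TRY block (the deliberate 0/0 update above) lands here,
				-- so the ROLLBACK also undoes the earlier delete and insert.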
		
				ROLLBACK TRAN;

			END CATCH
		
			select top(5) AuthMethod from AutoForm.ProjectCreation

	END;



Call the stored procedure in UiPath:

Change the Command Type to Stored Procedure and enter ONLY the Stored Procedure name into the Sql parameter.

Did this work for you? Drop a comment below if you’ve faced any issues.

[Business Intelligence] Connecting a Google Sheet to Power BI as a Data Source

Level of Difficulty: Beginner – Senior.

The market for data collection, visualisation and analysis tools spans many different providers on many different platforms, and in some cases the best way to become exposed to the variety of tools is by mixing and matching between them. So why not visualise a Google Sheet in Power BI? I learnt quite a bit about connecting Power BI to different sources, and I experienced some of the power that Google Sheets offers, through this solution. Want to try it out for yourself? Here’s how.

What are the steps?

The steps to connecting a Google Sheet to Power BI as a data source are:

  1. Create the Google Sheet Dataset in Google Drive
  2. Change the Permissions
  3. Publish the Google Sheet to the Web
  4. Connect the Dataset to Power BI
  5. Select the Applicable Dataset

Let’s deep dive into the above steps.

Create Google Sheet Dataset in Google Drive

Go to Google Drive and create a new Google Sheet or upload an existing Google Sheet:

Download the Exoplanet Dataset available on data.world if you would like to follow along with the post. Please feel free to use your own dataset if you have one available.

The dataset is a CSV that needs to be turned into a Google Sheet. Ensure that you are using a Google Sheet and not an Excel or CSV document. For the sake of this post, I’ll only be using the first 2 000 records of the dataset. I would strongly suggest using fewer records, with fewer columns too. Using all of the data will be extremely slow when connecting to Power BI.

Change the Permissions

In order to access the report from Power BI, the dataset needs to be accessible by anyone with the link, especially when publishing the dataset to the web. Change the permissions by clicking on Share and making it publicly available to anyone with the link:

Publish to the Web

Navigate to File, then select Publish to the web:

Make sure that you link the publish to the correct tab (if you do not want to publish the whole document). Hit ‘Publish’ and ‘OK’ when prompted:

Copy the link:

Connect the Dataset to Power BI

Open Power BI, click on Get Data and select Web as the source:

Paste the copied publish URL (obtained after publishing) into the URL section, with basic authentication selected:

Select the Applicable Dataset

Once Power BI has connected to the dataset, you’ll be prompted to select the applicable tables (or tabs/datasets) that you can transform and load. Select the applicable tables then select Transform Data:

To get the correct row promoted as headers, you may need to remove the first row and then use ‘Use First Row as Headers’. Please note that this may vary across use cases and datasets.

When you’re happy with the structure of your data, hit Close & Apply:

Did this work for you? Feel free to drop a comment below with any issues you may have faced, or even any suggestions you might have.

[Automation] 10 Tips for Optimising Power Automate Flows

Level of Difficulty: Beginner – Senior.

Power Automate is a powerful tool that allows for integrations and RPA solutions to be developed through the use of Power Automate Desktop and Cloud flows. A disadvantage of Power Automate is that it can take strain when working with large volumes of data. Lengthy flows take a long time to save, load and execute, and large amounts of data increase the runtime of flows exponentially if they aren’t developed optimally. There are a few things that you can try to optimise a flow and reduce its runtime while maintaining the functionality of the solution.

Power Automate Desktop Flows

1. Remove Commented (or Disabled) Code

A large part of the RPA development process includes debugging a solution while developing the functionality. Often developers comment out or disable actions instead of removing them. These could act as test harnesses (actions used purely for testing) or safety nets (often used when hoarders of code fear deleting code that previously worked). It becomes quite easy to pollute a solution with disabled actions.

When a PA desktop flow is run, disabled actions are read, even though they aren’t necessarily executed. Removing a few disabled actions might not make a massive dent in process duration but it does make a difference.

2. Build Business Exception Checks

When building UI Automation flows, ‘Send hotkey’, ‘Click’ and ‘Get details’ actions are commonly used with ‘On Error’ actions set up. Send hotkeys are efficient ways of executing actions to work around frequently changing user interfaces, although the ‘Send hotkey’ actions can often get lost between screen changes. This could result in many issues.

A good practice for UI automations is to execute an action and then check that the screen is in the correct state before submitting the next action. In a case where the screen did not change as expected, or where the executed action did not render the expected result, these checks can return Business or Application Exceptions and immediately end the sub-flow before proceeding to the environment clean-up phase (closing any applications that may have been opened by the flow). These checks should be built into sub-flows, as they can then be reused and make debugging easier.

3. Introduce Sub-flows to Group Reusable Logic

It is a good practice to clean up an environment before and after every process run. The clean up process would refer to the termination of any applications that may be open or be used within the process. The aim of this is to reduce the risk of open applications jeopardising the execution of the process. By cleaning the environment, that risk is mitigated. This is just one example of logic that can be grouped by building a single clean-up sub-flow and invoking or ‘running’ the sub-flow at all points where it is applicable. The deduplication of code declutters a solution and contributes heavily to the advantage of modularisation as a concept in solution development.

4. Ensure That ‘On Error’ Actions are Setup Correctly

As previously mentioned, ‘Clicks’ and ‘Get details’ activities are common with UI automations. These activities come with the ability to configure ‘On Error’ actions. These actions allow for retries, delays, setting variables, executing specific actions and running sub-flows. Ensure that the waiting time between retries is feasible. Decreasing these wait times and rather adding checks could decrease the execution time of a process. By grouping exception handling logic in a sub-flow (see above), the ‘On Error’ option can be configured to execute the exception handling sub-flow instead of waiting for a failure and duplicating code.

5. Reduce the Amount of Waits Being Used

When using a concatenation of hotkeys and clicks, waits (and wait untils) are commonly used to ensure that an application is ready for input. These waits could turn into endless loops and prolong process execution times more than is necessary. By building in checks and limits, these process execution times could be reduced with more indicative measures of where the process encounters errors.

6. Use Clicks Rather Than Send Hotkeys

As far as possible, rather use clicks than send hotkeys when building UI automations. Although send hotkeys execute faster, clicks allow for more stable execution and also allow for the configuration of ‘On Error’ actions.

7. Write Input Data to a Text File

In a case where large amounts of data are being passed through to the desktop flow from the cloud flow, consider writing the input variable contents to a text file. The aim is to enable unit testing of Power Automate solutions between Desktop and Cloud flows: the data from the cloud flow is written to a text file before the process is executed, and if the input variables are blank, the contents are read from the text file so that the process can be executed with a previous run’s data. This might not have a direct impact on execution time, but it does allow for decreased development and testing times when executing the flows as individual units.
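As a rough illustration of that fallback logic (sketched here in Python rather than Power Automate Desktop actions, with a made-up file name), the idea looks something like this:

# Illustrative sketch of the input fallback described above; in the real solution
# this logic is built from Power Automate Desktop actions, not Python.

INPUT_CACHE = "last_input.txt"  # hypothetical text file stored alongside the flow

def resolve_input(input_from_cloud_flow: str) -> str:
    """Use fresh input when it is supplied; otherwise replay the previous run's data."""
    if input_from_cloud_flow and input_from_cloud_flow.strip():
        # Normal run: persist the fresh input so the desktop flow can be tested on its own later.
        with open(INPUT_CACHE, "w", encoding="utf-8") as cache:
            cache.write(input_from_cloud_flow)
        return input_from_cloud_flow
    # Unit-testing run: no input supplied, so fall back to the last run's data.
    with open(INPUT_CACHE, "r", encoding="utf-8") as cache:
        return cache.read()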

It is important to note that cloud flows execute desktop flows through the gateway at a different pace than desktop flows are executed directly on a computer.

Power Automate Cloud Flows

1. Remove Loops That Process Items Individually

Connectors, like the Excel connector, only allow for items to be processed on a ‘line-by-line’ basis. The more items that there are to process, the longer the process will take to execute. By bulk processing these items, the risk of exponentially increasing process duration is mitigated to some degree. This could be done through introducing other components, like Azure functions (which come with their own challenges). A good example of this can be found in this post.

Although there might still be an increase in process duration when dealing with more data, the increase will not be as exponential as with individual connector calls. A lot of the connectors available on Power Automate make use of API calls, some of which have ‘x-calls-per-minute’ or ‘x-calls-per-hour’ limits, which could also increase process duration. Depending on your licensing, this might have an impact too: per user plans have an API limit per day, so by reducing the number of calls you make, you’re potentially reducing the cost of your flow.
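To make the bulk-processing idea concrete, here is a rough sketch of an HTTP-triggered Azure Function (Python v1 programming model, with the HTTP binding assumed in function.json) that accepts all of the rows in a single call, so the cloud flow makes one request instead of one connector action per row. The column names are made up for illustration:

import json

import azure.functions as func


def main(req: func.HttpRequest) -> func.HttpResponse:
    # Expects the cloud flow to POST a JSON array of row objects in one go.
    rows = req.get_json()

    # Whatever per-row work the flow used to do inside its 'apply to each' loop
    # happens here in a single pass (the Quantity/UnitPrice fields are illustrative).
    processed = [
        {**row, "Total": float(row.get("Quantity", 0)) * float(row.get("UnitPrice", 0))}
        for row in rows
    ]

    return func.HttpResponse(
        json.dumps(processed),
        mimetype="application/json",
        status_code=200,
    )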

2. Rework Loop Levels

Power Automate Cloud allows for a maximum nesting depth of 9 levels. Scopes are really useful for grouping actions logically, although they are considered a ‘level’ of nesting, which does have an effect on execution times. In a case where logical validations go deeper than 6 levels, it is advisable to set variables and process logic based on conditional variables rather than adding loops within loops, especially when adding ‘apply to each’ loops within ‘apply to each’ loops. The theory behind Big O notation explains why this has such a significant impact on process duration in some cases. It is advisable that such complex logic be reworked as far as possible.

3. Introduce Sub-flows to Group Reusable Logic

It is important that flows that are built as automation solutions have some form of exception handling and logging. These components can be built as a separate flow to group the logic of handling and logging exceptions which can be reused each time that an exception or error occurs. By logging exceptions, the ability to report on the successes and failures becomes a lot more feasible. This is an example of code that could be grouped and reused.

Do you have any other tips for optimising Power Automate solutions? Drop a comment below.

[Automation] A Comparison Between UiPath and Power Automate

Level of Difficulty: Beginner – Senior.

Automation is becoming a hot topic in conversations about the fourth industrial revolution (4IR) and fifth industrial revolution (5IR). Automation refers to the use of technology to replace human labour, or even to work alongside people in some way, leading to some form of benefit (or saving) from having automation implemented. Automation comes in different forms, shapes and sizes, with a lot of emphasis placed on Business (or Digital) Process Automation (BPA/DPA) and Robotic Process Automation (RPA).

While RPA is focused on mimicking a person’s actions, behaviour and decisions, BPA and DPA are focused on automating business processes with specific focus on efficiency and optimisation. UiPath was identified by Gartner as a leading RPA technology in 2020 and 2021, with Microsoft also identified as a leader in RPA in 2021. Microsoft are positioning Power Automate and Power Automate Desktop as a technology stack that enables citizen RPA development.

Gartner Magic Quadrant for RPA

UiPath was founded in 2005 and released its first desktop automation product in 2013, while Microsoft first released Microsoft Flow (now known as Power Automate) late in 2016. Power Automate Desktop went into public preview in December 2020. UiPath definitely has the upper hand in maturity, in terms of the RPA offering, with Microsoft displaying much potential for growth.

In their current states, there are a few comparisons that can be made between the two platforms when developing enterprise-level automation architectures.

Enablement Technologies

Both UiPath and Microsoft have ensured that their automation offerings form part of a bigger picture. Microsoft has put together the Power Platform (comprising Power BI, Power Apps, Power Automate and Power Virtual Agents), which integrates well into the rest of the Microsoft technology stack (from O365 right through to Azure). Microsoft has also made sure not to exclude any major third party technology stacks as integration points into the Power Platform.

UiPath have led their offering along a similar trajectory, ensuring that they have multiple products that allow developers to adequately develop artefacts throughout the hyperautomation lifecycle.

One of the similarities between the two products is how they are priced. Due to demand, usage, availability and complexity, pricing differs between each product, with some products being grouped together and others being offered with add-ons subject to different sub-levels of pricing as well. Some of the integration and automation enablement technologies are compared below. Each technology category is specified, the specific products are listed to denote whether that technology is catered for within each landscape, and additional information is added in the comments.

Process Understanding
  • UiPath: Task Capture; Process Mining; Task Mining
  • Microsoft: Power Automate Desktop Recorder
  • Comment: UiPath offers three separate products which can be used to help identify and understand the requirements of a process. Microsoft has included a recorder in the Power Automate Desktop product to assist in recording the steps of a process.

Data Entry/Forms
  • UiPath: UiPath Apps; Third Party Integration
  • Microsoft: Power Apps; Microsoft Forms; SharePoint Lists/Forms; Third Party Integration
  • Comment: Microsoft has multiple mechanisms available that allow a user to enter data to be stored and used as part of the workflow/automation. UiPath has a specific product which is used to gather data to be used within, or used to interact with, the automation. Both products also allow for third party integrations, like K2, for example.

Data Storage
  • UiPath: Data Services; Storage Buckets; Third Party Integration (like SQL Server, etc.)
  • Microsoft: Dataverse; D365; SharePoint; OneDrive; Third Party Integration
  • Comment: Microsoft leverages their stack used for data storage. Some of those mechanisms also allow for file or document storage. UiPath also leverages their own products available for data and file storage. Both also utilise third party integration.

Monitoring
  • UiPath: UiPath Orchestrator; UiPath Insights; Automation Hub; Third Party Integration
  • Microsoft: Power Automate Cloud; Admin Center; Third Party Integration
  • Comment: Monitoring solutions can become quite tedious in the Microsoft stack. For more specific visuals or monitoring, custom solutions would need to be created. With UiPath there are multiple mechanisms that allow you to monitor your processes in granular detail.

Test Management
  • UiPath: UiPath Test Suite; UiPath Orchestrator; Third Party Integration
  • Microsoft: Third Party Integration
  • Comment: Microsoft does not yet have a defined Test Management portion of the automation offering. UiPath, however, have created a product (Test Suite) specifically geared towards enabling testing and test driven development. The UiPath platform, in its current state, is ready for test driven development as an approach; Microsoft is not quite there yet.

Document Processing
  • UiPath: UiPath Document Understanding; Various OCR technologies; Third Party Integration
  • Microsoft: AI Builder; Azure Cognitive Services (including Form Recogniser); Third Party Integration
  • Comment: Both products are capable of extracting information from documents. Both offerings come with additional costs.

Each of the above offerings is subject to the considerations discussed in the sections that follow.

Functionality

In terms of functionality, UiPath definitely has the upper hand when comparing what can and cannot be done in UiPath and Power Automate (PA), in both cloud and desktop automation capabilities.

  • Create Own Activities (UiPath: Yes; Power Automate: No) – UiPath allows for activities to be developed by the community and published for reuse by other developers. This has helped mature the ecosystem significantly in terms of functionality, and might slow down the progress of Microsoft getting to a more ‘stable’ place with some of the activities/actions.
  • HTTP Requests (UiPath: Yes; Power Automate: No) – HTTP requests are useful when using APIs, which are far more reliable than front-end automation. This can be implemented from the cloud flows, but it doesn’t allow for optimised development of flows. HTTP Requests also require Premium licensing on Power Automate.
  • Convenient Use of Try Catch Blocks (UiPath: Yes; Power Automate: No) – Although there is functionality for ‘on error’ blocks, which are the equivalent of ‘try catch’ blocks on PA Desktop, using these is tricky because errors are still thrown.
  • Global Exception Handler (UiPath: Yes; Power Automate: No) – A global exception handler in UiPath catches any error that is thrown and allows the developer to decide how to deal with these errors on a ‘per solution’ basis. This would come in very handy for any RPA tool, especially where front-end automation is taking place and application exceptions can be unpredictable when change is introduced without prior notice.
  • Renaming of Activities (UiPath: Yes; Power Automate: No) – PA Desktop does not yet allow you to rename actions, which makes identifying exactly which action failed really tedious when looking at the errors returned by the desktop flow on the cloud flow.
  • Pin ‘On Error’ Next Action to Subflow or Set Variable (UiPath: Kinda; Power Automate: Yes) – PA Desktop allows a number of different options when configuring the ‘on error’ portion of a get element details or click activity. This is something that would need to be handled in a try catch statement in UiPath for most activities. Some activities in UiPath now allow you to configure ‘ContinueOnError’ as well as setting the output UI element, which could be used as an ‘on error’ check.
  • Scoped Variables (UiPath: Yes; Power Automate: No) – All variables in PA Cloud and Desktop are ‘global’ and can be accessed from anywhere in the flow; the disadvantage is more memory usage, especially when storing elements with a lot of JSON data (for example). On the other hand, this supports motions presented in DoDAF architectures.
  • Set Input and Output Variable Types to Anything Other Than Text (UiPath: Yes; Power Automate: No) – This makes the use of booleans quite difficult.
  • Custom Creation and Control Over Variables, Including Type (UiPath: Yes; Power Automate: No) – In PA Desktop, variable creation is done as a result of an object or by using set variable. The type is therefore dependent on the result type.
  • Use Excel Documents Without Excel Being Installed (UiPath: Yes; Power Automate: No) – UiPath allows for a workbook to be referenced and used without the need for Excel to be installed. PA Desktop doesn’t allow for this yet; hopefully it will soon, to avoid additional licensing.
  • Built-in Asset and Credential Management (UiPath: Yes; Power Automate: Not really) – UiPath allows for the storage of credentials and assets on the Orchestrator. PA works best when storing credentials in Azure Key Vault, which requires Premium licensing.
  • Built-in Queueing Mechanism (UiPath: Yes; Power Automate: No) – The UiPath Orchestrator allows for queues to be created, with different triggers, which allows for items to be queued for processing based on a multitude of factors, like priority, deadlines, etc. The Power Platform does not have this built in natively yet and requires Azure resources to assist with this (resulting in Premium licensing).
  • Bulk Processing of Items in Excel (UiPath: Yes; Power Automate: No) – UiPath allows for the manipulation of Excel data in ranges, meaning that more than one row can be inserted, deleted or updated at the same time. The Excel connector on PA Cloud only allows CRUD on a ‘per row’ basis, which is not at all optimal when processing larger files. For time optimisation, different solutions will need to be investigated, like using an Azure Function or some other storage mechanism, possibly even the extended use of formulas.
  • Trigger a Process When Receiving an Email (UiPath: Yes; Power Automate: Yes) – UiPath makes use of Integration Service to trigger processes in a similar fashion to Power Automate Cloud flows.
  • Edit Selectors or Elements of UI Automation (UiPath: Yes; Power Automate: No) – UiPath allows you to turn selectors into dynamic selectors, making them reusable and less specific. PA Desktop only allows you to add elements as they are; they cannot be edited. Getting these elements set up can be more tedious in PA Desktop than in UiPath, as ERPs like SAP (for example) require more configuration and access for PA Desktop, which needs transactions to be executed.
  • Level of Restriction (UiPath: No; Power Automate: Yes) – Power Automate does not allow for a depth of more than 9 levels of scopes, loops and conditions, which makes sense in terms of performance, but without the use of sub-flows (like invoking workflows in UiPath) this becomes difficult to manage on more complex PA Cloud flows.
  • Access Log Runs (UiPath: Yes; Power Automate: Kinda) – To access Power Automate logs automagically, you need to use some PowerShell. It’s not as easy as using an API or database (as you can if the UiPath Orchestrator is on-prem).
  • Detailed Execution Time of Each Step (UiPath: No; Power Automate: Yes) – PA allows you to see how long each action took in the cloud flow’s run history. To do this with UiPath, you’d need to write some custom logging and calculations.
  • Decision to Upgrade Packages and Roll Back to Previous Versions (UiPath: Yes; Power Automate: No) – It would be useful to be able to roll back to previous versions of connectors when bugs occur that break something that was previously working.

Ease-of-use

In terms of the ease-of-use and implementation of different automation categories and capabilities, Power Automate maintains the upper hand for integrations with the Microsoft stack and ecosystem while UiPath maintains the upper hand for categories that make use of more ‘custom’ types of automation.

ActivityUiPathPower Automate
SAP AutomationX
UI AutomationX
Web AutomationX
Mobile AutomationX
Testing AutomationX
Database ManipulationX
Word ManipulationX
Excel ManipulationX
SharePoint ManipulationX
OneDrive ManipulationX
SalesForce IntegrationX
Microsoft Teams IntegrationX
Microsoft Outlook IntegrationX
Microsoft Graph IntegrationX
Azure IntegrationX
Artificial IntelligenceX

Development Time

Power Automate is really powerful when you’re building an integration flow that contains a few actions which use a small number of connectors (between 1 and 5) that encapsulate quick and easy-to-use API calls. As soon as the flow is expected to deal with a few additional business rules or process steps, the platform loses efficiency. The more actions are used, the longer the flow takes to load (especially when trying to debug historical runs). UiPath maintains the upper hand when developing more complex solutions. It is much easier to build modular components and bring them all together using UiPath, and developing a solution in modules can reduce development time. Waiting for a platform to respond can have a drastic effect on development time; it is easy to get distracted, as a developer, waiting for a flow to load or to run a test.

  • Unit Testing – UiPath allows you to create sequences and workflows in units which can be invoked within the main workflow. It is difficult to effectively invoke subflows in PA Cloud, especially taking licensing implications into account.
  • Debugging – UiPath allows developers the option to debug or run a solution, which gives the developer the ability to either fast-track the testing process by running the process while watching the logs, or to debug each step of the solution run through debug mode. UiPath allows the developer to debug activities, workflows and sequences in units, which speeds up the testing and development process. Although PA Desktop allows for the ‘Run from action’ functionality, it is not available on PA Cloud.
  • Realtime Logging – UiPath allows for information to be logged at different levels (like trace, verbose, information, warning and error), while PA Desktop and PA Cloud only display warnings (more common on PA Cloud) and errors. By allowing developers to monitor logs as the flow runs, it is easier to determine which steps have been successfully passed and which steps the process might be stuck on.
  • Developer UI Experience – It is tedious to work on PA Cloud in a browser window because the platform responds differently between browsers. Sometimes browsers crash and all work is lost, and one cannot save the flow if there are errors. Commenting out (or disabling) code is not yet allowed on PA Cloud, although there is a workaround available; this is not efficient, as disabling actions would allow developers to save their work more frequently. The copy and paste functionality on PA Cloud is buggy and doesn’t always work. The above points are not applicable in UiPath.

Monitoring

UiPath and Power Automate both have different kinds of monitoring available to developers and operational support. The variety of monitoring tools and mechanisms made available by UiPath have the advantage over the mechanisms provided by the Power Platform as they allow for monitoring automations in finer detail.

FeatureUiPathPower AutomateComment
View exactly which item failedYesYes, but it is tedious to monitor flows on solutions with high volumeAlthough Power Automate does allow for flows to be monitored from the monitor tab via the online portal, it would be very difficult to track exactly which items had issues, especially when desktop flows are called from the cloud flow where the desktop flows have quite a lot of exception handling built in. The Monitor tab does allow you to view failed cloud flows as well as monitor desktop flows but linking the cloud flow that triggered the desktop flow is tricky.
Monitor realised benefits and savingsYesNoThis can be managed in multiple ways. UiPath offers this level of monitoring as part of the UiPath Insights and Automation Hub offerings. Both require additional licensing, however, the option is available.
Monitor process logsYesYes, kindaUiPath allows for logs to be viewed at different levels while the process is being executed in UiPath Studio as well as the UiPath Orchestrator. UiPath queue items also store individual logs where queue items are processed. Information level logs may be used to log information as the process is being executed.
 
In order for this to work for Power Automate flows, one would need to write a custom logging solution which would affect any per user flow licensing plans in cases where these logs are called from the cloud flow.

Scalability

The scalability of a solution should be monitored and compared on a case-by-case basis. Microsoft recommends creating multiple flows or processes to better manage scalability, whereas UiPath takes a different approach by segregating a process at queue-item level, where one process can execute multiple items in a single execution run. In the table below, X denotes the recommended technology for each feature:

FeatureUiPathPower AutomateComment
CostXScaling solutions in UiPath is cheaper than in Power Automate as licensing remains mostly the same. Additional cost, when using UiPath, is introduced when additional infrastructure is needed or when additional users need to be licensed. In cases where Document Understanding units are fully utilised, additional credits may be purchased from UiPath; the same applies when AI Builder credits are depleted.
 
Power Automate requires additional licensing when a solution is scaled. It is important to understand exactly to what extent the solution needs to be scaled when architecting the solution as any technical debt would impact the solution licensing and costing. Additional API calls would need to be purchased when using a per user plan. In a case where multiple instances of the flow need to be created, each flow would need to be licensed on a per flow plan. There are limits on the number of processes that may run concurrently.
Additional Resources for OptimisationXIn order to reduce run time and enhance performance/processing time, Azure resources (like Azure Functions) may need to be used to assist in scaling the solution in an optimised fashion when using Power Automate. Although scripts and code blocks may be incorporated into UiPath solutions, it is not necessary for additional resources to be used in such solutions to optimise the solution.

Cost

In terms of cost, Power Automate maintains the advantage in cases where the scope is small and in line with (only) integration tasks. As soon as the scope becomes more complex and the solution needs to scale, the cost starts to increase. The licensing options for Power Automate vary based on the scope of the solution design. The two main options are a per user plan and a per flow plan, each with its own limitations. A per user plan has a daily limit on the number of API calls that can be made from the flow, which constrains large-scale solutions, although additional API calls may be purchased from Microsoft. The per flow plan only licenses one flow, which means that for scaled solutions where multiple flows need to run in parallel (due to concurrency processing limits), additional flow licenses would need to be purchased.

UiPath maintains the advantage at enterprise level, where multiple processes are developed and orchestrated within the environment. The UiPath Orchestrator is expensive and incurs a large annual cost, but when that cost is distributed across multiple processes, realised savings become more achievable.

Based on the above, the cost recommendation for this solution is summarised below, with X denoting the recommended technology for each category:

FeatureUiPathPower Automate
Long term cost – not scaledX
Short term cost – not scaledX
Long term cost – scaledX
Short term cost – scaledX

Summary

Based on all of the comparisons addressed in this post, the table below provides a summary of the recommendations:

CategoryUiPathPower Automate
FunctionalityX 
Ease of useX 
Development timeX 
MonitoringX 
ScalabilityX 
Long-term costX 
Short-term costX 

As seen in the table above, UiPath has the advantage in all of the categories observed, although Power Automate will render a short-term cost saving in non-scaled solutions. In a case where multiple processes will be built, UiPath would be a suitable solution. It is important to note that Power Automate and UiPath (among many other automation technologies) may co-exist in the same automation ecosystem in ways in which they complement each other. Please note that some of the above aspects may be improved by either (or both) vendor(s) in the future, which may change the recommendation; however, at the time of writing, this is how the two technologies match up when compared with one another. These recommendations are based on the lessons learnt from our previous experiences with UiPath and Power Automate implementations.

Power Automate adds the most value when utilising the licensing models that are already in use within an organisation to build automations that do not require additional (or premium) licensing and more importantly, do not exceed any limits imposed by additional licensing (like scaling flows that are licensed on a ‘per user’ plan). There are multiple ‘basic’ templates that allow for quick and easy automations and integrations. Power Automate Premium licensing allows solutions to be built with more advanced capabilities. Although Power Automate solutions are scalable, doing so can be quite tedious. The Power Platform does allow for multiple licensing options.

UiPath adds the most value in an environment where an automation ecosystem is being formed to cater for enterprise-level solutions that should be easily scalable. The UiPath offering includes a wide range of products that support their matured automation and hyperautomation lifecycle. There are multiple licensing models available for the offerings provided by UiPath.

There is a lot of value that can be gained by implementing a hybrid approach to automation where UiPath and Power Automate are used, in conjunction, to provide powerful integrations and automations within an organisation’s Automation Center of Excellence (CoE). Both technologies have a lot of potential, with exponential growth predicted within the next few years. Each have their own benefits and shortfalls which make them competitive in their own ways.

Apart from the licensing aspects, the functionality, ease-of-use and implementation observations are also applicable to Azure Logic Apps.

Please note that all of these were findings captured at a specific point in time and may have been improved since the post was published.

Featured

[Azure] In-Stream Excel Manipulation in .NET Core, using Azure Blob Storage and Azure Functions

Level of Difficulty: Junior – Senior.

So you have an Excel template stored in Azure Blob Storage which needs to be read and manipulated by inserting data into it, returning the manipulated file in-stream. This can happen either on a schedule (Time Trigger) or on a per request (HTTP Trigger) basis. The decision is to build the logic out using a C# Azure Function. The data to be inserted into the Excel template will be sent into the function as the request body.

The Azure Function will form part of a larger solution which will send the data into the Azure Function. The file content of the Excel template will be read from Azure Blob Storage, the data received in the request body will be inserted, and the new file content will be sent back in the response body.

A possible use case for this could be bulk processing a file that is being used in a Power Automate cloud flow. The existing Excel connector only allows for single entries into an Excel document once the data has been manipulated, which can become quite time consuming. To optimise such a solution, the Azure Function could handle the data validation, manipulation and bulk entry in a much more timely manner.

Depending on the consumption requirement, an Azure Function could be a rather cost-effective solution. Azure Functions can be licensed using an App Service plan, a Consumption plan or a Premium plan. The Consumption plan comes with a free grant of 1 million executions per month, per subscription. When deploying an Azure Function, a Blob Storage resource is required; in a case where a Blob Storage resource does not exist, one will be created when deploying a new Azure Function from Visual Studio. Application Insights is not required when deploying an Azure Function, however, it is an extremely useful resource when debugging the function.

What are the steps?

The steps that we will be following are:

  1. Create Azure Function Project (Using .NET Core 3.1)
  2. Create Excel Document
  3. Upload Document to Blob Storage
  4. Add Functionality to the Azure Function
    1. Install Packages
    2. Create Models
    3. Connect Blob Storage
    4. Add Manipulation Logic
  5. Test
  6. Publish

Let’s dive deeper.

Create an Azure Function Project

For the creation and deployment of the Azure Function, we will be using Visual Studio. Before you open Visual Studio, open the Visual Studio Installer and ensure that the Azure development workload is installed, as well as ASP.NET and web development:

Once those have been installed and VS has been launched, create a new project and select the C# Azure Functions project template.

Configure your project appropriately before hitting Create. Note that the name of your project does not necessarily dictate the name of your function; that can still be changed.

Ensure that you have .NET Core 3.1 installed, as Azure Functions currently supports 3.1 and not 2.1. You can install the .NET Core 3 SDK here. Select Http trigger, set up the Blob Storage account (or use the emulator if you don’t have a Blob Storage account), then set the authorization level before hitting Create.

Let’s rename the Function1.cs file to ProcessInformation.cs. Ensure that you change the class and function name too, before saving the changes (Ctrl+S):

Next, let’s create the Excel Document.

Create an Excel Document

For the purpose of this use case, we will be creating an Excel document, named Information, with one sheet, named Details, with a table named PersonalDetails. The PersonalDetails table will contain three columns, namely: Name, Surname and Date of Birth.

To insert a table on a worksheet, navigate to the Insert tab, select Table then select the range and check the ‘My table has headers’ checkbox, before hitting OK:

Once the table has been created, change the column names. To change the table name, focus on the table by clicking on one of the cells in the table, the Table Design tab will pop up. Navigate to the Table Design tab and change the Table Name:

Save the document and prepare for upload to Blob Storage.

Upload Document to Blob Storage

For the sake of this post, I am going to assume you already have a Blob Storage resource setup. If you don’t, check out this post on how to create blob storage and upload files to a container.

Navigate to your blob storage account and relevant container before selecting Upload. Navigate to the file that you would like to upload and select Upload.

Let’s get back to the Azure Function and start adding functionality.

Add Functionality to the Azure Function

Before we can add the necessary functionality to the Azure Function, we first need to install the relevant packages, using NuGet Package Manager.

Install Azure.Storage.Blobs

Azure.Storage.Blobs is used to read files from and write files to the relevant Azure Blob Storage Account, based on the connection string provided.

Navigate to the NuGet Package Manager (Tools > NuGet Package Manager > Manage NuGet Packages for Solution…), then search for Azure.Storage.Blobs and install the latest stable version of the package.

Install ClosedXML

ClosedXML is used to manipulate the Excel document.

Navigate to the NuGet Package Manager (Tools > NuGet Package Manager > Manage NuGet Packages for Solution…), then search for ClosedXML and install the latest stable version of the package.

Now that the libraries are installed, let’s create the models that we need to define for the input data that will be received from the request body as well as the data structure for the Excel table.

Create PersonalDetails Model

The PersonalDetailsModel will be used to represent the structure of the data to be written to the Excel table named PersonalDetails.

The code for the model can be added as a partial class to the bottom of the ProcessInformation.cs file (below the ProcessInformation class). The code for the model should look something like this:

partial class PersonalDetails
{
    [JsonProperty("Name")]
    public string Name { get; set; }
    [JsonProperty("Surname")]
    public string Surname { get; set; }
    [JsonProperty("DateOfBirth")]
    public string DateOfBirth { get; set; }
}

Create InputData Model

The InputDataModel will be used to represent the structure of the data to be received as the request body. The information passed through as the request body needs to be written to the Excel table named PersonalDetails.

The code for the model can be added as a partial class to the bottom of the ProcessInformation.cs file (below the ProcessInformation class). The code for the model should look something like this:

partial class InputData
{
    [JsonProperty("PersonalDetails")]
    public List<PersonalDetails> PersonalDetails { get; set; }
}

In order to use the List capability, ensure that you’ve added System.Collections.Generic to your using statements.

Read File From Blob

Ensure that you add Azure.Storage.Blobs to the using statements as a reference before creating the method that will read the file from the storage account.
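
For reference, by the end of this post the using statements at the top of ProcessInformation.cs should look something like the block below. This assumes the default HTTP trigger template generated by Visual Studio, so your exact list may differ slightly:

using System;
using System.IO;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using Azure.Storage.Blobs;
using ClosedXML.Excel;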

You can create a class named BlobStorage beneath the ProcessInformation class which will contain the method, named ReadTemplateFromBlob, which reads the file from your storage account and returns the content as a MemoryStream. The class, with the method will look something like this:

class BlobStorage
{
    public MemoryStream ReadTemplateFromBlob()
    {
        // TO DO: Add Connection String and move into secure storage
        string connectionString = "<connectionstring>";
        string container = "excel-templates";

        // Get blob
        BlobContainerClient blobContainerClient = new BlobContainerClient(connectionString, container);
        var blob = blobContainerClient.GetBlobClient("Information.xlsx");

        MemoryStream memoryStream = new MemoryStream();

        // Download blob
        blob.DownloadTo(memoryStream);

        return memoryStream;
    }
}

The reason we create a class is to allow for the expansion of the code to encapsulate any other functionality that is specific to BlobStorage.
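
For example, a hypothetical method like WriteStreamToBlob below could later be added to the same class to write the populated workbook back to the container. This is only a sketch and not part of the solution built in this post:

public void WriteStreamToBlob(MemoryStream stream, string blobName)
{
    // TO DO: Add Connection String and move into secure storage
    string connectionString = "<connectionstring>";
    string container = "excel-templates";

    // Get blob
    BlobContainerClient blobContainerClient = new BlobContainerClient(connectionString, container);
    var blob = blobContainerClient.GetBlobClient(blobName);

    // Rewind the stream and upload, overwriting the blob if it already exists
    stream.Position = 0;
    blob.Upload(stream, overwrite: true);
}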

Add Excel Processing Logic

Now, let’s create some functionality that will allow us to write data into the Excel document. Don’t forget to add ClosedXML.Excel to the using statements.

Create a class named ExcelProcessing beneath the ProcessInformation class which will contain the method, named PopulateExcelTable, which will take in the data to be written to Excel along with the name of the sheet and table that the data should be written into. The method will return the content as a MemoryStream. The class, with the method will look something like this:

class ExcelProcessing
{
    public MemoryStream PopulateExcelTable(List<PersonalDetails> personalDetailsList, MemoryStream stream, string sheetName, string tableName)
    {
        // Link to existing workbook
        using var wbook = new XLWorkbook(stream);

        // Link to sheet
        var ws = wbook.Worksheet(sheetName);

        // Replace data in existing Excel Table
        var table = wbook.Table(tableName);
        table.ReplaceData(personalDetailsList, propagateExtraColumns: true);

        // Save file back into the stream
        wbook.SaveAs(stream);

        // Return stream
        return stream;
    }
}

The reason we create a class is to allow for the expansion of the code to encapsulate any other functionality that is specific to ExcelProcessing, like reading from Excel, for example.

There is some brilliant functionality available for further manipulating an Excel document, including the use of formulas and formatting.
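
As a small illustrative sketch (not part of this solution), formulas and formatting can be applied through ClosedXML along the following lines; the sheet name, ranges and formula below are only examples:

// Illustrative only: applying formatting and an example formula with ClosedXML,
// using the same kind of MemoryStream as the method above
void FormatDetailsSheet(MemoryStream stream)
{
    using var workbook = new XLWorkbook(stream);
    var worksheet = workbook.Worksheet("Details");

    // Bold and shade the header row
    worksheet.Range("A1:C1").Style.Font.Bold = true;
    worksheet.Range("A1:C1").Style.Fill.BackgroundColor = XLColor.LightGray;

    // Add an example formula and auto-fit the column widths
    worksheet.Cell("E2").FormulaA1 = "=COUNTA(A:A)-1";
    worksheet.Columns().AdjustToContents();

    workbook.SaveAs(stream);
}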

Now let’s bring this together in the main function.

Add Function Logic

The Azure Function is going to be used to receive the request body, deserialise it and pass it through to the ExcelProcessing class to populate the Excel table. Once the table has been populated, the content of the Excel document will be returned as a MemoryStream. The code for the static function class should look a little something like this:

public static class ProcessInformation
    {
        [FunctionName("ProcessInformation")]
        public static async Task Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request to process information.");

            try
            {
                // Create blank request body
                string requestBody = String.Empty;

                // Populate request body
                using (StreamReader streamReader = new StreamReader(req.Body))
                {
                    requestBody = await streamReader.ReadToEndAsync();
                }

                log.LogInformation("Request body received.");

                // Convert JSON string to object
                var data = JsonConvert.DeserializeObject<InputData>(requestBody.ToString());

                log.LogInformation("JSON deserialised and parsed through model.");
                
                // Create new list of PersonalData and populate with InputData
                List<PersonalDetails> personalDetailsList = data.PersonalDetails;

                log.LogInformation("New list of personal details created.");

                // Save Excel Template file from Blob Storage
                BlobStorage blobStorage = new BlobStorage();
                var stream = blobStorage.ReadTemplateFromBlob();

                log.LogInformation("Template retrieved from blob.");

                // Populate Excel Spreadsheet
                ExcelProcessing excelProcessing = new ExcelProcessing();

                var excelStream = excelProcessing.PopulateExcelTable(personalDetailsList, stream, "Details", "PersonalDetails");

                log.LogInformation("Excel populated");

                // Create response
                var response = req.HttpContext.Response;
                response.ContentType = "application/json";

                // Create stream writer and memory stream
                using StreamWriter streamWriter = new StreamWriter(response.Body);

                // Add the memory stream to the stream writer/request.Body
                await streamWriter.WriteLineAsync("[\"" + Convert.ToBase64String(excelStream.ToArray()) + "\"]");
                await streamWriter.FlushAsync();
            }
            catch (Exception e)
            {
                log.LogError($"Error processing information: {e.Message}");
            }
        }
    }

You can test the solution locally, using Postman.

Test the Azure Function

Ensure that Postman is installed then run the Azure Function. Grab the URL that pops up in the command line and paste that into Postman:

Hit the body tab, click on the raw radio button, change the type to JSON and paste the following into the body:

{
    "PersonalDetails": [
        {
            "Name": "Jacqui",
            "Surname": "Muller",
            "DateOfBirth": "1996/07/27"
        },
        {
            "Name": "John",
            "Surname": "Doe",
            "DateOfBirth": "2000/01/01"
        }
    ]
}

When you hit Send, you should see a response similar to this:

This response can be converted back to binary in order to save the file, which can then be incorporated into the Power Automate flow.
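
Within Power Automate itself, the Base64 string can typically be turned back into file content using the base64ToBinary() expression. Outside of a flow, a minimal C# sketch of the same conversion (an assumed helper, not part of the function) could look like this:

// Convert the response body, a JSON array containing a single Base64 string, back into an .xlsx file
static void SaveResponseAsExcel(string responseBody, string outputPath)
{
    // Unwrap ["<base64>"] into the Base64 string
    string base64 = JsonConvert.DeserializeObject<string[]>(responseBody)[0];

    // Decode to binary and write the workbook to disk
    byte[] fileBytes = Convert.FromBase64String(base64);
    File.WriteAllBytes(outputPath, fileBytes);
}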

Lastly, we need to publish this solution to Azure from Visual Studio.

Publish

Navigate to the solution explorer and double click on Connected Services. If you don’t see connected services, right click on the function and select Publish:

Set up your publish profile by selecting Azure as the target of the deployment, then hit Next:

Next, select Azure Function App (Windows), before selecting Next:

If you don’t already have a resource, add a new one:

Fill in and validate all required details, before hitting Create:

Once the resource has been created, you can hit Publish:

Once your Azure Function has been successfully published, you may test the function using Postman and enhance as you see fit.

This solution is available on Github.

Did this help you? Did you get stuck? Feel free to reach out by leaving a comment below.

Featured

[Automation] Using Google Drive, Google Sheets and Python in Google Colaboratory to Automate Rubric Creation and Synchronisation

Level of Difficulty: Intermediate – Senior.

Have you ever run into a situation where you have over 200 students whose work needs to be marked and the admin around it is enough to make anyone go crazy? Well here’s a step towards getting around the labour-intensive admin that goes hand-in-hand with marking student projects, using Google Drive, Google Sheets and a Google Colaboratory Python script.

What are the steps?

The steps that we will be following are:

  1. Setup the project in the Google Developer Console
    1. Create a service account
    2. Download the service account key file
  2. Create the Google Drive folder and content
    1. Create a new Google sheet for the rubric template
    2. Create a new Google sheet for the class list
    3. Create a new Google sheet for the class results
  3. Create the Python script
    1. Create ‘Replicate Rubric for Students’ script
    2. Create ‘Update Class Results’ script

Deep Dive

Let’s dive deeper into the steps listed above.

Setup Project in Google Developer Console

Create Service Account

Create a New Project in the Google Developer Console:

Make sure your newly created project has been selected before you start enabling APIs and creating service accounts:

Under APIs & Services, navigate to Credentials:

Add a new Service account:

Give your service account a name:

Retrieve the Service Account Key

Edit the service account you’ve just created in order to create a key:

Select Keys, then select Create New Key from the Add Key dropdown:

Select JSON then click Create and Save the Key.json file to your computer:

Enable APIs

Navigate to and enable the Google Drive API:

Navigate to and enable the Google Sheets API:

Create Google Drive Folder and Content

Please note that all the Google Drive Content is available in the GitHub Repo if you’d like to upload the files I’ve used for this post.

Navigate to Google Drive and Create a root folder that you will use to store all of your collateral:

Share the folder you’ve just created with the service account that you’ve also just created:

Create a new Google Sheet to store the template of your rubric to be replicated for each student:

The template rubric may look something like this:

In this case, the Indicator field will contain either a 1 (for yes, criteria is present in student submission) or a 0 (for no, criteria is not present in student submission), resulting in the mark either equating to the weight of the criteria, or 0. The formula of the mark field may need to be adapted to cater for your needs if you are using a sliding scale form of evaluation.

Next, create a new Google Sheet to store the list of student information to be used:

Create a new Google Sheet named Results to store the class results:

Create Python Script

Replicate Rubric for Every Student

You’ll first need to run the following commands to ensure that the required libraries are installed:

pip install oauth2client
pip install PyOpenSSL
pip install gspread
pip install pydrive

Then you’ll want to import the following libraries:

import pandas as pd
import gspread
import io

from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from oauth2client.service_account import ServiceAccountCredentials
from google.colab import files #comment out if you aren't using Google Colab

Next, you’ll want to initialise your authorisation with Google, replacing ‘<YOUR KEY FILE PATH>‘ with the path to the service account key downloaded from the Google Developer Console:

# Init Google (with Auth)

file_path = r"<YOUR KEY FILE PATH>.json"

scope = ['https://spreadsheets.google.com/feeds',
         'https://www.googleapis.com/auth/drive']

credentials = ServiceAccountCredentials.from_json_keyfile_name(file_path, scope)

# Used to auth Google Sheets
gc = gspread.authorize(credentials)

# Used to get all files in Google Drive folder
gauth = GoogleAuth()
gauth.credentials = credentials
drive = GoogleDrive(gauth)

In order to create a rubric for each student, you’ll need to access the populated class list created in the Google Drive Folder:

# Open Spreadsheet
spreadsheet = gc.open('Class List')

# Get Student Class List
student_list_sheet = spreadsheet.get_worksheet(0)
student_list = student_list_sheet.get_all_values()
student_headers = student_list.pop(0)

# Read Student Class List into DataFrame
df_students = pd.DataFrame(student_list, columns=student_headers)

df_students

For every student in the class list (df_students), create the rubric. Replace ‘<YOUR FOLDER ID>‘ with the folder ID in which you’d like to store all of the student rubrics. It would be advisable to create a subfolder, named Student Rubrics in the Google Drive root folder:

template_spreadsheet = gc.open('Template_Rubric')

detailed_rubric = template_spreadsheet.get_worksheet(0)
rubric_summary = template_spreadsheet.get_worksheet(1)

for index, element in df_students.iterrows():
  
    print(element['Student Number'])

    # Create Workbook
    workbook = gc.create('{}_Rubric'.format(element['Student Number']), folder_id='<YOUR FOLDER ID>')

    # Update Rubric Path (in Student Class List)
    sheet_id = workbook.id
    sheet_url = 'https://docs.google.com/spreadsheets/d/{}/'.format(sheet_id)

    # Update Student Rubric Path
    student_list_sheet.update('D{}'.format(index + 2), sheet_url)

    # Duplicate Spreadsheets
    detailed_rubric_worksheet = detailed_rubric.copy_to(spreadsheet_id=workbook.id)
    rubric_summary_worksheet = rubric_summary.copy_to(spreadsheet_id=workbook.id)

    # Delete Sheet1
    worksheet = workbook.sheet1
    workbook.del_worksheet(worksheet)

    # Get Duplicated Spreadsheets
    student_spreadsheet = gc.open('{}_Rubric'.format(element['Student Number']))
    detailed_rubric_worksheet = student_spreadsheet.get_worksheet(0)
    rubric_summary_worksheet = student_spreadsheet.get_worksheet(1)

    # Update Sheetnames
    detailed_rubric_worksheet.update_title('Detailed Rubric')
    rubric_summary_worksheet.update_title('Rubric Summary')

    # Update Student Information
    detailed_rubric_worksheet.update('B1', str(element['Initials']))
    detailed_rubric_worksheet.update('B2', element['Surname'])
    detailed_rubric_worksheet.update('B3', element['Student Number'])

    # Update #REFs in Student Rubric - Student Information
    for row in range(1, 5, 1):

        # Get Formula  
        cell = rubric_summary_worksheet.acell(f'B{row}', value_render_option='FORMULA').value

        # Set Formula
        rubric_summary_worksheet.update(f"B{row}", cell, raw=False)

    # Update #REFs in Student Rubric - Student Summary Marks
    for row in range(7, 19, 1):

        # Get Formula    
        cell = rubric_summary_worksheet.acell(f'D{row}', value_render_option='FORMULA').value

        # Set Formula
        rubric_summary_worksheet.update(f"D{row}", cell, raw=False)

The above code block will not only create a new rubric, but also fill in the information available in the class list and correct any corrupt formulas. If you change the contents of your rubric, please pay careful attention to the indexes used above to update formulas. If you are not using formulas extensively, feel free to remove the formula manipulation code.

Update Class Results

Create a method, named GetData, which will pull all of the rubric data from the student rubrics. This method will be reused to process the data from every student’s rubric file stored on the drive:

def GetData(workbook):
    
    # Open Spreadsheet
    spreadsheet = gc.open(workbook)

    # Retrieve Student Number From Spreadsheet
    student_number = spreadsheet.get_worksheet(1).get('B3') # saves as list of list
    student_number = student_number[0][0]

    # Get Detailed Rubric
    detailed_rubric = spreadsheet.get_worksheet(0).get('A6:H42')
    rubric_headers = detailed_rubric.pop(0)

    # Get Category Rubric
    category_rubric = spreadsheet.get_worksheet(1).get('G6:J9')
    category_rubric_headers = category_rubric.pop(0)

    # Get Sub-Category Rubric
    sub_category_rubric = spreadsheet.get_worksheet(1).get('A6:E17')
    sub_category_rubric_headers = sub_category_rubric.pop(0)

    # Get Total
    total = spreadsheet.get_worksheet(1).get('I10') # saves as list of list
    total = total[0][0]

    # Read Rubrics into DataFrame
    df_category_rubric = pd.DataFrame(category_rubric, columns=category_rubric_headers)
    df_sub_category_rubric = pd.DataFrame(sub_category_rubric, columns=sub_category_rubric_headers)
    df_detailed_rubric = pd.DataFrame(detailed_rubric, columns=rubric_headers)
    
    # Return all of the dataframes, the student_number and the total
    return df_category_rubric, df_sub_category_rubric, df_detailed_rubric, student_number, total

Create a method, named ProcessResults, which will process and write the results from the student rubric file into the main class results file. This method will be reused by the method created below to store the results at different levels of granularity (Category, Sub-Category, Criteria and Results) in different sheets:

def ProcessResults(df_entry_raw, sheet_index, student_number, transpose=True, has_student_number=False):
    
    try:
    
        if transpose:
            
            # Transpose dataframe
            df_entry_transpose = df_entry_raw.transpose()

            # Make the Result Row the Headers
            df_entry = df_entry_transpose.rename(columns=df_entry_transpose.iloc[0])
            
            # Remove the Result Row
            df_entry = df_entry.iloc[1: , :]

            # Remove 'Mark' as the Index
            df_entry = df_entry.reset_index(drop=True)


        else:
            
            df_entry = df_entry_raw
            
        # If the student number is not provided in the dataset, assign it
        if has_student_number == False:
        
            # Add Student Number Column
            df_entry['Student Number'] = student_number

            # Move Student Number Column to the Front of the DataFrame
            cols = list(df_entry.columns)
            cols = [cols[-1]] + cols[:-1]
            df_entry = df_entry[cols]
        
        # Write to Excel if something changes, get the new columns and add it to the Results sheet
        #df_entry.to_excel('{} Results.xlsx'.format(sheet_index), index=False)
        
        # Open Results Spreadsheet
        spreadsheet = gc.open("Results")

        # Records
        results_worksheet = spreadsheet.get_worksheet(sheet_index)
        results_records = results_worksheet.get_all_values()

        results_headers = results_records.pop(0)

        # Read Results Records into DataFrame
        df_results = pd.DataFrame(results_records, columns=results_headers)

        # Check if Student Results Already Recorded
        if (df_results['Student Number'] == student_number).any():

            # Get Index of Current Student Number
            indexes = df_results.index[df_results['Student Number'] == student_number].tolist()

            for index in indexes:

              # Drop old Record
              df_results = df_results.drop(index)

        # Add Student Record
        df_results = df_results.append(df_entry)

        results_worksheet.update(
            [df_results.columns.values.tolist()] + [[vv if pd.notnull(vv) else '' for vv in ll] for ll in df_results.values.tolist()]
        )
        
        return True
        
    except Exception as e:
        
        print(e)
        
        return False

Create a method, named ProcessStudentResults, which will break the results up into different levels of granularity and pass them through ProcessResults to be written into the class results workbook:

def ProcessStudentResults(workbook):

    status = False

    results = GetData(workbook)

    df_category_rubric = results[0]
    df_sub_category_rubric = results[1]
    df_detailed_rubric = results[2]
    student_number = results[3]
    total = results[4]

    # Filter columns: Detailed Results 
    df_entry_raw = df_detailed_rubric[['Criteria','Mark']]
    # criteria table: first tab (0) in workbook
    status = ProcessResults(df_entry_raw, 0, student_number=student_number)

    # Detailed Results Processed Successfully -> Filter columns: Sub-Category Results
    if status:

        print('Detailed Results Updated')

        df_entry_raw = df_sub_category_rubric[['Sub-Category','Mark']]
        # second tab (1) in workbook
        status = ProcessResults(df_entry_raw, 1, student_number=student_number)

    else:

        print('Detailed Results NOT Updated')
      
    # Sub-Category Results Processed Successfully -> Filter columns: Category Results 
    if status:

        print('Sub-Category Results Updated')

        df_entry_raw = df_category_rubric[['Category','Mark']]
        # third tab (2) in workbook
        status = ProcessResults(df_entry_raw, 2, student_number=student_number)

    else:

        print('Sub-Category Results NOT Updated')

    # Category Results Processed Successfully -> Add Total and Display
    if status:
      
        print('Category Results Updated')

        df_entry_raw = pd.DataFrame(data=
                                  {
                                      'Student Number': [student_number], 
                                      'Total': [total]
                                  }, 
                                  columns=['Student Number','Total'])

        # Results tab (tab 3) in the file
        status = ProcessResults(df_entry_raw, 3, transpose=False, has_student_number=True, student_number=student_number)

        if status:

            print('Total Updated')

        else:

            print('Total NOT Updated')

    else:

        print('Category Results NOT Updated')

For each student file, you’re going to want to execute ProcessStudentResults. Replace ‘<YOUR FOLDER ID>‘ with the folder ID that contains all student rubrics:

# List all files in the given Google Drive folder
file_list = drive.ListFile({'q': "'<YOUR FOLDER ID>' in parents and trashed=false"}).GetList()

for file in file_list:
    
    if 'rubric' in file['title'].lower() and 'template' not in file['title'].lower(): 

        print(file['title'])

        ProcessStudentResults(file['title'])

        print('\n')

Did this work for you? Did you find any enhancements to the code that could be implemented? Feel free to fork the code on GitHub and make pull requests with any enhancements.

Want to visualise the data gathered from the rubrics? Here’s a blog post on how you can connect a Google Sheet to Power BI.

If you have any feedback, or if you got stuck somewhere, feel free to reach out via email (jacqui.jm77@gmail.com) or drop a comment below.

Featured

[Automation] Retrieve Power Automate Flow Run Logs and Visualise in Power BI

Level of Difficulty: Intermediate – Senior.

This post will focus on pulling the Power Automate flow run logs using PowerShell, exporting them to a CSV file and visualising them in Power BI.

What are the steps?

The steps that we will be following are:

  1. Setup PowerShell
  2. Get the data
  3. Visualise in Power BI

Setup PowerShell

You’ll need to install the PowerApps PowerShell modules, which you can do by running the following commands as Administrator in PowerShell:

Install-Module -Name Microsoft.PowerApps.Administration.PowerShell
Install-Module -Name Microsoft.PowerApps.PowerShell -AllowClobber

Next, you’ll need to authenticate by running the Add-PowerAppsAccount command in PowerShell:

Add-PowerAppsAccount

If you’d like to execute the command without the prompt, you can use the following instead:

$pass = ConvertTo-SecureString "password" -AsPlainText -Force
Add-PowerAppsAccount -Username foo@bar.com -Password $pass

Get the Data

In order to find a flow ID (which you’ll need in order to export the logs using the next command), you’ll need to dissect the URL. Navigate to the desired flow on the web version of Power Automate and retrieve the URL, which breaks down as follows:

emea.flow.microsoft.com/manage/environments/Default-<EnvironmentID>/flows/<FlowID>/details?backUrl=%2Fflows%2F<FlowID>%2Fruns

From the above, copy the portion that represents the Flow ID of the flow that you are looking at. Then execute the following command in PowerShell, replacing <FlowID> with the ID you’ve retrieved:

Get-FlowRun <FlowID> | Export-Csv -Path '.\FlowExport.csv'

Visualise Data in Power BI

Open Power BI Desktop > Get Data > Text/CSV. Then select the FlowExport.csv that was just generated from PowerShell. Then hit Transform Data and apply the following steps:

  1. Remove the top row (which contains TYPE System.Management.Automation.PSCustomObject)
  2. Use First Row as Headers
  3. Change StartTime column to Date/Time
  4. Split Internal by semi-colon, for each occurrence of the delimiter
  5. Remove all Internal columns EXCEPT Internal.2 (which should start with ‘id=/’)
  6. Rename Internal.2 to Internal
  7. Replace ‘id=/providers/Microsoft.ProcessSimple/‘ with ‘https://emea.flow.microsoft.com/manage/’ in Internal
    • This is so that more detail can be obtained on failed runs by navigating to the URL from the report

The dataset should look similar to this before hitting Close & Apply:

Now you can visualise your logs in Power BI, for example by adding a Clustered Column Chart visualisation to the page:

Need help? Feel free to drop a comment below or reach out to me via email, jacqui.jm77@gmail.com

Featured

[RPA] Automating Azure DevOps Project, Repository and Work Items Creation through UiPath

Level of Difficulty: Beginner – Intermediate.

This post will assist in automating the creation of projects, repositories and work items in Azure DevOps through UiPath by using the Azure DevOps API.

There are multiple Azure DevOps libraries available on the UiPath Marketplace that make the API calls for you, but the aim of this post is to enable developers to understand how the Azure DevOps API works so that they can make calls that existing libraries might not cater for.

The Azure DevOps API is documented here.

What are the steps?

The steps that we will be following are:

  1. Create an Azure DevOps Personal Access Token
  2. Create API Body Templates
  3. Create a ‘Create Project’ Workflow
    1. Prepare Template
    2. Make API Call
    3. Deserialise response
  4. Create a ‘Create Repository’ Workflow
    1. Prepare Template
    2. Make API Call
    3. Deserialise response
  5. Create a ‘Create Work Item’ Workflow
    1. Prepare Template
    2. Make API Call
    3. Deserialise response
  6. Create a ‘Link Work Item’ Workflow
    1. Prepare Template
    2. Make API Call
    3. Deserialise response

Deep Dive

Let’s dive deeper into the steps listed above.

Create an Azure DevOps Personal Access Token

Login to Azure DevOps, which you can navigate to from here.

In the top right hand corner of the screen, under User Settings you can access the Personal access tokens option.

Azure DevOps Personal Access Token Generation

Select New Token:

New Personal Access Token

Give your access token a name, select an organisation if you’d like to only grant permission to a specific organisation and set the expiration date of the token. Ensure that you update the personal access token once it has expired.

New Azure DevOps Personal Access Token

Once you’ve clicked Create, copy the personal access token somewhere safe as we will need it when making the API call.

Create API Body Templates

The templates may be stored in a ‘Templates‘ folder within the UiPath solution. We will need four templates: one for projects, one for repositories, one for creating work items and one for linking work items. These templates hold the JSON bodies needed for the API requests.

The ‘new_project_template.json‘ file should contain the following:

{
	"name": "<project_name>",
	"description": "<project_description>",
	"capabilities": {
		"versioncontrol": {
		  "sourceControlType": "Git"
		},
		"processTemplate": {
		  "templateTypeId": "adcc42ab-9882-485e-a3ed-7678f01f66bc"
		}
	}
}

In UiPath we will need to replace ‘<project_name>‘ and ‘<project_description>‘.

The ‘new_repository_template.json‘ file should contain the following:

{
	"name": "<repository_name>",
	"project": {
		"id": "<project_id>"
	}
}

In UiPath we will need to replace ‘<repository_name>‘ and ‘<project_id>‘.

The ‘new_workitem_template.json‘ file should contain the following:

[
	{
		"op": "add",
		"path": "/fields/System.Title",
		"from": null, 
		"value": "<story>"
	},
	{
		"op": "add",
		"path": "/fields/System.AreaPath",
		"from": null, 
		"value": "<project>"
	},
	{
		"op": "add", 
		"path": "/fields/System.IterationPath", 
		"from": null, 
		"value": "<project>\\Iteration 1"
	}, 
	{
		"op": "add",
		"path": "/fields/System.State",
		"from": null,
		"value": "<story_state>"
	}
]

In UiPath we will need to replace ‘<story>‘, ‘<project>‘ and ‘<story_state>‘.

The ‘link_workitem_template.json‘ file should contain the following:

[
	{
		"op": "test",
		"path": "/rev",
		"value": 1
	},
	{
		"op": "add",
		"path": "/relations/-",
		"value": {
			"rel": "System.LinkTypes.Hierarchy-Reverse",
			"url": "https://dev.azure.com/<organisation>/_apis/wit/workItems/<story_id>",
			"attributes": {
				"comment": "<comment>"
			}
		}
	},
	{
		"op": "add",
		"path": "/fields/System.State",
		"from": null,
		"value": "<story_state>"
	}
]

In UiPath we will need to replace ‘<comment>‘, ‘<story_id>‘ and ‘<story_state>‘.

UiPath ‘Create Project’ Workflow

Create a folder in the UiPath solution folder named Projects and within the folder, create a ‘CreateProject.xaml‘ workflow:

Let’s start by adding a ‘Read Text File‘ activity and assigning a few variables:

The NewProjectURL above is assigned the value of "https://dev.azure.com/" + Organisation + "/_apis/projects?api-version=6.0". The Organisation variable should correspond to your organisation name in Azure DevOps.

Next, add a Multiple Assign activity to replace the project name and project description placeholder with the appropriate names:

Make sure that the UiPath.Web.Activities library is installed before trying to add the HTTP Request Activity:

The properties of the HTTP Request activity should include the following:

Only the Personal Access Token is used for authentication, as the password, when using Basic Authentication; this means that the username can be assigned to literally anything and it will still authenticate, as long as the personal access token is valid.
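
Under the hood, Basic Authentication simply Base64-encodes ‘username:password’ into an Authorization header. If you ever need to build that header yourself (for example from an Invoke Code activity or another tool), a rough C# sketch could look like this; the empty username and placeholder token are assumptions:

// Sketch: manually building a Basic auth header from an Azure DevOps Personal Access Token
string personalAccessToken = "<your personal access token>";
string headerValue = "Basic " + Convert.ToBase64String(
    System.Text.Encoding.ASCII.GetBytes(":" + personalAccessToken));
// The header would then be sent as: Authorization: <headerValue>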

Add the following headers:

Now add a ‘Deserialise JSON’ activity:

The ‘Create Project’ sequence should look like this:

UiPath ‘Create Repository’ Workflow

Create a folder in the UiPath solution folder named Repositories and within the folder, create a ‘CreateRepository.xaml‘ workflow:

Let’s start by adding a ‘Read Text File‘ activity and assigning a few variables:

The NewRepositoryURL above is assigned the value of "https://dev.azure.com/" + Organisation + "/" + ProjectID + "/_apis/git/repositories?api-version=6.0". The Organisation variable should correspond to your organisation name in Azure DevOps and the ProjectID should correspond to the ID of the project in Azure DevOps.

Next, add a Multiple Assign activity to replace the repository name and project ID placeholder with the appropriate names:

Make sure that the UiPath.Web.Activities library is installed before trying to add the HTTP Request Activity:

The properties of the HTTP Request activity should include the following:

Only the Personal Access Token is used for authentication, as the password, when using Basic Authentication; this means that the username can be assigned to literally anything and it will still authenticate, as long as the personal access token is valid.

Add the following headers:

Now add a ‘Deserialise JSON’ activity:

The ‘Create Repository’ sequence should look like this:

The process of creating the Work Items workflows is very similar to the two mentioned above. Therefore, I’ll go through the next two workflows in a little less detail.

UiPath ‘Create Work Item’ Workflow

Create a folder in the UiPath solution folder named Work Items and within the folder, create a ‘CreateWorkItem.xaml‘ workflow:

Let’s start by adding a ‘Read Text File‘ activity and assigning a few variables:

The NewWorkItemURL above is assigned the value of "https://dev.azure.com/" + Organisation + "/" + ProjectID + "/_apis/wit/workitems/$" + Type + "?api-version=6.0". The Organisation variable should correspond to your organisation name in Azure DevOps and the ProjectID should correspond to the ID of the project in Azure DevOps.

Next, add a Multiple Assign activity to replace the story name, story state and project name placeholder with the appropriate names:

Similar to the Create Project and Create Repository sections above, add a HTTP request activity with the following properties:

Please note that the body format is application/json-patch+json.

Only the Personal Access Token is used for authentication, as the password, when using Basic Authentication; this means that the username can be assigned to literally anything and it will still authenticate, as long as the personal access token is valid.

Add the following headers:

Now add a ‘Deserialise JSON’ activity:

The ‘Create Work Items’ sequence should look like this:

UiPath ‘Link Work Item’ Workflow

Create a workflow named ‘LinkWorkItem.xaml‘ in the folder named Work Items, then add a ‘Read Text File‘ activity and assign a few variables:

The LinkWorkItemURL above is assigned the value of "https://dev.azure.com/" + Organisation + "/" + ProjectID + "/_apis/wit/workitems/" + TaskID + "?api-version=5.0". The Organisation variable should correspond to your organisation name in Azure DevOps and the ProjectID should correspond to the ID of the project in Azure DevOps. The TaskID should be the ID of the task you would like to link to the story.

Next, add a Multiple Assign activity to replace the story name, story state and organisation name placeholder with the appropriate names:

Similar to the Create Project and Create Repository sections above, add a HTTP request activity with the following properties:

Please note that the body format is application/json-patch+json.

Only the Personal Access Token is used for authentication, as the password, when using Basic Authentication; this means that the username can be assigned to literally anything and it will still authenticate, as long as the personal access token is valid.

Add the following headers:

Now add a ‘Deserialise JSON’ activity:

The ‘Link Work Items’ sequence should look like this:

The above solution is available on GitHub.

If you have any questions, issues or feedback, drop a comment below or reach out to jacqui.jm77@gmail.com

Featured

[RPA] Using SQL User-Defined Table Types in UiPath as a Component of a Data Exchange Architecture

Level of Difficulty: Intermediate – Senior.

This post will assist in understanding what SQL user-defined table types (UDTTs) are and how they can be used within UiPath as a part of a Data Exchange Architecture (DEA). Following explanations of UDTTs and DEA approaches, we will do a practical implementation walkthrough.

User-Defined Table Types (UDTTs)

User-defined table types are table schema definitions that may be created by users in order to store temporary data. In essence, this means that you can send a table through to a stored procedure as a parameter, the same way you would an integer or string value, without having to write the data to a table first. The benefit of this is that you don’t need to store data on a permanent medium in order to perform SQL queries on the data (like joins or upsert merges, for example).

Data Exchange Architecture (DEA)

A Data Exchange Architecture component, in this context, refers to the use of Extract Transform Load (ETL) or Extract Load Transform (ELT) in order to maintain a constant and usable state of reliable data at any point in time. This can be implemented in many different ways, depending on the existing data architecture within your environment.

One way of implementing ETL would be to create three database schemas, each responsible for its own phase of ETL:

  1. Landing: (Extract)
    • All raw data is landed in the relevant tables within this schema. No changes are to be made to this data (possibly done through a truncate and load process)
  2. Staging: (Transform)
    • Data from landing tables are used to populate staging tables and data is cleaned up (possibly done either using truncate and load or using upsert merge statements)
  3. Loading: (Load)
    • Data from staging tables are transformed according to the business rules and populated into the tables within this schema (usually done using upsert merge statements)

Another common approach is to combine Landing and Staging into one schema (usually Staging). The idea behind this approach is that the Staging tables should be the intermediate storage between data sources and data targets.

Use-Case

Let’s work through a practical example together. We have two datasets: an Excel file (a few hundred records large) containing Azure DevOps project information, and a SQL table (a few thousand records large) containing additional project information, like who the project manager is, what the project budget is, etc. For the sake of context, not all of the projects are recorded in the Azure DevOps dataset, as not all projects are currently active; the SQL table contains historical project data too.

We DO want to:

We would like to join the data that exists in Excel with the data in SQL in order to enrich the dataset needed in our UiPath process.

We DO NOT want to:

We do not want to query the Excel and SQL data into UiPath datatables and do the joins in UiPath, due to the way that the UiPath “Join Datatables” activity iterates through each record in each datatable to find the join. Considering the amount of data that we are working with, the process would perform poorly in terms of time. SQL is really good at processing data quickly, so it would be a good idea to pass the Excel data into SQL, but we don’t want to create a permanent table for it if we don’t need to.

So we…

We do stuff… In SQL and in UiPath.

In SQL

We use UDTTs and stored procedures. We can create a table schema definition for the Excel file in SQL (ID, name, description and URL) as a type. The UDTT create statement would look something like:

CREATE TYPE [Project_UDTT] AS TABLE 
(
	[id] [uniqueidentifier] NOT NULL,
	[name] [varchar](255) NOT NULL,
	[description] [varchar](max) NULL,
	[url] [varchar](max) NOT NULL
);
GO

Next, we would need a stored procedure to pass the UDTT through and retrieve the joined results, which would look something like:

CREATE PROCEDURE [JoinProjectInformation_SP]
(
	@Projects [Project_UDTT] READONLY
)
AS
BEGIN
	SET NOCOUNT ON;

	SELECT		p.* -- all columns in @Projects
				,pi.* -- all columns in ProjectInformation

	FROM		@Projects p
	JOIN		ProjectInformation pi
	ON			p.id = pi.ProjectID
END;
GO

Now we go do more cool stuff in UiPath.

In UiPath

We will need to add an Execute Query activity (Execute Non-Query would work more or less the same, if that’s the sort of thing that you’re into) to the desired sequence. If you can’t find the Execute Query activity, make sure that you have the UiPath.Database.Activities package installed.

User-Defined Table Types (UDTTs) in UiPath

First, we need to add the stored procedure name (“JoinProjectInformation_SP”) to the SQL input argument, as well as change the CommandType to StoredProcedure. Assign the output DataTable argument to something along the lines of dtJoinedProjects.

Next, we need to add the results (of DataTable type) from “Get Projects” as a parameter to the stored procedure. We do that by configuring Parameters:

Ensure that the Parameter Name is the same as it is declared in the SQL stored procedure but without the “@” prefix. Once you’re done, select OK and run the workflow.
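
For context, and in case you ever need to make the same call outside of UiPath, a minimal C# sketch of passing a DataTable to the stored procedure as a UDTT parameter is shown below. The connection string is a placeholder, the System.Data.SqlClient package is assumed, and the class name is just for illustration:

using System.Data;
using System.Data.SqlClient;

class ProjectRepository
{
    // Sketch: pass a DataTable to JoinProjectInformation_SP as the Project_UDTT parameter
    public DataTable JoinProjectInformation(DataTable projects, string connectionString)
    {
        using var connection = new SqlConnection(connectionString);
        using var command = new SqlCommand("JoinProjectInformation_SP", connection)
        {
            CommandType = CommandType.StoredProcedure
        };

        // The parameter name matches the stored procedure definition; the TypeName matches the UDTT
        var parameter = command.Parameters.AddWithValue("@Projects", projects);
        parameter.SqlDbType = SqlDbType.Structured;
        parameter.TypeName = "dbo.Project_UDTT";

        // Execute and load the joined results into a new DataTable
        var results = new DataTable();
        connection.Open();
        using var reader = command.ExecuteReader();
        results.Load(reader);
        return results;
    }
}

This mirrors what the Execute Query activity does for you once the Parameters are configured as described above.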

Did this work for you? Is there something you’d like to add? Drop a comment below or reach out to me jacqui.jm77@gmail.com.

Featured

[Automation] Monitoring Gmail Inbox Using Python

Level of Difficulty: Beginner – Senior.

Many automation solutions make use of the functionality provided by mail services, as mail serves as an important element that allows for communication between humans and the automation process. There are many benefits provided by using Google Mail (Gmail), one of which is cost. For that reason, this post will focus on providing a step-by-step guide on how to monitor emails coming into your Gmail inbox, with the ability to monitor specific labels.

It is also important to note that there are tools and platforms that make it much easier to perform these actions but as developers, we know that life cannot always be “easy”. This post aims at empowering the “not easy” solutions.

What are the steps?

The steps that we will be following are:

  1. Ensure that your Gmail account is configured correctly
  2. Import the libraries
  3. Gather variable values
    1. Email address
    2. Password
    3. Label
    4. Search Criteria
  4. Define methods
    1. Get Body
    2. Search
    3. Get Emails
    4. Authenticate
  5. Authenticate
  6. Extract mails
  7. Extract relevant information from results

Deep Dive

Let’s dive deeper into the steps listed above.

Ensure that your Gmail account is configured correctly

My first few attempts at this left me pulling my hair out with an “Invalid Credentials” error. Upon much Googling and further investigation, I found that it is caused by a Google Account setting. This is quite easily fixable.

In order to interact with my account, I had to allow less secure apps (you can access that setting here):

Allowing less secure apps to communicate with my Gmail account

If you are still experiencing problems, here is a more extensive list of troubleshooting tips.

Import the libraries

Now let’s move over to Python and start scripting!

First, let’s import the libraries that we’ll need:

import imaplib, email

Gather variable values

In order to access the mails from the Gmail account we will need to know the answers to the following questions:

  1. Which Google account (or email address) do we want to monitor?
  2. What is the password to the above account?
  3. What label do we want to monitor?
  4. What is the search criteria?

The best way to find out is to ask and luckily we can do that through code:

imap_url = 'imap.gmail.com' # This is static. We don't ask the questions we know the answer to

user = input("Please enter your email address: ")
password = input("Please enter your password: ")
label = input("Please enter the label that you'd like to search: ") # Example: Inbox or Social
search_criteria = input("Please enter the subject search criteria: ")
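
As a small optional tweak (not part of the original script), the standard-library getpass module can replace the password line above so that the password isn’t echoed to the terminal:

from getpass import getpass

# Optional: prompt for the password without echoing it to the screen
password = getpass("Please enter your password: ")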

Define Methods

It becomes easier to break some of the reusable elements up into methods (or functions) so that larger implementations of this solution can scale easily. Stephen Covey teaches us that starting with the end in mind is one of the habits of highly effective people – some might even call it proactive design thinking. The point is that it is good to think ahead when developing a solution.

Enough rambling, here are the functions:

# Retrieves email content
def get_body(message): 
    if message.is_multipart(): 
        return get_body(message.get_payload(0)) 
    else: 
        return message.get_payload(None, True) 
    
# Search mailbox (or label) for a key value pair
def search(key, value, con):  
    result, data = con.search(None, key, '"{}"'.format(value)) 
    return data 
  
# Retrieve the list of emails that meet the search criteria
def get_emails(result_bytes): 
    messages = [] # all the email data is collected into a list
    for num in result_bytes[0].split(): 
        # uses the global connection (con) returned by authenticate below
        typ, data = con.fetch(num, '(RFC822)') 
        messages.append(data) 
  
    return messages 

# Authenticate
def authenticate(imap_url, user, password, label):
    
    # SSL connection with Gmail 
    con = imaplib.IMAP4_SSL(imap_url)  

    # Authenticate the user through login
    con.login(user, password)  

    # Select the label to search for mails under
    con.select(label)

    # Return the connection so that search and get_emails can reuse it
    return con

Authenticate

Before we can extract mails, we first need to call the authenticate method that we just created, pass in the answers to the questions we asked further up, and keep hold of the connection it returns:

con = authenticate(imap_url, user, password, label)

Extract mails

Next, we need to call the search and get_emails methods to extract the mails:

# Retrieve mails
search_results = search('Subject', search_criteria, con)
messages = get_emails(search_results) 
  
# Uncomment to view the mail results
#print(messages)

Extract relevant information from results

Now, let’s work through the results and extract the subject using string manipulation. Feel free to add a “print(subject)” statement underneath the assignment of “subject” for debugging purposes:

for message in messages[::-1]:  
        
    for content in message: 
                       
        if type(content) is tuple:  
  
            # Encoding set as utf-8 
            decoded_content = str(content[1], 'utf-8')  
            data = str(decoded_content) 
            
            # Extracting the subject from the mail content
            subject = data.split('Subject: ')[1].split('Mime-Version')[0]
                                      
            # Handling unicode encoding errors 
            try:  
                indexstart = data.find("ltr") 
                
                data2 = data[indexstart + 5: len(data)] 
                
                indexend = data2.find("</div>") 
                
                # Uncomment to see what the content looks like
                #print(data2[0: indexend]) 
  
            except UnicodeEncodeError as e: 
                pass
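
As an aside, the script already imports the standard-library email module (at the top, alongside imaplib), so a more robust optional alternative to the string splitting above is to parse the raw message bytes and read the Subject header directly – a minimal sketch:

# Optional alternative: parse the raw bytes with the email module
for message in messages[::-1]:

    for content in message:

        if type(content) is tuple:

            parsed = email.message_from_bytes(content[1])
            print(parsed["Subject"])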


Did this work for you? Feel free to drop a comment below or reach out to me through email, jacqui.jm77@gmail.com.

The full Python script is available on GitHub here.

Featured

[Automation] Working With The UiPath Automation Hub API And Postman (Then Getting The Code)

Level of Difficulty: Beginner – Senior.

Does your organisation use Automation Hub to capture and consolidate automation ideas and collateral? Have you ever wanted to interact with the data you have in Automation Hub in an automated manner? Well UiPath makes that easier now with the Automation Hub API – no more front-end automations needed to access your data.

Here’s how it works. If you’re looking for specific queries that aren’t covered in this blog post, check out this Postman collection.

What are the steps?

The steps that we will be following are:

  1. Identifying the URL
  2. Compiling and populating the bearer token
  3. Adding the necessary headers
    1. x-ah-openapi-auth = openapi-token
    2. x-ah-openapi-app-key (only if you’ve assigned the app key when generating the token)
  4. Grabbing the results
  5. Generating the code for reuse in an automation

Deep Dive

Let’s dive deeper into the steps listed above.

Identify the URL

In order to identify which URL (or API Call) would achieve the task at hand, take a look at the different API Calls available here.

For the sake of this post, we are going to list all automations in our instance. Thus, we will be using the following API call:

https://automation-hub.uipath.com/api/v1/openapi/automations

Pop this URL into Postman:

Compile and Populate The Bearer Token

First things first: make sure Open API is set up on your tenant. You can do that as follows:

  1. Navigate to the Admin Console within Automation Hub
  2. Hover over Platform Setup
  3. Select Open API

Next, you’re going to want to hit Generate Token and enter the necessary details:

You’re also going to want to take note of your tenant ID because that’s what we are going to use to compile the Bearer Token:

The Bearer Token is made up of your tenant ID and your generated token, concatenated with a “/” separator.

An example is: 
46b6c342-3ab4-11e9-9c19-37a5980a67e8/ce91aa04-fc61-49e9-bec5-cb237efb4bda where:

  • 46b6c342-3ab4-11e9-9c19-37a5980a67e8 is the unique Tenant ID
  • ce91aa04-fc61-49e9-bec5-cb237efb4bda is the specific token generated for the user account

Add the Bearer Token under the “Authorization” tab in Postman with the Type set to “Bearer Token“:

Add Headers

If you set up an app key as an extra security measure when you generated the token, you’ll need to add “x-ah-openapi-app-key” to your headers and assign it the value you created.

Regardless of whether you populated the app key or not, you’ll need to add “x-ah-openapi-auth” to your headers and assign it to “openapi-token“:

Grab The Response

Once you’ve hit send in Postman, you make a sacrifice to the universe and wait for your results which should look something liiiiikkkkeeee this:

Generate Code For Automation

Now that you’re getting results, you’ll most likely want to get this automated. Well then let’s get the code (for whatever language we want) from Postman.

Click on code in the top(ish) right-hand corner in Postman:

Select your language then copy and paste the code:
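
If Python happens to be your language of choice, the generated code boils down to something like the sketch below. The tenant ID and token are the placeholder values from the example above – substitute your own:

import requests

# Placeholder values - replace with your own tenant ID and generated token
tenant_id = "46b6c342-3ab4-11e9-9c19-37a5980a67e8"
token = "ce91aa04-fc61-49e9-bec5-cb237efb4bda"

url = "https://automation-hub.uipath.com/api/v1/openapi/automations"

headers = {
    # The Bearer Token is the tenant ID and token concatenated with "/"
    "Authorization": "Bearer {}/{}".format(tenant_id, token),
    "x-ah-openapi-auth": "openapi-token",
    # "x-ah-openapi-app-key": "<app key>",  # only if you assigned one when generating the token
}

response = requests.get(url, headers=headers)
response.raise_for_status()

# The list of automations comes back as JSON
print(response.json())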

Did you get stuck anywhere? Was this helpful?

If you have any questions, issues or feedback, drop a comment below or reach out to jacqui.jm77@gmail.com

Featured

[Data Visualisation] Creating a Hamburger Menu for Filters in Power BI

Level of Difficulty: Beginner – Senior.

Often we create Power BI reports that require some sort of filtering mechanism, but filters take up a lot of real estate that could be used for other visuals instead. What if we could hide and show a filter pane using a “hamburger” menu mechanism?

We can and here’s how.

What are the steps?

The steps to using a hamburger menu to hide and show a filter pane in Power BI are:

  1. Create the report layout and set up the filters
  2. Add two hamburger menu images
  3. Create a “hide” bookmark
  4. Create a “show” bookmark
  5. Add the bookmarks to the actions of each image
  6. Align the two images

Create the report layout and set up the filters

To create the filters panel, add a rectangle (Home > Insert > Shapes) before adding and aligning the filter visuals on top of the rectangle.

An example would be something like this:

Add two hamburger menu images

Next we would want to add two hamburger menu icons (preferably SVG or PNG images with transparent backgrounds) next to one another (Home > Insert > Image).

Create a hide bookmark

In order to create a bookmark, you would need to ensure that the bookmark pane and selection pane are visible. You can do this by navigating to the View tab and ensuring that the Bookmarks Pane and the Selection Pane are both checked. This should allow you to see the following:

To create a hide bookmark you would need to hide all of the filters, the rectangle block and one of the hamburger menu images using the Selection Pane. To hide a visual (or an element), you can either click on the eye icon next to the visual in the selection pane or you can click on the element on the report and select hide on the selection pane.

Once all necessary visuals have been hidden, you should see this:

Next, you are going to want to bookmark the view that you are currently looking at by selecting “Add” in the Bookmarks Pane. This will result in “Bookmark 1” being created:

You can then rename the bookmark to “Hide Filters” by double clicking on the bookmark or by selecting the three dots next to the bookmark name (on the right) and selecting Rename:

Create a show bookmark

To create a “Show” bookmark, we are going to ensure that all of our filters are visible again:

Next we are going to hide the hamburger image that was visible in the “Hide” bookmark:

Then select “Add” in the Bookmark Pane and rename the bookmark to “Show Filters“:

Adding the bookmarks to the actions of each image

Now we need to add these bookmarks as actions to the correct hamburger image. Let’s start with the image that’s still visible. When we click that image, we are expecting our filters to Hide. So we want to link this image to the “Hide Filters” bookmark.

To do this, click on the image, navigate to the Format Image pane, ensure that Action is On (if it is Off, click on the round dot and it will turn on), expand Action, change Type to “Bookmark” and select the “Hide Filters” bookmark:

If you hover over the visible image, there should be a tool tip that appears:

If you hold Ctrl and click on the image, it will apply the bookmark and the filters (along with their hamburger menu image) should disappear.

Now let’s repeat these steps for the other hamburger image (the one that is visible once the filters are hidden) and assign the “Show Filters” bookmark to its Action:

Now you can lay the one hamburger image on top of the other so that they appear to be one image (you may need both images to be visible for this). Reorganise your report layout and play around with the other fancy things that you can do with bookmarks!

Just a note: it is possible to keep filter values between bookmarks. It requires changing the bookmark properties – for this scenario, the Data option would need to be deselected:

This report is available on GitHub, here.

If you got stuck or have feedback, please feel free to drop a comment below or reach out to jacqui.jm77@gmail.com for help!

Featured

[RPA] Uploading Files to Azure Blob Storage in UiPath Processes

Level of Difficulty: Intermediate – Senior.

This post will assist in using Azure Blob Storage to store files used in UiPath processes.

A funny thing happened the other day… Jackson broke the news to the team that his laptop had been stolen. After some panic, he let everyone know that at least the latest solution was checked into source control (and is available on the Orchestrator). That brought some relief, until Barbra asked whether the lookup files were in source control too. With a worried look on his face, Jackson said, “No… Neither are any of the reference files that are referenced in the queue item.”

After much excitement, Jeremy (the boss man) mandated that a central storage repository be implemented so that this doesn’t happen again and so that local development isn’t relied upon. After some investigation, the team decided that Azure Storage would be a great way to go as it fits in with the general architectural direction the company is heading in. Here’s the question though: how?

What are the steps?

The steps that we will be following are:

  1. Create an App Registration
  2. Assign the correct API permissions
  3. Gather the necessary info
    1. Application ID
    2. Tenant ID
    3. Client Secret
  4. Create Azure Storage Resource
  5. Use the created resources in UiPath
    1. Add Azure Scope
    2. Add Get Storage Account Activity
    3. Add Get Storage Account Key Activity
    4. Add Get Blob Container
    5. Add Upload Blob

Deep Dive

Let’s dive deeper into the steps listed above.

Create an App Registration

Once you have logged into Azure (https://portal.azure.com/), the first thing you want to do is create an app in App Registrations, which you can do as follows:

  1. Go to Azure Active Directory
  2. Go to App Registrations
  3. Click Register an application
  4. Provide a name and select authentication
  5. Hit Register

Next, you want to add the correct API permissions.

Assigning The Correct API Permissions

You will need to do the following to assign the correct API permissions:

  1. Inside the newly created App Registration, select API permissions
  2. Select Add a permission
  3. Add Azure Storage, user_impersonation

Now that you’ve got that set up, you want to get the Application ID, Tenant ID and Client Secret.

Gather The Necessary Info

In order to use the app that has just been registered, you’ll need to collect certain info. The info can be accessed as follows:

  1. Inside the newly created App Registration, select Overview
  2. Copy the Application (client) ID and the Directory (tenant) ID as they will be needed later
  3. Click on Certificates & secrets and generate a new client secret
  4. Copy the secret and paste it somewhere for when you need it
  5. Click on Overview in the Azure Active Directory and copy the Tenant ID
  6. The Subscription ID will be visible in the Overview section of the Storage Account (once it has been created)

Now you should be ready to create your storage account.

Creating Azure Storage Resource

Hit Home in the top left hand corner and navigate to Resources.

  1. Hit new resource
  2. Search for Storage account
  3. Click Storage account – blob, file, table, queue
  4. Click Create
  5. Select Subscription and resource group (create one if necessary)
  6. Select storage account name, location, account kind (BlobStorage) and Replication (Read-access geo-redundant storage (RA-GRS))

Create Container

Great! Storage account created – now to create a container within the storage account:

  1. Open newly created Storage Account resource
  2. Click on Containers
  3. Click Add Container
  4. Give it a name and access level

Storage account created, container created, but how does the App Registration tie in? Well, we still need to add it, so let’s do that.

Give the App Registration access to the Storage Account

In order to give the Registered App access to the newly created Storage Account, do the following:

  1. Go to the Storage Account
  2. Select Access Control (IAM)
  3. Select Add, Role Assignment
  4. Assign Role to Storage Blob Data Contributor
  5. Search for the App Registration in the Select field
  6. Now add another Role and assign the App Registration to the Contributor Role

Now that we have everything set up in Azure, let’s get started in UiPath.

Using Azure Storage in UiPath

In order to use the resources created on Azure, the following needs to be done:

  1. Create a new process and give it a name
  2. Add the UiPath.Azure.Activities library
  3. Add the Azure Scope to the sequence
  4. Enter the following details gathered from above:
    1. ClientID
    2. ClientSecret
    3. SubscriptionID
    4. TenantID
  5. Add Get Storage Account Activity inside of the Azure Scope, with the following properties
    1. StorageAccountName
    2. ResourceGroupName
    3. Output StorageAccount to a variable
  6. Add Get Storage Account Key Activity below, with the following properties
    1. StorageAccount
    2. Output StorageAccountKey to a variable
  7. Add Get Blob Container with the following properties
    1. ContainerName
    2. StorageAccount
    3. StorageAccountKey
    4. Output BlobContainer to a variable
  8. Add Upload Blob from File with the following properties
    1. BlobContainer
    2. BlobName
    3. ContentType
    4. FilePath
    5. Type
    6. Output Blob to a variable
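
As a side note (not part of the UiPath solution itself), you can sanity-check the app registration, client secret and container with a few lines of Python using the azure-identity and azure-storage-blob packages – a minimal sketch with placeholder values:

from azure.identity import ClientSecretCredential
from azure.storage.blob import BlobServiceClient

# Placeholder values gathered in the steps above
credential = ClientSecretCredential(
    tenant_id="<tenant id>",
    client_id="<application (client) id>",
    client_secret="<client secret>",
)

# Connect to the storage account and the container created earlier
service = BlobServiceClient(
    account_url="https://<storageaccountname>.blob.core.windows.net",
    credential=credential,
)
container = service.get_container_client("<container name>")

# Upload a local file as a blob
with open("LookupFile.xlsx", "rb") as data:
    container.upload_blob(name="LookupFile.xlsx", data=data, overwrite=True)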

Now Jeremy, Jackson and the rest of the team can go about their business without worrying about stolen laptops and lost files.

The above solution is available on GitHub.

If you have any questions, issues or feedback, drop a comment below or reach out to jacqui.jm77@gmail.com

[UiPath] Creating a UiPath Data Services Entity Record From UiPath Studio

Level of Difficulty: Junior – Senior.

I recently started playing with integrating UiPath Apps, UiPath Data Services and UiPath Storage Buckets into my automations. It has been a tremendous amount of fun to see how it all gels together. I had a few challenges creating records in UiPath Data Services entities from UiPath Studio, but I managed to figure it out and it turns out to be quite simple.

Enabling UiPath Data Services

If you cannot see the Data Services icon in the navigation menu on the left-hand side of the screen, ensure that Data Services is enabled on your tenant by navigating to Admin, expanding your tenant and selecting Data Services.

Create the Entity in UiPath Data Services

Navigate to the UiPath Data Services through the UiPath Automation Cloud and Create New Entity:

Complete the relevant fields before clicking Save at the bottom right hand side of the page:

Click on the entity and add the appropriate fields:

Now navigate to UiPath Studio to connect the data services and start populating the entity.

Populating the Entity From UiPath Studio

First things first… Install the Data Services library by clicking on Manage Packages in the taskbar. Search for the library, install and save.

Before we can populate the entity, we first need to connect the entity to Studio. You can do this by clicking on Manage Entities in the taskbar – check the entity you want to add and click Save:

The entity will appear under Entities in your Project Explorer:

Create a new variable in the Variables pane named ‘InstrumentEntity’. You will need to reassign the variable type to the Instrument entity. Click on Browse for more types in the variable type drop down:

Assign the default value of the newly created variable, InstrumentEntity, to a new instance of the Instrument entity:

Assign the attributes of the entity to values individually:

Drag the Create Entity Record activity below the assignment and configure the properties to create the record in the entity:

Run the file and test that the process populates the entity before further customising your process:

The sample code is available in this GitHub repo.

[UiPath] Task Capture Good Practices

Level of Difficulty: Junior – Senior.

UiPath Task Capture is a tool that exists within the UiPath product offering. It has been tagged as a “Prepare Automation” type tool which assists in mapping out a process in terms of process steps, screenshots, actions and descriptions. The UiPath Task Capture Document can be exported as a diagram image, a Process Definition Document (PDD) or a UiPath Studio xaml solution file.

Getting Started With UiPath Task Capture

Before getting started with UiPath Task Capture, take a quick browse through the UiPath Task Capture docs. If you haven’t yet done so, download UiPath Task Capture.

So it’s clearly a great tool that could be quite beneficial to process understanding and automation preparation. But what are some of the good practices that can be implemented?

UiPath Task Capture Good Practices

There are a few things that you should focus on when working on Task Capture documents.

Add swimlanes

Swim lanes are a really great way to separate out ownership, accountability and/or responsibility. Whether it be systems, people, processes or even technology, swim lanes can be used to show separation in one overall process.

Implementing swim lanes is a great way to visualise where the most complexity in a process occurs. It can assist in dividing processes up into phases and identifying where change management should occur.

To add a swim lane, hit Edit > Insert swimlane and then select either Horizontal or Vertical.

Insert swimlanes in UiPath Task Capture

Define business exceptions

Business rules and business exceptions are a very important part of a process. They are crucial in determining the success of a process and require a lot of consideration when designing and architecting a TO BE process. For this reason, I like to denote process steps that relate to business exceptions by adding “BE:” as a prefix to the title (alternatively you could use BR as well, just remain consistent with your choice):

Denoting business exceptions in UiPath Task Capture using “BE: “

Pay attention to titles and descriptions

It is important to remain cognisant of describing what a process step is trying to achieve through appropriate naming of an action’s title. It is a common problem when capturing a process that the titles are not very descriptive or intuitive. For this reason, it is important to go through each action, critically, once you’ve recorded the process and double check that the title and description accurately represent the action that is being executed. The naming has a knock-on effect on the UiPath Studio solution layout.

Add titles and descriptions to UiPath Task Capture actions

Group steps logically

It is important to think about how steps should be grouped when building out the flow in Task Capture. The layout of the flow created in the Task Capture document becomes the layout created in UiPath Studio when opening the xaml file. For this reason, it is very important to remain conscious and attentive to how that flow could be grouped, possibly into reusable components.

Once the solution is exported from Task Capture, a design pattern may be implemented and some additional work could be done to fit the layout into existing templates. By focusing on the smaller aspects, at a process mapping level, you are avoiding significant rework from a PDD level and a xaml solution level.

Pay attention to the PDD template

UiPath Task Capture allows you to choose a PDD template which might have your company or organisation’s specific branding on. In order to create a custom template, download the existing UiPath or Automation Hub PDD template, adjust the document based on your branding themes and styles then upload the customised PDD to UiPath Task Capture when Exporting to Word. Select the template and press Export.

Exporting a UiPath Task Capture document to Word

Pay attention to UiPath Studio solution templates

Many organisations and development teams put together a UiPath Studio solution template that meets their organisational standards, governance and policies. The export from Task Capture would need to be applied within that template solution so keep it in mind when you are working on the Task Capture document.

Do you have any other good practices you’d like to add? Drop a comment below.

[UiPath] Uploading Files to UiPath Storage Buckets

Level of Difficulty: Junior – Senior.

I recently started playing with integrating UiPath Apps, UiPath Data Services and UiPath Storage Buckets into my automations. It has been a tremendous amount of fun to see how it all gels together.

One of my biggest frustrations, however, was getting my file uploads to the storage buckets to work. I kept getting a “403 Forbidden” error regardless of how many times I changed my properties. After trawling through the forums, I found that it is an access issue: my user on Orchestrator did not have any storage bucket permissions assigned to it. I wrote this post which explains how to create a new storage bucket role and assign it to a user.

Once the permissions and access have been sorted, we can get to uploading a file to storage buckets and here’s how.

Create the Storage Bucket in UiPath Orchestrator

Remember, storage buckets are folder based, so you’ll need to navigate to the appropriate Folder that you’d like to create the storage bucket in before navigating to Storage Buckets and clicking Add storage bucket:

You have the option to create a storage bucket on Orchestrator (the option I’m going with for this example) or to connect to Azure or AWS storage. Provide your storage bucket information and click Add:

Upload Files From UiPath Studio

Now navigate to UiPath Studio where you can create a new process or library to store the file upload. If this is a function you’ll use often, across automations, consider creating a library.

Don’t forget to work through the UiPath docs as well; they have some really useful info about what each of the properties represents or expects.

This is literally all you need to configure:

Give it a whirl and drop a comment below if you get stuck. This basic template is available on this GitHub repo.

[Automation] GitHub (and Enterprise) API Versus Azure DevOps API

Level of Difficulty: Intermediate – Senior.

There are many really good and useful resources available on how to use the GitHub API. A lot of work has gone into integrating Azure DevOps and GitHub into automation projects, and this is done through their APIs. I have found the documentation for certain Azure DevOps API calls to be slightly behind the level of the GitHub documentation, especially pertaining to pull requests.

GitHub Versus Azure DevOps – Structure

The way that projects and repositories are structured differs vastly between the two technologies. Azure DevOps uses a top-down approach where you need to start at an organisational level before creating a project; once you are at a project level, you can create repositories, boards, etc.

GitHub, on the other hand, does not prescribe a starting point. You could create a project first and then a repository, or you could do it the other way around. The GitHub way of doing things allows a lot more flexibility, which is great, but if you want to include it in an automation environment, there needs to be some form of standard operating model around exactly how GitHub ought to be used so that it does not break the process flow – which is the very essence of a successful automation.

GitHub Versus Azure DevOps – Cost

There are many different cost models available, which may very well change in the future. GitHub allows developers to use the platform at no cost, with very few limits, while larger organisations are offered an Enterprise version of the platform with more features catering for large-scale needs.

The Azure DevOps model looks slightly different, with less “free” and “limitless” access, but with good reason: the Azure DevOps offering has a lot of functionality built into it, which also motivates the different licensing models available for it. It does not, however, do much to encourage developers to use the service extensively in a personal capacity, not when there are more freely available platforms around. Azure DevOps is quite well suited to larger organisations.

GitHub Versus Azure DevOps – The APIs

There are multiple libraries available for implementing and integrating Azure DevOps into UiPath and other automation projects. One of mine is available here. There are currently no UiPath libraries that explore the deeper capabilities of the GitHub API. I therefore decided to create my own Postman collection, which I hope to turn into more useful UiPath implementations.
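
To make the comparison concrete, here is a minimal Python sketch (not from the collection itself) that lists pull requests from both APIs. The owner, organisation, project, repository and token values are placeholders:

import requests

# Placeholder credentials
github_token = "<github personal access token>"
devops_pat = "<azure devops personal access token>"

# GitHub: list open pull requests on a repository
github_response = requests.get(
    "https://api.github.com/repos/<owner>/<repo>/pulls",
    headers={
        "Authorization": "token {}".format(github_token),
        "Accept": "application/vnd.github+json",
    },
)
print(github_response.json())

# Azure DevOps: list active pull requests on a repository
devops_response = requests.get(
    "https://dev.azure.com/<organisation>/<project>/_apis/git/repositories/<repository>/pullrequests",
    params={"api-version": "6.0"},
    auth=("", devops_pat),  # the PAT goes in the password field with a blank username
)
print(devops_response.json())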

[Python] Scraping Context (Categories, Topics and Entities) From Websites Using TextRazor and Python

Level of Difficulty: Beginner – Senior.

Say you have a website that you’d like to scrape to find out what information the website represents. Well, the good news is that you can, in Python, using a library named TextRazor. TextRazor allows you to scrape categories, entities and topics from the website. To use the library, you need an API key; thereafter you can adjust the parameters accepted by the library to extract the information you need.

Let’s start by getting an API key.

Get an API Key

To get an API key visit the TextRazor site. You’ll need to create an account to get your API key.

Then click ‘Create Account‘. Once you’ve verified your account, you’ll be able to see your API key, which you should keep safe and warm (as Mary Lambert would say :p).

Now that we are all set up, let’s move over to Python and get started there. The documentation of the library is available here.

Install Library – TextRazor

We need to use the TextRazor library to do what we want to do. If you don’t have it installed, you’ll need to install it using this command:

pip install textrazor

Import Libraries

Import the installed textrazor library before using it.

import textrazor

Initialise Library

You’ll need to initialise the variables needed to make the API work. For this you’ll need the URL of the site you want to scrape as well as the API key.

API_Key = '<apikey>'
URL = 'https://www.azlyrics.com/lyrics/marylambert/shekeepsmewarm.html'
textrazor.api_key = API_Key

Select the extractors, cleanup mode and classifiers you’d like to use:

  • Extractor options include: entities, topics, words, phrases, dependency-trees, relations, entailments, senses, spelling
  • Cleanup mode options include: stripTags and cleanHTML
  • Classifier options include: textrazor_iab, textrazor_iab_content_taxonomy, textrazor_newscodes, textrazor_mediatopics and custom classifier name

client = textrazor.TextRazor(extractors=["entities", "topics"])
client.set_cleanup_mode("cleanHTML")
client.set_classifiers(["textrazor_newscodes"])

Assign the analysis of the site to the variable named response.

response = client.analyze_url(URL)

Get the Desired Results

Print out the entities information:

entities = list(response.entities())
entities.sort(key=lambda x: x.relevance_score, reverse=True)
seen = set()

for entity in entities:

    if entity.id not in seen:

        # print(entity.id, entity.relevance_score, entity.confidence_score, entity.freebase_types)
        seen.add(entity.id)

Print out the topics information:

[ print(topic.json) for topic in response.topics() ]

Print out the categories information:

[ print(category.json) for category in response.categories() ]
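
If you want more than a raw JSON dump, here is a small optional sketch that lists the top five categories by score. It assumes the raw category JSON exposes “label” and “score” keys, which is what the print statement above shows:

# Optional: show only the top five categories by score
top_categories = sorted(response.categories(), key=lambda c: c.json["score"], reverse=True)[:5]

for category in top_categories:
    print(category.json["label"], category.json["score"])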

Do you like what you see? She says ‘people stare, because we look so good together’. A more complete implementation can be found in this GitHub repo.

[Automation] Making a SOAP API Call in Python, Using a Session Key

Level of Difficulty: Beginner – Senior.

Let’s say you’re working on a system that needs to query a legacy SOAP API. Although SOAP APIs are becoming more scarce as REST and Open Data APIs take the world by storm, the reality is that SOAP APIs still exist and are still used. In most SOAP implementations, access is managed using sessions, obtained through a session key. For this post, we will be using the ‘zeep‘ library to make SOAP requests using session keys. We are then going to write the output of the SOAP request to a dataframe using ‘pandas‘.

Install Libraries

To make SOAP requests in Python, you’ll need to install zeep. To write to a dataframe, you’ll need to install pandas if you don’t already have it installed.

pip install zeep
pip install pandas

Import Libraries

There are a few libraries that we need to import before we can do anything meaningful.

# we need this to auth
from requests import Session
from requests.auth import HTTPBasicAuth

# we need this for soap stuff
from zeep import Client
from zeep.transports import Transport

# we need some of this stuff for soap stuff, and some of it jus coz!
import pandas as pd
import datetime

import zeep

Assign ‘Config’ Variables

We need three variables in order to ‘configure’ the request specific to what we are trying to access and what we are trying to access it as:

username = "<name>"
password = "<password which should be secured better than this>"
wsdl = "<your wsdl endpoint>"

The only thing truly different above is ‘wsdl‘. If you were using REST, that would be your HTTP endpoint. The URL, if you’d like.

Setup the Session

Before we can make a SOAP request, we need to set up the session we will use to make the request. It is important to close a session once it has been opened to avoid being locked out, which can happen in some instances:

# create the session and auth
session = Session()
session.auth = HTTPBasicAuth(username, password)
client = Client(wsdl, transport=Transport(session=session))

# build the session start header - the exact parameters that SessionStart
# expects come from your API's docs (credentials are assumed here as an example)
SessionStartInputHeader = {"Parameters": {"Username": username, "Password": password}}

# get the session key!
SessionKey = client.service.SessionStart(Input=SessionStartInputHeader)
key = SessionKey.Result.DataRows.DataRow[0].SessionKey

This is probably the point where it’d be a good idea to get a try…except…finally block going.

Build the Request

Okay… Now comes the meaty part of the solution. Let’s make the request in the ‘try‘ block, print the exception in the ‘except‘ block and close the session in the ‘finally‘ block:

try:

    # read the API docs - that's where you'll find out what service session data is expected from you
    servicesessiondata = {
        'Key': key
    }

    # assign header and get ready to get the key
    servicesessioninputheader = {"Parameters": servicesessiondata}
    service_key = client.service.ServiceSessionStart(Input=servicesessioninputheader)
    service_key = service_key.Result.DataRows.DataRow[0].ServiceSessionKey

    # read the API docs - that's where you'll find out what request data is expected from you
    request_data = {
        "Parameters": {
            
        },
        "Header": {'ServiceSessionKey': service_key}}

    # get the SOAP response by calling the function
    response = client.service.FunctionName(Input=request_data)

    # manipulating response
    result = zeep.helpers.serialize_object(response)

    # add response to the dataframe
    df = pd.DataFrame(result['Result']['DataRows']['DataRow'])
    df['LoadDate'] = datetime.datetime.now()
    df['LoadSource'] = "Python Script"

except Exception as e:

    # print exception
    print(e)

finally:

    # terminate the session so that you don't get locked out!
    terminate_key = {"Header": key}
    client.service.SessionEnd(Input=terminate_key)

Now, hopefully, you have your data from the SOAP request you just made.

Any chance this didn’t work for you? Feel free to reach out or leave a comment below. The full solution is available on GitHub.