Featured

[Automation] Working with SQL transactions in UiPath

Level of Difficulty: Beginner – Senior.

When working with databases, using transactions is a very healthy habit to get into. It will save you from those update or delete statements without a where clause that cannot be undone (especially in prod… imagine…). UiPath provides the Start Transaction activity, which claims to do the same as a traditional transaction. In this post, we will compare the functionality of a transaction in SQL with the use of transactions in UiPath.

Basic Principle of Transactions in SQL

A transaction is a SQL concept that allows the developer to test the outcome of a query without applying any changes to the dataset or, at the very least, to undo any changes made to the dataset. A transaction is started with the BEGIN TRAN syntax and ended with either ROLLBACK TRAN (undoing any changes made to the data) or COMMIT TRAN (applying the changes made to the data).

This is often used in SQL to handle errors and exceptions. The idea behind this is that if a query or stored procedure contains any errors during execution, all changes will be rolled back. In a case where there were no errors encountered, the changes to the dataset(s) could be applied ‘safely’.
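Before getting to error handling, here is a minimal sketch of a transaction that is simply rolled back. It uses the AutoForm.ProjectCreation table that features later in this post – any table and update statement of your own will do:

BEGIN TRAN

	select top(5) AuthMethod from AutoForm.ProjectCreation -- state before any changes

	update AutoForm.ProjectCreation set AuthMethod = 1

	select top(5) AuthMethod from AutoForm.ProjectCreation -- state after the update

ROLLBACK TRAN

select top(5) AuthMethod from AutoForm.ProjectCreation -- the update has been undone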

The above query consists of a transaction that is rolled back, meaning that the changes to the data are ‘undone’ or ‘not applied’. The first set of results in the output window shows the previous state of the data (before any changes were made), while the second set shows the results after the update statement has been applied. The rollback is executed after the second select statement, which means that the third result set shows the current state of the data (with the results of the update statement rolled back). If you were to replace ‘ROLLBACK‘ with ‘COMMIT‘, the second and third sets of data would look exactly the same.

So what happens when there’s a statement with a very clear error in it? Like, for example, updating a field with the value of 0 divided by 0 (which is undefined). Well, the query will error out and the update won’t be applied:

What happens when we add a query that is supposed to succeed, like inserting records:

Okay… So what if we have multiple statements and one fails? How do we undo all changes? Well… That’s where the exception handling comes in:
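In T-SQL, that typically looks something like the sketch below (the same pattern appears in the stored procedure further down): if any statement inside the TRY block fails, control jumps to the CATCH block and the whole transaction is rolled back.

BEGIN TRAN

	BEGIN TRY

		-- a statement that succeeds
		delete from AutoForm.ProjectCreation where ProjectName = 'Test'

		-- a statement that fails (0 divided by 0 is undefined)
		update AutoForm.ProjectCreation set AuthMethod = 0/0

		-- only reached if every statement above succeeded
		COMMIT TRAN

	END TRY
	BEGIN CATCH

		-- any statement that did run inside the transaction is undone
		ROLLBACK TRAN

	END CATCH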

What would the results look like if we remove the query with the error? Well… The query would succeed and the changes that result from the queries executed within the transaction would be committed:

So how do we use this in UiPath?

UiPath Implementation of Transactions (Start Transaction)

As per the UiPath docs, the Start Transaction activity within the UiPath.Database.Activities package “connects to a database and features a Sequence which can perform multiple transactions with the database. If UseTransaction is set true, the contained operations are executed in a single transaction, and applied at the end if none of them failed. If UseTransaction is set false, all operations are committed individually. When this activity ends, the connection to the database is closed. This activity returns a DatabaseConnection variable.”

Let’s test this.

Start by creating a new UiPath process which, for the sake of this example, is named TransactionManagement. Ensure that the UiPath.Database.Activities package is installed:

Drag the Start Transaction activity over into the sequence:

Hit Configure Connection, then select Connection Wizard. The Connection Wizard will ask you what data source (or provider) you’d like to use – select Microsoft SQL Server (SqlClient). Add your server name, authentication details and database name.

Ensure that the provider has been selected and is System.Data.SqlClient:

Set the output of Database Connection to a variable; in this instance I named it databaseConnection:

The databaseConnection variable will be passed through to each of the queries that form part of your transaction:

When executed, you will receive the same error:

If you don’t want the error to be thrown, set the ContinueOnError parameter to True:

If you execute the sequence, there won’t be an error thrown. So does this mean that, if an error occurs but isn’t thrown, the preceding actions will be rolled back? Let’s test this by adding the insert query:

If you execute this, no error will be raised but the insert statement will persist – the action will not be rolled back. So does this approach work? Not quite… So how can I get this working in UiPath?

Suggested Approach

A good design practice is to separate responsibilities. If SQL is responsible for manipulating data (which it should be), then the transaction should be executed within SQL and not necessarily in UiPath.

This means you would need to create the Stored Procedure in SQL Server:

CREATE PROC ExecuteUpdate AS

	BEGIN

		SET NOCOUNT ON;
	
		BEGIN TRAN

			BEGIN TRY

				delete from AutoForm.ProjectCreation where ProjectName = 'Test'

				select top(5) AuthMethod from AutoForm.ProjectCreation 

				INSERT INTO [AutoForm].[ProjectCreation]
							([ProjectID]
							,[ProjectName]
							,[ProjectPath]
							,[Server]
							,[Database]
							,[Schema]
							,[Tables]
							,[AuthMethod]
							,[Username]
							,[Password]
							,[Created]
							,[Submitter]
							,[DateSubmitted])
						VALUES
							(newid()
							,'Test'
							,'Test'
							,'Test'
							,'Test'
							,'Test'
							,'Test'
							,'Test'
							,'Test'
							,'Test'
							,0
							,'Test'
							,getdate())

				select top(5) AuthMethod from AutoForm.ProjectCreation

				update AutoForm.ProjectCreation set AuthMethod = 0/0

				select top(5) AuthMethod from AutoForm.ProjectCreation

				COMMIT TRAN;

			END TRY 
			BEGIN CATCH
		
				ROLLBACK TRAN;

			END CATCH
		
			select top(5) AuthMethod from AutoForm.ProjectCreation

	END;



Call the stored procedure in UiPath:

Change the Command Type to Stored Procedure and enter ONLY the Stored Procedure name into the Sql parameter.

Did this work for you? Drop a comment below if you’ve faced any issues.

Featured

[Business Intelligence] Connecting a Google Sheet to Power BI as a Data Source

Level of Difficulty: Beginner – Senior.

The market for data collection, visualisation and analysis tools spans across many different providers on many different platforms and in some cases, the best way to become exposed to the variety of tools is by mixing and matching between them. So why not visualise a Google Sheet on Power BI? I learnt quite a bit about connecting Power BI to different sources and I experienced some of the power that Google Sheets offer through this solution. Want to try it out for yourself? Here’s how.

What are the steps?

The steps to connecting a Google Sheet to Power BI as a data source are:

  1. Create the Google Sheet Dataset in Google Drive
  2. Change the Permissions
  3. Publish the Google Sheet to the Web
  4. Connect the Dataset to Power BI
  5. Select the Applicable Dataset

Let’s deep dive into the above steps.

Create Google Sheet Dataset in Google Drive

Go to Google Drive and create a new Google Sheet or upload an existing Google Sheet:

Download the Exoplanet Dataset available on data.world if you would like to follow along with the post. Please feel free to use your own dataset if you have one available.

The dataset is a CSV that needs to be turned into a Google Spreadsheet. Ensure that you are using a Google Spreadsheet and not an Excel or CSV document. For the sake of this post, I’ll only be using the first 2 000 records of the dataset. I would strongly suggest using fewer records, with fewer columns, too – using all of the data will be extremely slow when connecting to Power BI.
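If you would rather trim the CSV before turning it into a Google Sheet, a quick pandas sketch (assuming the data.world file has been downloaded locally as exoplanets.csv – adjust the file names to your own data) could look like this:

import pandas as pd

# Load the full CSV downloaded from data.world
df = pd.read_csv("exoplanets.csv")

# Keep only the first 2 000 rows (optionally, also drop columns you don't need)
df_small = df.head(2000)

# Write the trimmed file, ready to be uploaded and converted to a Google Sheet
df_small.to_csv("exoplanets_small.csv", index=False)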

Change the Permissions

In order to access the report from Power BI, the dataset needs to be accessible by anyone with the link, especially when publishing the dataset to the web. Change the permissions by clicking on Share and making it publicly available to anyone with the link:

Publish to the Web

Navigate to File, then select Publish to the web:

Make sure that you link the publish to the correct tab (if you do not want to publish the whole document). Hit ‘Publish’ and ‘OK’ when prompted:

Copy the link:

Connect the Dataset to Power BI

Open Power BI, click on Get Data and select Web as the source:

Paste the copied publish URL (found after publishing) into the URL section, with basic authentication selected:

Select the Applicable Dataset

Once Power BI has connected to the dataset, you’ll be prompted to select the applicable tables (or tabs/datasets) that you can transform and load. Select the applicable tables then select Transform Data:

To get the correct row promoted as headers, you’ll need to remove the first row then use first row as headers. Please note that this may vary across use cases and datasets.

When you’re happy with the structure of your data, hit Close & Apply:

Did this work for you? Feel free to drop a comment below with any issues you may have faced, or even any suggestions you might have.

Featured

[Automation] 10 Tips for Optimising Power Automate Flows

Level of Difficulty: Beginner – Senior.


Power Automate is a powerful tool that allows for integrations and RPA solutions to be developed through the use of Power Automate Desktop and Cloud flows. A disadvantage of using Power Automate is that it can take strain when working with large volumes of data. Lengthy flows take a long time to save, load and execute, and large amounts of data increase the run time of flows exponentially if they aren’t developed optimally. There are a few things that you could try in order to optimise a flow and reduce its runtime while maintaining the functionality of the solution.

Power Automate Desktop Flows

1. Remove Commented (or Disabled) Code

A large part of the RPA development process includes debugging a solution while developing the functionality. Often developers comment out or disable actions instead of removing them. These could act as test harnesses (actions used purely for testing) or safety nets (often used when hoarders of code fear deleting code that previously worked). It becomes quite easy to pollute a solution with disabled actions.

When a PA desktop flow is run, disabled actions are read, even though they aren’t necessarily executed. Removing a few disabled actions might not make a massive dent in process duration but it does make a difference.

2. Build Business Exception Checks

When building UI Automation flows, ‘Send hotkey’, ‘Click’ and ‘Get details’ actions are commonly used with ‘On Error’ actions set up. Send hotkeys are efficient ways of executing actions to work around frequently changing user interfaces, although the ‘Send hotkey’ actions can often get lost between screen changes. This could result in many issues.

A good practice for UI automations is to execute an action and then check that the screen is in the correct state before submitting the next action. In a case where the screen did not change as expected, or the executed action did not render the expected result, these checks could return Business or Application Exceptions and immediately end the sub-flow before proceeding to the environment clean-up phase (closing any applications that may have been opened through the flow). These checks should be built into sub-flows as they can be reused and make debugging easier.

3. Introduce Sub-flows to Group Reusable Logic

It is a good practice to clean up the environment before and after every process run. The clean-up process refers to the termination of any applications that may be open or in use within the process. The aim is to reduce the risk of open applications jeopardising the execution of the process; by cleaning the environment, that risk is mitigated. This is just one example of logic that can be grouped by building a single clean-up sub-flow and invoking or ‘running’ the sub-flow at all points where it is applicable. The deduplication of code declutters a solution and contributes heavily to the advantage of modularisation as a concept in solution development.

4. Ensure That ‘On Error’ Actions are Setup Correctly

As previously mentioned, ‘Clicks’ and ‘Get details’ activities are common with UI automations. These activities come with the ability to configure ‘On Error’ actions. These actions allow for retries, delays, setting variables, executing specific actions and running sub-flows. Ensure that the waiting time between retries is feasible. Decreasing these wait times and rather adding checks could decrease the execution time of a process. By grouping exception handling logic in a sub-flow (see above), the ‘On Error’ option can be configured to execute the exception handling sub-flow instead of waiting for a failure and duplicating code.

5. Reduce the Amount of Waits Being Used

When using a concatenation of hotkeys and clicks, waits (and wait untils) are commonly used to ensure that an application is ready for input. These waits could turn into endless loops and prolong process execution times more than is necessary. By building in checks and limits, these process execution times could be reduced with more indicative measures of where the process encounters errors.

6. Use Clicks Rather Than Send Hotkeys

As far as possible, rather use clicks than send hotkeys when building UI automations. Although send hotkeys execute faster, clicks allow for more stable execution and also allow for the configuration of ‘On Error’ actions.

7. Write Input Data to Textfile

In a case where large amounts of data are being passed through to the desktop flow from the cloud flow, consider writing the input variable contents to a textfile. The aim is to enable unit testing of Power Automate solutions between Desktop and Cloud flows. The data from the cloud flow is written to a textfile before the process is executed; if the input variables are blank, the contents are read from the textfiles so that the process can be executed with a previous run’s data. This might not have a direct impact on execution time, but it does decrease development and testing times when executing the flows as individual units.

It is important to note that cloud flows execute desktop flows through the gateway at a different pace than desktop flows are executed directly on a computer.

Power Automate Cloud Flows

1. Remove Loops That Process Items Individually

Connectors, like the Excel connector, only allow for items to be processed on a ‘line-by-line’ basis. The more items there are to process, the longer the process will take to execute. By bulk processing these items, the risk of exponentially increasing process duration is mitigated to some degree. This could be done by introducing other components, like Azure Functions (which come with their own challenges). A good example of this can be found in this post.

Although there might still be an increase in process duration when dealing with more data, the increase will not be as steep as with individual connector calls. A lot of the connectors available on Power Automate make use of API calls, some of which have ‘x-calls-per-minute’ or ‘x-calls-per-hour’ limits, which could also increase process duration.

2. Rework Loop Levels

Power Automate cloud flows allow for a maximum depth of 9 levels. Scopes are really useful for grouping actions logically, although they are considered to be a ‘level’, just like loops, which does have an effect on execution times. In a case where logical validations go deeper than 6 levels, it would be advised to set variables and process logic based on conditional variables rather than adding loops within loops – especially ‘apply to each’ loops within ‘apply to each’ loops. Big O notation explains why this has such a significant impact on process duration in some cases: an ‘apply to each’ over 100 items nested inside another ‘apply to each’ over 100 items results in 10 000 executions of the inner actions. It is advisable that such complex logic be reworked as far as possible.

3. Introduce Sub-flows to Group Reusable Logic

It is important that flows that are built as automation solutions have some form of exception handling and logging. These components can be built as a separate flow to group the logic of handling and logging exceptions which can be reused each time that an exception or error occurs. By logging exceptions, the ability to report on the successes and failures becomes a lot more feasible. This is an example of code that could be grouped and reused.

Do you have any other tips for optimising Power Automate solutions? Drop a comment below.

Featured

[Automation] A Comparison Between UiPath and Power Automate

Level of Difficulty: Beginner – Senior.

Automation is becoming a hot topic in conversations about the fourth industrial revolution (4IR) and fifth industrial revolution (5IR). Automation refers to the use of technology to replace human labour, or even to work alongside people in some way, leading to some sort of benefit (or saving) from having automation implemented. Automation comes in different forms, shapes and sizes, with a lot of emphasis placed on Business (or Digital) Process Automation (BPA/DPA) and Robotic Process Automation (RPA).

While RPA is focused on mimicking a person’s actions, behaviour and decisions, BPA and DPA are focused on automating business processes with a specific focus on efficiency and optimisation. UiPath was identified by Gartner as a leading RPA technology in 2020 and 2021, with Microsoft also identified as a leader in RPA in 2021. Microsoft is positioning Power Automate and Power Automate Desktop as a technology stack that enables citizen RPA development.


UiPath was founded in 2005 and released its first desktop automation product in 2013, while Microsoft first released Microsoft Flow (now known as Power Automate) late in 2016. Power Automate Desktop went into public preview in December 2020. UiPath definitely has the upper hand in maturity, in terms of the RPA offering, with Microsoft displaying much potential for growth.

In their immediate states, there are a few different comparisons that can be made between the two platforms when developing enterprise-level automation architectures.

Functionality

In terms of functionality, UiPath definitely has the upper hand when comparing the can’s and cannot’s of UiPath and Power Automate (PA), in both cloud and desktop automation capabilities.

Functionality | UiPath | Power Automate | Comment
Create Own Activities | Yes | No | UiPath allows for activities to be developed by the community and published for reuse by other developers. This has helped mature the eco-system significantly in terms of functionality and might slow down the progress of Microsoft getting to a more ‘stable’ place with some of the activities.
HTTP Requests | Yes | No | HTTP requests are useful when using APIs, which are far more reliable than front-end automation. This can be implemented from the cloud flows but it doesn’t allow for optimised development of flows. HTTP requests also require Premium licensing on Power Automate.
Convenient use of Try Catch blocks | Yes | No | Although there is functionality for ‘on error’ blocks, which are the equivalent of ‘try catch’ blocks on PA Desktop, using these is tricky because errors are still thrown.
Global Exception Handler | Yes | No | A global exception handler in UiPath catches any error that is thrown and allows the developer to decide how to deal with these errors on a ‘per solution’ basis. This would come in very handy for any RPA tool, especially where front-end development is taking place and application exceptions can be unpredictable when change is involved.
Renaming of Activities | Yes | No | PA Desktop does not yet allow you to rename activities, which makes identifying exactly which activity failed really tedious when looking at the errors returned by the Desktop flow on the cloud flow.
Pin ‘on error’ next action to subflow or set variable | No | Yes | PA Desktop allows a number of different options when configuring the ‘on error’ portion of a get element details or click activity. This is something that would need to be handled in a try catch statement on UiPath for most activities.
Scoped variables | Yes | No | All variables in PA Desktop are ‘global’ and can be accessed from anywhere in the flow; the disadvantage is more memory usage, especially when storing elements with a lot of JSON data (for example). On the other hand, this supports motions presented in DoDAF architectures.
Set input and output variable type to anything other than text | Yes | No | This makes the use of booleans quite difficult.
Custom creation and control over variables (including type) | Yes | No | In PA Desktop, variable creation is done as a result of an object or using set variable; the type is therefore dependent on the result type.
Use Excel documents without Excel being installed | Yes | No | UiPath allows for a workbook to be referenced and used without the need for Excel to be installed. PA Desktop doesn’t allow for this yet – hopefully it will soon, to avoid additional licensing.
Built-in asset and credential management | Yes | Not really | UiPath allows for the storage of credentials and assets on the Orchestrator. PA works best when storing credentials in Azure Key Vault, which requires Premium licensing.
Built-in queueing mechanism | Yes | No | The UiPath Orchestrator allows for queues to be created, with different triggers, which allows for items to be queued for processing based on a multitude of factors, like priority, deadlines, etc. The Power Platform does not have this built in natively yet and requires Azure resources to assist with this (resulting in Premium licensing).
Bulk processing items in Excel | Yes | No | UiPath allows for the manipulation of Excel data in ranges, meaning that more than one row can be inserted, deleted or updated at the same time. The Excel connector on PA Cloud only allows CRUD on a ‘per row’ basis, which is not at all optimal when processing larger files. For time optimisation, different solutions will need to be investigated, like using an Azure Function or some other storage mechanism – possibly even the extended use of formulas.
Trigger a process when receiving an email | No | Yes | UiPath processes would need to poll a mailbox for changes, whereas cloud flows are triggered when a new email is received. This is true for most connectors on the PA Cloud stack.
Edit selectors or elements of UI Automation | Yes | No | UiPath allows you to turn selectors into dynamic selectors, making them reusable and less specific. PA Desktop only allows you to add elements as they are; they cannot be edited. Getting these elements set up can be more tedious when using PA Desktop than UiPath, as ERPs like SAP (for example) require more configuration and access for PA Desktop, which needs transactions to be executed.
Level of restriction | No | Yes | Power Automate does not allow for a depth of more than 9 levels of scopes, loops and conditions, which makes sense in terms of performance, but without the use of sub-flows (like invoking workflows in UiPath) this becomes difficult to manage on more complex PA Cloud flows.
Access log runs | Yes | Kinda | To access PA logs automagically, you need to use some PowerShell. It’s not as easy as using an API or database (which is possible if the UiPath Orchestrator is on-prem).
Detailed time execution of each step | No | Yes | PA allows you to see how long each action took in the cloud flow’s run history. To do this with UiPath, you’d need to write some custom logging and calculations.
Decision to upgrade packages and rollback to previous versions | Yes | No | It would be useful to be able to roll back to previous versions of connectors when bugs occur that break something that was previously working.

Ease-of-use

In terms of the ease-of-use and implementation of different automation categories and capabilities, Power Automate maintains the upper hand for integrations with the Microsoft stack and ecosystem while UiPath maintains the upper hand for categories that make use of more ‘custom’ types of automation.

ActivityUiPathPower Automate
SAP AutomationX
UI AutomationX
Web AutomationX
Mobile AutomationX
Testing AutomationX
Database ManipulationX
Word ManipulationX
Excel ManipulationX
SharePoint ManipulationX
OneDrive ManipulationX
SalesForce IntegrationX
Microsoft Teams IntegrationX
Microsoft Outlook IntegrationX
Microsoft Graph IntegrationX
Azure IntegrationX
Artificial IntelligenceX

Development Time

Power Automate is really powerful when you’re building an integration flow that contains a few actions which use a small number of connectors (between 1 and 5) that encapsulate quick and easy-to-use API calls. As soon as the flow is expected to deal with a few additional business rules or process steps, the platform loses efficiency. The more actions are used, the slower the flow is to load (especially when trying to debug historical runs). UiPath maintains the upper hand when developing more complex solutions: it is much easier to build modular components and bring them all together using UiPath, and developing a solution in modules can reduce development time. Waiting for a platform to respond can have a drastic effect on development time – it is easy to get distracted, as a developer, waiting for a flow to load or to run a test.

Feature | UiPath | Power Automate Cloud | Power Automate Desktop | Comment
Unit Testing | X | | X | UiPath allows you to create sequences and workflows in units which can be invoked within the main workflow. It is difficult to effectively invoke subflows in PAC, especially taking licensing implications into account.
Debugging | X | | X | UiPath gives developers the option to debug or run a solution, which allows the developer to either fast-track the testing process by running the process while watching the logs, or to step through the solution in debug mode. UiPath allows the developer to debug activities, workflows and sequences in units, which speeds up the testing and development process. Although PA Desktop allows for the ‘Run from action’ functionality, it is not available on PAC.
Realtime Logging | X | | | UiPath allows for information to be logged at different levels (like trace, verbose, information, warning and error) while PAD and PAC only display warnings (more common on PAC) and errors. By allowing developers to monitor logs as the flow runs, it is easier to determine which steps have been passed successfully and which steps the process might be stuck on.
Developer UI Experience | X | | X | It is tedious to work on PAC in a browser window because the platform responds differently between browsers. Sometimes browsers crash and all work is lost. One cannot save the flow if there are errors, and commenting out (or disabling) code is not yet allowed on PAC (although there is a workaround available), which prevents developers from saving their work more frequently. The copy and paste functionality on PAC is also buggy and doesn’t always work. These points are not applicable in UiPath.

Power Automate adds the most value when utilising the licensing models that are already in use within an organisation to build automations that do not require additional (or premium) licensing. There are multiple ‘basic’ templates that allow for quick and easy automations and integrations, and Power Automate Premium licensing allows solutions to be built with more advanced capabilities. Although Power Automate solutions are scalable, scaling them can be quite tedious. The Power Platform does allow for multiple licensing options.

UiPath adds the most value in an environment where an automation eco-system is being formed to cater for enterprise-level solutions that should be easily scalable. The UiPath offering includes a wide range of products that supports their matured automation and hyperautomation lifecycle. There are multiple licensing models available for the offerings provided by UiPath.

There is a lot of value that can be gained by implementing a hybrid approach to automation where UiPath and Power Automate are used, in conjunction, to provide powerful integrations and automations within an organisation’s Automation Center of Excellence (CoE). Both technologies have a lot of potential, with exponential growth predicted within the next few years. Each have their own benefits and shortfalls which make them competitive in their own ways.

Apart from the licensing aspects, the functionality and ease-of-use and implementation observations are also applicable to Azure Logic Apps.

Please note that all of these were findings captured at a specific point in time and may have been improved since the post was published.

Featured

[Azure] In-Stream Excel Manipulation in .NET Core, using Azure Blob Storage and Azure Functions

Level of Difficulty: Junior – Senior.

So you have an Excel template stored in Azure Blob Storage which needs to be read and manipulated by inserting data into it, returning the manipulated file in-stream. This can happen either on a schedule (Timer trigger) or on a per-request (HTTP trigger) basis. The decision is to build the logic out using a C# Azure Function. The data to be inserted into the Excel template will be sent to the function as the request body.

The Azure Function will form part of a larger solution which will send the data into the Azure Function. The file content of the Excel template will be read from Azure Blob Storage, the data received in the request body will be inserted, and the new file content will be sent back in the response body.

A possible use case of this could be bulk processing a file that is being used in a Power Automate cloud flow. The existing Excel connector only allows for single entries into an Excel document once the data has been manipulated, which can become quite time consuming. To optimise such a solution, the Azure Function could handle the data validation, manipulation and bulk entry in a much more timely manner.

Depending on the consumption requirements, an Azure Function could be a rather cost-effective solution. Azure Functions can be licensed using an App Service plan, a Consumption plan or a Premium plan. The Consumption plan comes with a free grant of 1 million executions per month, per subscription. When deploying an Azure Function, a Blob Storage resource is required; in a case where a Blob Storage resource does not exist, one will be created when deploying a new Azure Function from Visual Studio. Application Insights is not required when deploying an Azure Function, however, it is an extremely useful resource to use when debugging the Function.

What are the steps?

The steps that we will be following are:

  1. Create Azure Function Project (Using .NET Core 3.1)
  2. Create Excel Document
  3. Upload Document to Blob Storage
  4. Add Functionality to the Azure Function
    1. Install Packages
    2. Create Models
    3. Connect Blob Storage
    4. Add Manipulation Logic
  5. Test
  6. Publish

Let’s dive deeper.

Create an Azure Function Project

For the creation and deployment of the Azure Function, we will be using Visual Studio. Before you open Visual Studio, open the Visual Studio Installer and ensure that the Azure development workload is installed, as well as ASP.NET and web development:

Once those have been installed and VS has been launched, create a new project and select C# Azure Function:

Configure your project appropriately before hitting Create. Note that the name of your project does not necessarily dictate the name of your function – that can still be changed.

Ensure that you have .NET Core 3 installed, as Azure Functions currently support 3.1 and not 2.1. You can install the .NET Core 3 SDK here. Select Http trigger, set up the blob storage account (or use the storage emulator if you don’t have a blob storage account), then set up the authorization level before hitting Create.

Let’s rename the Function1.cs file to ProcessInformation.cs. Ensure that you change the class and function name too, before saving the changes (Ctrl+S):

Next, let’s create the Excel Document.

Create an Excel Document

For the purpose of this use case, we will be creating an Excel document, named Information, with one sheet, named Details, with a table named PersonalDetails. The PersonalDetails table will contain three columns, namely: Name, Surname and Date of Birth.

To insert a table on a worksheet, navigate to the Insert tab, select Table then select the range and check the ‘My table has headers’ checkbox, before hitting OK:

Once the table has been created, change the column names. To change the table name, focus on the table by clicking on one of the cells in the table, the Table Design tab will pop up. Navigate to the Table Design tab and change the Table Name:

Save the document and prepare for upload to Blob Storage.

Upload Document to Blob Storage

For the sake of this post, I am going to assume you already have a Blob Storage resource setup. If you don’t, check out this post on how to create blob storage and upload files to a container.

Navigate to your blob storage account and relevant container before selecting Upload. Navigate to the file that you would like to upload and select Upload.

Let’s get back to the Azure Function and start adding functionality.

Add Functionality to the Azure Function

Before we can add the necessary functionality to the Azure Function, we first need to install the relevant packages, using NuGet Package Manager.

Install Azure.Storage.Blobs

Azure.Storage.Blobs is used to read files from and write files to the relevant Azure Blob Storage Account, based on the connection string provided.

Navigate to the NuGet Package Manager (Tools > NuGet Package Manager > Manage NuGet Packages for Solution…), then search for Azure.Storage.Blobs and install the latest stable version of the package.

Install ClosedXML

ClosedXML is used to manipulate the Excel document.

Navigate to the NuGet Package Manager (Tools > NuGet Package Manager > Manage NuGet Packages for Solution…), then search for ClosedXML and install the latest stable version of the package.
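If you prefer the command line, the same two packages can also be added with the dotnet CLI, run from the function project folder:

dotnet add package Azure.Storage.Blobs
dotnet add package ClosedXML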

Now that the libraries are installed, let’s create the models that we need to define for the input data that will be received from the request body as well as the data structure for the Excel table.

Create PersonalDetails Model

The PersonalDetailsModel will be used to represent the structure of the data to be written to the Excel table named PersonalDetails.

The code for the model can be added as a partial class beneath the ProcessInformation class, in the same file. It should look something like this:

partial class PersonalDetails
{
    [JsonProperty("Name")]
    public string Name { get; set; }
    [JsonProperty("Surname")]
    public string Surname { get; set; }
    [JsonProperty("DateOfBirth")]
    public string DateOfBirth { get; set; }
}

Create InputData Model

The InputDataModel will be used to represent the structure of the data to be received as the request body. The information passed through as the request body needs to be written to the Excel table named PersonalDetails.

The code for the model can be added as a partial class beneath the ProcessInformation class, in the same file. It should look something like this:

partial class InputData
{
    [JsonProperty("PersonalDetails")]
    public List<PersonalDetails> PersonalDetails { get; set; }
}

In order to use the List capability, ensure that you’ve added System.Collections.Generic to your using statements.

Read File From Blob

Ensure that you add Azure.Storage.Blobs to the using statements as a reference before creating the method that will read the file from the storage account.
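Assuming the default HTTP trigger template, the full set of using statements for this solution (including System.Collections.Generic from earlier and ClosedXML.Excel, which is used further down) should look something like this:

using System;
using System.IO;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using Azure.Storage.Blobs;
using ClosedXML.Excel;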

You can create a class named BlobStorage beneath the ProcessInformation class which will contain the method, named ReadTemplateFromBlob, which reads the file from your storage account and returns the content as a MemoryStream. The class, with the method will look something like this:

class BlobStorage
{
    public MemoryStream ReadTemplateFromBlob()
    {
        // TO DO: Add Connection String and move into secure storage
        string connectionString = "<connectionstring>";
        string container = "excel-templates";

        // Get blob
        BlobContainerClient blobContainerClient = new BlobContainerClient(connectionString, container);
        var blob = blobContainerClient.GetBlobClient("Information.xlsx");

        MemoryStream memoryStream = new MemoryStream();

        // Download blob
        blob.DownloadTo(memoryStream);

        // Reset the stream position so that the template can be read from the start
        memoryStream.Position = 0;

        return memoryStream;
    }
}

The reason we create a class is to allow for the expansion of the code to encapsulate any other functionality that is specific to BlobStorage.

Add Excel Processing Logic

Now, let’s create some functionality that will allow us to write data into the Excel document. Don’t forget to add ClosedXML.Excel to the using statements.

Create a class named ExcelProcessing beneath the ProcessInformation class which will contain the method, named PopulateExcelTable, which will take in the data to be written to Excel along with the name of the sheet and table that the data should be written into. The method will return the content as a MemoryStream. The class, with the method will look something like this:

class ExcelProcessing
{
    public MemoryStream PopulateExcelTable (List<PersonalDetails> personalDetailsList, MemoryStream stream, string sheetName, string tableName)
    {
        // Link to existing workbook
        using var wbook = new XLWorkbook(stream);

        // Link to sheet
        var ws = wbook.Worksheet(sheetName);

        // Replace data in existing Excel Table
        var table = ws.Table(tableName);
        table.ReplaceData(personalDetailsList, propagateExtraColumns: true);

        // Save the updated workbook back into the stream
        wbook.SaveAs(stream);

        // Return stream
        return stream;
    }
}

The reason we create a class is to allow for the expansion of the code to encapsulate any other functionality that is specific to ExcelProcessing, like reading from Excel, for example.

There is some brilliant functionality available for further manipulating an Excel document, including the use of formulas and formatting.
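As a small, hypothetical illustration (the cell references below are made up), a formula and some formatting could be applied to the ws worksheet from the method above before saving:

// Write a formula into a cell and make it bold
ws.Cell("E2").FormulaA1 = "=SUM(B2:D2)";
ws.Cell("E2").Style.Font.Bold = true;

// Apply a date format to an entire column
ws.Column("C").Style.DateFormat.Format = "yyyy/MM/dd";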

Now let’s bring this together in the main function.

Add Function Logic

The Azure Function is going to be used to receive the request body, deserialise it and pass it through to the ExcelProcessing class to populate the Excel table. Once the table has been populated, the content of the Excel document will be returned as a MemoryStream. The code for the static function class should look a little something like this:

public static class ProcessInformation
    {
        [FunctionName("ProcessInformation")]
        public static async Task Run(
            [HttpTrigger(AuthorizationLevel.Function, "get", "post", Route = null)] HttpRequest req,
            ILogger log)
        {
            log.LogInformation("C# HTTP trigger function processed a request to process information.");

            try
            {
                // Create blank request body
                string requestBody = String.Empty;

                // Populate request body
                using (StreamReader streamReader = new StreamReader(req.Body))
                {
                    requestBody = await streamReader.ReadToEndAsync();
                }

                log.LogInformation("Request body received.");

                // Convert JSON string to object
                var data = JsonConvert.DeserializeObject<InputData>(requestBody.ToString());

                log.LogInformation("JSON deserialised and parsed through model.");
                
                // Create new list of PersonalData and populate with InputData
                List<PersonalDetails> personalDetailsList = data.PersonalDetails;

                log.LogInformation("New list of personal details created.");

                // Save Excel Template file from Blob Storage
                BlobStorage blobStorage = new BlobStorage();
                var stream = blobStorage.ReadTemplateFromBlob();

                log.LogInformation("Template retrieved from blob.");

                // Populate Excel Spreadsheet
                ExcelProcessing excelProcessing = new ExcelProcessing();

                var excelStream = excelProcessing.PopulateExcelTable(personalDetailsList, stream, "Details", "PersonalDetails");

                log.LogInformation("Excel populated");

                // Create response
                var response = req.HttpContext.Response;
                response.ContentType = "application/json";

                // Create stream writer and memory stream
                using StreamWriter streamWriter = new StreamWriter(response.Body);

                // Add the memory stream to the stream writer/request.Body
                await streamWriter.WriteLineAsync("[\"" + Convert.ToBase64String(excelStream.ToArray()) + "\"]");
                await streamWriter.FlushAsync();
            }
            catch (Exception e)
            {
                log.LogInformation($"Error processing information: {e.Message}");
            }
        }
    }

You can test the solution locally, using Postman.

Test the Azure Function

Ensure that Postman is installed then run the Azure Function. Grab the URL that pops up in the command line and paste that into Postman:

Hit the body tab, click on the raw radio button, change the type to JSON and paste the following into the body:

{
    "PersonalDetails": [
        {
            "Name": "Jacqui",
            "Surname": "Muller",
            "DateOfBirth": "1996/07/27"
        },
        {
            "Name": "John",
            "Surname": "Doe",
            "DateOfBirth": "2000/01/01"
        }
    ]
}

When you hit Send, you should see a response similar to this:

This response can be converted to binary to save the file which can be incorporated into the Power Automate flow.
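The response body is a JSON array containing a single Base64 string, so any consumer only needs to decode that string back into bytes. As a hypothetical illustration (the file name is made up), a small C# helper could save the returned file like this:

// Hypothetical helper: decode the function's response and save it as an Excel file
// (requires System, System.IO, System.Collections.Generic and Newtonsoft.Json)
static void SaveExcelResponse(string responseBody)
{
    // The response looks like ["UEsDBBQABgAIA..."] – a JSON array with one Base64 string
    var base64 = JsonConvert.DeserializeObject<List<string>>(responseBody)[0];

    // Convert the Base64 string back to bytes and write them out as an .xlsx file
    File.WriteAllBytes("Information_Populated.xlsx", Convert.FromBase64String(base64));
}

In a Power Automate cloud flow, the base64ToBinary() expression performs the equivalent conversion.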

Lastly, we need to publish this solution to Azure from Visual Studio.

Publish

Navigate to the solution explorer and double click on Connected Services. If you don’t see connected services, right click on the function and select Publish:

Set up your publish profile by selecting Azure as the target of the deployment, then hit Next:

Next, select Azure Function App (Windows), before selecting Next:

If you don’t already have a resource, add a new one:

Fill in and validate all required details, before hitting Create:

Once the resource has been created, you can hit Publish:

Once your Azure Function has been successfully published, you may test the function using Postman and enhance as you see fit.

This solution is available on Github.

Did this help you? Did you get stuck? Feel free to reach out by leaving a comment below.

Featured

[Automation] Using Google Drive, Google Sheets and Python in Google Colaboratory to Automate Rubric Creation and Synchronisation

Level of Difficulty: Intermediate – Senior.

Have you ever run into a situation where you have over 200 students whose work needs to be marked and the admin around it is enough to make anyone go crazy? Well here’s a step towards getting around the labour-intensive admin that goes hand-in-hand with marking student projects, using Google Drive, Google Sheets and a Google Colaboratory Python script.

What are the steps?

The steps that we will be following are:

  1. Setup the project in the Google Developer Console
    1. Create a service account
    2. Download the service account key file
  2. Create the Google Drive folder and content
    1. Create a new Google sheet for the rubric template
    2. Create a new Google sheet for the class list
    3. Create a new Google sheet for the class results
  3. Create the Python script
    1. Create ‘Replicate Rubric for Students’ script
    2. Create ‘Update Class Results’ script

Deep Dive

Let’s dive deeper into the steps listed above.

Setup Project in Google Developer Console

Create Service Account

Create a New Project in the Google Developer Console:

Make sure your newly created project has been selected before you start enabling APIs and creating service accounts:

Under APIs & Services, navigate to Credentials:

Add a new Service account:

Give your service account a name:

Retrieve the Service Account Key

Edit the service account you’ve just created in order to create a key:

Select Keys, then select Create New Key from the Add Key dropdown:

Select JSON then click Create and Save the Key.json file to your computer:

Enable APIs

Navigate to and enable the Google Drive API:

Navigate to and enable the Google Sheets API:

Create Google Drive Folder and Content

Please note that all the Google Drive Content is available in the GitHub Repo if you’d like to upload the files I’ve used for this post.

Navigate to Google Drive and Create a root folder that you will use to store all of your collateral:

Share the folder you’ve just created with the service account that you’ve also just created:

Create a new Google Sheet to store the template of your rubric to be replicated for each student:

The template rubric may look something like this:

In this case, the Indicator field will contain either a 1 (for yes, criteria is present in student submission) or a 0 (for no, criteria is not present in student submission), resulting in the mark either equating to the weight of the criteria, or 0. The formula of the mark field may need to be adapted to cater for your needs if you are using a sliding scale form of evaluation.

Next, create a new Google Sheet to store the list of student information to be used:

Create a new Google Sheet named Results to store the class results:

Create Python Script

Replicate Rubric for Every Student

You’ll first need to run the following commands to ensure that the following libraries are installed:

pip install oauth2client
pip install PyOpenSSL
pip install gspread
pip install pydrive

Then you’ll want to import the following libraries:

import pandas as pd
import gspread
import io

from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from oauth2client.service_account import ServiceAccountCredentials
from google.colab import files #comment out if you aren't using Google Colab

Next, you’ll want to initialise your authorisation with Google, replacing ‘<YOUR KEY FILE PATH>‘ with the path to the service account key downloaded from the Google Developer Console:

# Init Google (with Auth)

file_path = r"<YOUR KEY FILE PATH>.json"

scope = ['https://spreadsheets.google.com/feeds',
         'https://www.googleapis.com/auth/drive']

credentials = ServiceAccountCredentials.from_json_keyfile_name(file_path, scope)

# Used to auth Google Sheets
gc = gspread.authorize(credentials)

# Used to get all files in Google Drive folder
gauth = GoogleAuth()
gauth.credentials = credentials
drive = GoogleDrive(gauth)

In order to create a rubric for each student, you’ll need to access the populated class list created in the Google Drive Folder:

# Open Spreadsheet
spreadsheet = gc.open('Class List')

# Get Student Class List
student_list_sheet = spreadsheet.get_worksheet(0)
student_list = student_list_sheet.get_all_values()
student_headers = student_list.pop(0)

# Read Student Class List into DataFrame
df_students = pd.DataFrame(student_list, columns=student_headers)

df_students

For every student in the class list (df_students), create the rubric. Replace ‘<YOUR FOLDER ID>‘ with the folder ID in which you’d like to store all of the student rubrics. It would be advisable to create a subfolder, named Student Rubrics in the Google Drive root folder:

template_spreadsheet = gc.open('Template_Rubric')

detailed_rubric = template_spreadsheet.get_worksheet(0)
rubric_summary = template_spreadsheet.get_worksheet(1)

for index, element in df_students.iterrows():
  
    print(element['Student Number'])

    # Create Workbook
    workbook = gc.create('{}_Rubric'.format(element['Student Number']), folder_id='<YOUR FOLDER ID>')

    # Update Rubric Path (in Student Class List)
    sheet_id = workbook.id
    sheet_url = 'https://docs.google.com/spreadsheets/d/{}/'.format(sheet_id)

    # Update Student Rubric Path
    student_list_sheet.update('D{}'.format(index + 2), sheet_url)

    # Duplicate Spreadsheets
    detailed_rubric_worksheet = detailed_rubric.copy_to(spreadsheet_id=workbook.id)
    rubric_summary_worksheet = rubric_summary.copy_to(spreadsheet_id=workbook.id)

    # Delete Sheet1
    worksheet = workbook.sheet1
    workbook.del_worksheet(worksheet)

    # Get Duplicated Spreadsheets
    student_spreadsheet = gc.open('{}_Rubric'.format(element['Student Number']))
    detailed_rubric_worksheet = student_spreadsheet.get_worksheet(0)
    rubric_summary_worksheet = student_spreadsheet.get_worksheet(1)

    # Update Sheetnames
    detailed_rubric_worksheet.update_title('Detailed Rubric')
    rubric_summary_worksheet.update_title('Rubric Summary')

    # Update Student Information
    detailed_rubric_worksheet.update('B1', str(element['Initials']))
    detailed_rubric_worksheet.update('B2', element['Surname'])
    detailed_rubric_worksheet.update('B3', element['Student Number'])

    # Update #REFs in Student Rubric - Student Information
    for row in range(1, 5, 1):

        # Get Formula  
        cell = rubric_summary_worksheet.acell(f'B{row}', value_render_option='FORMULA').value

        # Set Formula
        rubric_summary_worksheet.update(f"B{row}", cell, raw=False)

    # Update #REFs in Student Rubric - Student Summary Marks
    for row in range(7, 19, 1):

        # Get Formula    
        cell = rubric_summary_worksheet.acell(f'D{row}', value_render_option='FORMULA').value

        # Set Formula
        rubric_summary_worksheet.update(f"D{row}", cell, raw=False)

The above code block will not only create a new rubric, but also fill in the information available in the class list and correct any corrupt formulas. If you change the contents of your rubric, please pay careful attention to the indexes used above to update formulas. If you are not using formulas extensively, feel free to remove the formula manipulation code.

Update Class Results

Create a method, named GetData, which will pull all of the rubric data from the student rubrics. This method will be reused to process the data from every student’s rubric file stored on the drive:

def GetData(workbook):
    
    # Open Spreadsheet
    spreadsheet = gc.open(workbook)

    # Retrieve Student Number From Spreadsheet
    student_number = spreadsheet.get_worksheet(1).get('B3') # saves as list of list
    student_number = student_number[0][0]

    # Get Detailed Rubric
    detailed_rubric = spreadsheet.get_worksheet(0).get('A6:H42')
    rubric_headers = detailed_rubric.pop(0)

    # Get Category Rubric
    category_rubric = spreadsheet.get_worksheet(1).get('G6:J9')
    category_rubric_headers = category_rubric.pop(0)

    # Get Sub-Category Rubric
    sub_category_rubric = spreadsheet.get_worksheet(1).get('A6:E17')
    sub_category_rubric_headers = sub_category_rubric.pop(0)

    # Get Total
    total = spreadsheet.get_worksheet(1).get('I10') # saves as list of list
    total = total[0][0]

    # Read Rubrics into DataFrame
    df_category_rubric = pd.DataFrame(category_rubric, columns=category_rubric_headers)
    df_sub_category_rubric = pd.DataFrame(sub_category_rubric, columns=sub_category_rubric_headers)
    df_detailed_rubric = pd.DataFrame(detailed_rubric, columns=rubric_headers)
    
    # Return all of the dataframes, the student_number and the total
    return df_category_rubric, df_sub_category_rubric, df_detailed_rubric, student_number, total

Create a method, named ProcessResults, which will process and write the results from the student rubric file into the main class results file. This method will be reused by the method created below to store the results at different levels of granularity (Category, Sub-Category, Criteria and Results) in different sheets:

def ProcessResults(df_entry_raw, sheet_index, student_number, transpose=True, has_student_number=False):
    
    try:
    
        if transpose:
            
            # Transpose dataframe
            df_entry_transpose = df_entry_raw.transpose()

            # Make the Result Row the Headers
            df_entry = df_entry_transpose.rename(columns=df_entry_transpose.iloc[0])
            
            # Remove the Result Row
            df_entry = df_entry.iloc[1: , :]

            # Remove 'Mark' as the Index
            df_entry = df_entry.reset_index(drop=True)


        else:
            
            df_entry = df_entry_raw
            
        # If the student number is not provided in the dataset, assign it
        if has_student_number == False:
        
            # Add Student Number Column
            df_entry['Student Number'] = student_number

            # Move Student Number Column to the Front of the DataFrame
            cols = list(df_entry.columns)
            cols = [cols[-1]] + cols[:-1]
            df_entry = df_entry[cols]
        
        # Write to Excel if something changes, get the new columns and add it to the Results sheet
        #df_entry.to_excel('{} Results.xlsx'.format(sheet_index), index=False)
        
        # Open Results Spreadsheet
        spreadsheet = gc.open("Results")

        # Records
        results_worksheet = spreadsheet.get_worksheet(sheet_index)
        results_records = results_worksheet.get_all_values()

        results_headers = results_records.pop(0)

        # Read Results Records into DataFrame
        df_results = pd.DataFrame(results_records, columns=results_headers)

        # Check if Student Results Already Recorded
        if (df_results['Student Number'] == student_number).any():

            # Get Index of Current Student Number
            indexes = df_results.index[df_results['Student Number'] == student_number].tolist()

            for index in indexes:

              # Drop old Record
              df_results = df_results.drop(index)

        # Add Student Record
        df_results = df_results.append(df_entry)

        results_worksheet.update(
            [df_results.columns.values.tolist()] + [[vv if pd.notnull(vv) else '' for vv in ll] for ll in df_results.values.tolist()]
        )
        
        return True
        
    except Exception as e:
        
        print(e)
        
        return False

Create a method, named ProcessStudentResults, which will break up the results into different levels of granularity and pass them through ProcessResults to be written into the class results workbook:

def ProcessStudentResults(workbook):

    status = False

    results = GetData(workbook)

    df_category_rubric = results[0]
    df_sub_category_rubric = results[1]
    df_detailed_rubric = results[2]
    student_number = results[3]
    total = results[4]

    # Filter columns: Detailed Results 
    df_entry_raw = df_detailed_rubric[['Criteria','Mark']]
    # criteria table: first tab (0) in workbook
    status = ProcessResults(df_entry_raw, 0, student_number=student_number)

    # Detailed Results Processed Successfully -> Filter columns: Sub-Category Results
    if status:

        print('Detailed Results Updated')

        df_entry_raw = df_sub_category_rubric[['Sub-Category','Mark']]
        # second tab (1) in workbook
        status = ProcessResults(df_entry_raw, 1, student_number=student_number)

    else:

        print('Detailed Results NOT Updated')
      
    # Sub-Category Results Processed Successfully -> Filter columns: Category Results 
    if status:

        print('Sub-Category Results Updated')

        df_entry_raw = df_category_rubric[['Category','Mark']]
        # third tab (2) in workbook
        status = ProcessResults(df_entry_raw, 2, student_number=student_number)

    else:

        print('Sub-Category Results NOT Updated')

    # Category Results Processed Successfully -> Add Total and Display
    if status:
      
        print('Category Results Updated')

        df_entry_raw = pd.DataFrame(data=
                                  {
                                      'Student Number': [student_number], 
                                      'Total': [total]
                                  }, 
                                  columns=['Student Number','Total'])

        # Results tab (tab 3) in the file
        status = ProcessResults(df_entry_raw, 3, transpose=False, has_student_number=True, student_number=student_number)

        if status:

            print('Total Updated')

        else:

            print('Total NOT Updated')

    else:

        print('Category Results NOT Updated')

For each student file, you’re going to want to execute ProcessStudentResults. Replace ‘<YOUR FOLDER ID>’ with the ID of the folder that contains all the student rubrics:

# View all folders and file in your Google Drive
file_list = drive.ListFile({'q': "'<YOUR FOLDER ID>' in parents and trashed=false"}).GetList()

for file in file_list:
    
    if 'rubric' in file['title'].lower() and 'template' not in file['title'].lower(): 

        print(file['title'])

        ProcessStudentResults(file['title'])

        print('\n')

Did this work for you? Did you find any enhancements to the code that could be implemented? Feel free to fork the code on GitHub and make pull requests with any enhancements.

Want to visualise the data gathered from the rubrics? Here’s a blog post on how you can connect a Google Sheet to Power BI.

If you have any feedback, or if you got stuck somewhere, feel free to reach out via email (jacqui.jm77@gmail.com) or drop a comment below.

Featured

[Automation] Retrieve Power Automate Flow Run Logs and Visualise in Power BI

Level of Difficulty: Intermediate – Senior.

This post will focus on pulling the Power Automate flow run logs using PowerShell, exporting them to a CSV file and visualising them in Power BI.

What are the steps?

The steps that we will be following are:

  1. Setup PowerShell
  2. Get the data
  3. Visualise in Power BI

Setup PowerShell

You’ll need to install the PowerApps PowerShell modules which you could do by running the following commands as Administrator in PowerShell:

Install-Module -Name Microsoft.PowerApps.Administration.PowerShell
Install-Module -Name Microsoft.PowerApps.PowerShell -AllowClobber

Next, you’ll need to authenticate by running the Add-PowerAppsAccount command in PowerShell:

Add-PowerAppsAccount

If you’d like to execute the command without the prompt, you can use the following instead:

$pass = ConvertTo-SecureString "password" -AsPlainText -Force
Add-PowerAppsAccount -Username foo@bar.com -Password $pass

Get the Data

In order to find a flow ID (which you’ll need to export the logs using the next command), you’ll need to dissect the URL. Navigate to the desired flow on the web version of Power Automate and copy the URL, which breaks down as follows:

emea.flow.microsoft.com/manage/environments/Default-<EnvironmentID>/flows/<FlowID>/details?backUrl=%2Fflows%2F<FlowID>%2Fruns

From the above, you would like to copy the portion that would depict the Flow ID of the flow that you are looking at. Then, execute the following command in PowerShell, replacing <FlowID> with the ID you’ve retrieved:

Get-FlowRun <FlowID> | Export-Csv -Path '.\FlowExport.csv'
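Before jumping into Power BI, you can optionally sanity-check the export in Python. Here’s a small pandas sketch, assuming the CSV includes the StartTime and Status columns:

import pandas as pd

# The first line of the CSV is PowerShell's type header
# (#TYPE System.Management.Automation.PSCustomObject), so skip it
df = pd.read_csv('FlowExport.csv', skiprows=1)

# Parse the start time and peek at the run status breakdown
df['StartTime'] = pd.to_datetime(df['StartTime'])
print(df['Status'].value_counts())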

Visualise Data in Power BI

Open Power BI Desktop > Get Data > txt or csv. Then select the FlowExport.csv that was just generated from PowerShell. Then hit Transform Data and apply the following steps:

  1. Remove the top row (which contains TYPE System.Management.Automation.PSCustomObject)
  2. Use First Row as Headers
  3. Change StartTime column to Date/Time
  4. Split Internal by semi-colon, for each occurrence of the delimiter
  5. Remove all Internal columns EXCEPT Internal.2 (which should start with ‘id=/’)
  6. Rename Internal.2 to Internal
  7. Replace ‘id=/providers/Microsoft.ProcessSimple/‘ with ‘https://emea.flow.microsoft.com/manage/’ in Internal
    • This is so that more detail can be obtained on failed runs by navigating to the URL from the report

The dataset should look similar to this before hitting Close & Apply:

Now you can visualise your logs on Power BI, like by adding a Clustered Column Chart visualisation to the page:

Need help? Feel free to drop a comment below or reach out to me via email, jacqui.jm77@gmail.com

Featured

[RPA] Automating Azure DevOps Project, Repository and Work Items Creation through UiPath

Level of Difficulty: Beginner – Intermediate.

This post will assist in automating the creation of projects, repositories and work items in Azure DevOps through UiPath by using the Azure DevOps API.

There are multiple Azure DevOps libraries available on the UiPath Marketplace that make the API calls for you but the aim of this post is to enable developers through understanding how the Azure DevOps API works so that they can make calls that other existing libraries might not cater for.

The Azure DevOps API is documented here.

What are the steps?

The steps that we will be following are:

  1. Create an Azure DevOps Personal Access Token
  2. Create API Body Templates
  3. Create a ‘Create Project’ Workflow
    1. Prepare Template
    2. Make API Call
    3. Deserialise response
  4. Create a ‘Create Repository’ Workflow
    1. Prepare Template
    2. Make API Call
    3. Deserialise response
  5. Create a ‘Create Work Item’ Workflow
    1. Prepare Template
    2. Make API Call
    3. Deserialise response
  6. Create a ‘Link Work Item’ Workflow
    1. Prepare Template
    2. Make API Call
    3. Deserialise response

Deep Dive

Let’s dive deeper into the steps listed above.

Create an Azure DevOps Personal Access Token

Login to Azure DevOps, which you can navigate to from here.

In the top right hand corner of the screen, under User Settings you can access the Personal access tokens option.

Azure DevOps Personal Access Token Generation

Select New Token:

New Personal Access Token

Give your access token a name, select an organisation if you’d like to only grant permission to a specific organisation and set the expiration date of the token. Ensure that you update the personal access token once it has expired.

New Azure DevOps Personal Access Token

Once you’ve clicked Create, copy the personal access token somewhere safe as we will need it when making the API call.

Create API Body Templates

The templates may be stored in a ‘Templates‘ folder within the UiPath solution. We will need two templates, one for projects and one for repositories. These templates hold the JSON template that is needed for the API request.

The ‘new_project_template.json‘ file should contain the following:

{
	"name": "<project_name>",
	"description": "<project_description>",
	"capabilities": {
		"versioncontrol": {
		  "sourceControlType": "Git"
		},
		"processTemplate": {
		  "templateTypeId": "adcc42ab-9882-485e-a3ed-7678f01f66bc"
		}
	}
}

In UiPath we will need to replace ‘<project_name>‘ and ‘<project_description>‘.
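If you’d like to sanity-check the template and endpoint outside of UiPath first, here’s a minimal Python sketch using the requests library. The organisation name and PAT are placeholders, and the username is left blank because only the PAT matters for Basic Authentication:

import requests

organisation = 'my-org'               # placeholder organisation name
pat = '<personal_access_token>'       # the PAT generated earlier

url = 'https://dev.azure.com/{}/_apis/projects?api-version=6.0'.format(organisation)

# Same structure as new_project_template.json, with the placeholders filled in
body = {
    'name': 'MyNewProject',
    'description': 'Created via the Azure DevOps REST API',
    'capabilities': {
        'versioncontrol': {'sourceControlType': 'Git'},
        'processTemplate': {'templateTypeId': 'adcc42ab-9882-485e-a3ed-7678f01f66bc'}
    }
}

# Basic auth: any username works, the PAT is the password
response = requests.post(url, json=body, auth=('', pat))
print(response.status_code, response.json())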

The ‘new_repository_template.json‘ file should contain the following:

{
	"name": "<repository_name>",
	"project": {
		"id": "<project_id>"
	}
}

In UiPath we will need to replace ‘<repository_name>‘ and ‘<project_id>‘.

The ‘new_workitem_template.json‘ file should contain the following:

[
	{
		"op": "add",
		"path": "/fields/System.Title",
		"from": null, 
		"value": "<story>"
	},
	{
		"op": "add",
		"path": "/fields/System.AreaPath",
		"from": null, 
		"value": "<project>"
	},
	{
		"op": "add", 
		"path": "/fields/System.IterationPath", 
		"from": null, 
		"value": "<project>\\Iteration 1"
	}, 
	{
		"op": "add",
		"path": "/fields/System.State",
		"from": null,
		"value": "<story_state>"
	}
]

In UiPath we will need to replace ‘<story>‘, ‘<project>‘ and ‘<story_state>‘.
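The work item call works the same way, except that the body is a JSON-patch document. Here’s a rough Python sketch mirroring the template above (organisation, project, work item type and PAT are placeholders; the state field is left out so the item is created in its default state):

import json
import requests

organisation = 'my-org'          # placeholder
project = 'MyNewProject'         # placeholder
pat = '<personal_access_token>'

# $Task is the work item type; substitute $Bug, $User%20Story, etc. as needed
url = 'https://dev.azure.com/{}/{}/_apis/wit/workitems/$Task?api-version=6.0'.format(organisation, project)

patch = [
    {'op': 'add', 'path': '/fields/System.Title', 'from': None, 'value': 'My first work item'},
    {'op': 'add', 'path': '/fields/System.AreaPath', 'from': None, 'value': project},
    {'op': 'add', 'path': '/fields/System.IterationPath', 'from': None, 'value': project + '\\Iteration 1'}
]

# Note the JSON-patch content type expected by the work item endpoints
response = requests.post(
    url,
    data=json.dumps(patch),
    headers={'Content-Type': 'application/json-patch+json'},
    auth=('', pat)
)
print(response.status_code)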

The ‘link_workitem_template.json‘ file should contain the following:

[
	{
		"op": "test",
		"path": "/rev",
		"value": 1
	},
	{
		"op": "add",
		"path": "/relations/-",
		"value": {
			"rel": "System.LinkTypes.Hierarchy-Reverse",
			"url": "https://dev.azure.com/<organisation>/_apis/wit/workItems/<story_id>",
			"attributes": {
				"comment": "<comment>"
			}
		}
	},
	{
		"op": "add",
		"path": "/fields/System.State",
		"from": null,
		"value": "<story_state>"
	}
]

In UiPath we will need to replace ‘<comment>‘, ‘<story_id>‘ and ‘<story_state>‘.

UiPath ‘Create Project’ Workflow

Create a folder in the UiPath solution folder named Projects and within the folder, create a ‘CreateProject.xaml‘ workflow:

Let’s start by adding a ‘Read Text File‘ activity and assigning a few variables:

The NewProjectURL above is assigned the value of "https://dev.azure.com/" + Organisation + "/_apis/projects?api-version=6.0". The Organisation variable should correspond to your Organisation name in Azure DevOps.

Next, add a Multiple Assign activity to replace the project name and project description placeholder with the appropriate names:

Make sure that the UiPath.Web.Activities library is installed before trying to add the HTTP Request Activity:

The properties of the HTTP Request activity should include the following:

When using Basic Authentication here, only the Personal Access Token matters: it is used as the password, so the username can be set to literally anything and the request will still authenticate, as long as the personal access token is valid.

Add the following as headers:

Now add a ‘Deserialise JSON’ activity:

The ‘Create Project’ sequence should look like this:

UiPath ‘Create Repository’ Workflow

Create a folder in the UiPath solution folder named Repositories and within the folder, create a ‘CreateRepository.xaml‘ workflow:

Let’s start by adding a ‘Read Text File‘ activity and assigning a few variables:

The NewRepositoryURL above is assigned the value of "https://dev.azure.com/" + Organisation + "/" + ProjectID + "/_apis/git/repositories?api-version=6.0". The Organisation variable should correspond to your Organisation name in Azure DevOps and the Project ID should correspond to the ID of the project in Azure DevOps.

Next, add a Multiple Assign activity to replace the repository name and project ID placeholder with the appropriate names:

Make sure that the UiPath.Web.Activities library is installed before trying to add the HTTP Request Activity:

The properties of the HTTP Request activity should include the following:

When using Basic Authentication here, only the Personal Access Token matters: it is used as the password, so the username can be set to literally anything and the request will still authenticate, as long as the personal access token is valid.

Add the following as headers:

Now add a ‘Deserialise JSON’ activity:

The ‘Create Repository’ sequence should look like this:

The process of creating the Work Items workflows is very similar to the two mentioned above. Therefore, I’ll go through the next two workflows in a little less detail.

UiPath ‘Create Work Item’ Workflow

Create a folder in the UiPath solution folder named Work Items and within the folder, create a ‘CreateWorkItem.xaml‘ workflow:

Let’s start by adding a ‘Read Text File‘ activity and assigning a few variables:

The NewWorkItemURL above is assigned the value of "https://dev.azure.com/" + Organisation + "/" + ProjectID + "/_apis/wit/workitems/$" + Type + "?api-version=6.0". The Organisation variable should correspond to your Organisation name in Azure DevOps and the Project ID should correspond to the ID of the project in Azure DevOps.

Next, add a Multiple Assign activity to replace the story name, story state and project name placeholder with the appropriate names:

Similar to the Create Project and Create Repository sections above, add an HTTP Request activity with the following properties:

Please note that the body format is application/json-patch+json.

When using Basic Authentication here, only the Personal Access Token matters: it is used as the password, so the username can be set to literally anything and the request will still authenticate, as long as the personal access token is valid.

Add the following as headers:

Now add a ‘Deserialise JSON’ activity:

The ‘Create Work Items’ sequence should look like this:

UiPath ‘Link Work Item’ Workflow

Create a workflow named ‘LinkWorkItem.xaml‘ in the Work Items folder, then add a ‘Read Text File‘ activity and assign a few variables:

The LinkWorkItemURL above is assigned the value of "https://dev.azure.com/" + Organisation + "/" + ProjectID + "/_apis/wit/workitems/" + TaskID + "?api-version=5.0". The Organisation variable should correspond to your Organisation name in Azure DevOps and the Project ID should correspond to the ID of the project in Azure DevOps. The Task ID should be the ID of the Task you would like to link to the story.

Next, add a Multiple Assign activity to replace the story name, story state and organisation name placeholder with the appropriate names:

Similar to the Create Project and Create Repository sections above, add an HTTP Request activity with the following properties:

Please note that the body format is application/json-patch+json.

When using Basic Authentication here, only the Personal Access Token matters: it is used as the password, so the username can be set to literally anything and the request will still authenticate, as long as the personal access token is valid.

Add the following as headers:

Now add a ‘Deserialise JSON’ activity:

The ‘Link Work Items’ sequence should look like this:

The above solution is available on GitHub.

If you have any questions, issues or feedback, drop a comment below or reach out to jacqui.jm77@gmail.com

Featured

[RPA] Using SQL User-Defined Table Types in UiPath as a Component of a Data Exchange Architecture

Level of Difficulty: Intermediate – Senior.

This post will assist in understanding what SQL user-defined table types (UDTTs) are and how they can be used within UiPath as a part of a Data Exchange Architecture (DEA). Following explanations of UDTTs and DEA approaches, we will do a practical implementation walkthrough.

User-Defined Table Types (UDTTs)

User-defined table types are table schema definitions that may be created by users in order to store temporary data. In essence, this means that you can send a table through to a stored procedure as a parameter, the same way you would an integer or string value, without having to write the data to a table first. The benefit of this is that you don’t need to store data on a permanent medium in order to perform SQL queries on the data (like joins or upsert merges, for example).

Data Exchange Architecture (DEA)

A Data Exchange Architecture component, in this context, refers to the use of Extract Transform Load (ETL) or Extract Load Transform (ELT) in order to maintain a constant and usable state of reliable data at any point in time. This can be implemented in many different ways, depending on the existing data architecture within your environment.

One way of implementing ETL would be to create three database schemas, each responsible for its own phase of ETL:

  1. Landing: (Extract)
    • All raw data is landed in the relevant tables within this schema. No changes are to be made to this data (possibly done through a truncate and load process)
  2. Staging: (Transform)
    • Data from landing tables are used to populate staging tables and data is cleaned up (possibly done either using truncate and load or using upsert merge statements)
  3. Loading: (Load)
    • Data from staging tables are transformed according to the business rules and populated into the tables within this schema (usually done using upsert merge statements)

Another common approach is to combine Landing and Staging into one schema (usually Staging). The idea behind this approach is that the Staging tables should be the intermediate storage between data sources and data targets.

Use-Case

Let’s work through a practical example together. We have two datasets: an Excel file (a few hundred records) containing Azure DevOps project information and a SQL table (a few thousand records) containing additional project information, like who the project manager is, what the project budget is, and so on. For context, not all of the projects are recorded in the Azure DevOps dataset as not all projects are currently active, while the SQL table contains historical project data too.

We DO want to:

We would like to join the data that exists in Excel with the data in SQL in order to enrich the dataset needed in our UiPath process.

We DO NOT want to:

We do not want to query the Excel and SQL data into UiPath datatables and do the joins in UiPath, because the UiPath “Join Datatables” activity iterates through each record in each datatable to find matching rows. Given the amount of data we are working with, that approach would perform poorly in terms of time. SQL is really good at processing data quickly, so it would be a good idea to pass the Excel data into SQL, but we don’t want to create a permanent table for it if we don’t need to.

So we…

We do stuff… In SQL and in UiPath.

In SQL

We use UDTTs and stored procedures. We can create a table schema definition for the Excel file in SQL (ID, name, description and URL) as a type. The UDTT create statement would look something like:

CREATE TYPE [Project_UDTT] AS TABLE 
(
	[id] [uniqueidentifier] NOT NULL,
	[name] [varchar](255) NOT NULL,
	[description] [varchar](max) NULL,
	[url] [varchar](max) NOT NULL
);
GO

Next, we would need a stored procedure to parse the UDTT through and retrieve the joined results, which would look something like:

CREATE PROCEDURE [JoinProjectInformation_SP]
(
	@Projects [Project_UDTT] READONLY
)
AS
BEGIN
	SET NOCOUNT ON;

	SELECT		p.* -- all columns in @Projects
				,pi.* -- all columns in ProjectInformation

	FROM		@Projects p
	JOIN		ProjectInformation pi
	ON			p.id = pi.ProjectID
END;
GO

Now we go do more cool stuff in UiPath.

In UiPath

We will need to add an Execute Query activity (Execute Non-Query would work more or less the same, if that’s the sort of thing that you’re into) to the desired sequence. If you can’t find the Execute Query activity, make sure that you have the UiPath.Database.Activities package installed.

User-Defined Table Types (UDTTs) in UiPath

First, we need to add the stored procedure name (“JoinProjectInformation_SP”) to the SQL input argument and change the CommandType to StoredProcedure. Assign the output DataTable argument to something along the lines of dtJoinedProjects.

Next, we need to add the results (of DataTable type) from “Get Projects” as a parameter to the stored procedure. We do that by configuring Parameters:

Ensure that the Parameter Name is the same as it is declared in the SQL stored procedure but without the “@” prefix. Once you’re done, select OK and run the workflow.
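If you’d like to test the UDTT and stored procedure outside of UiPath first, here’s a rough Python sketch using pyodbc, assuming a recent pyodbc version and the Microsoft ODBC driver (which allow a table-valued parameter to be passed as a list of tuples). The connection details and sample row are placeholders:

import pyodbc

# Placeholder connection details
connection = pyodbc.connect(
    'DRIVER={ODBC Driver 17 for SQL Server};'
    'SERVER=<server>;DATABASE=<database>;Trusted_Connection=yes;'
)
cursor = connection.cursor()

# Each tuple maps to one row of Project_UDTT: (id, name, description, url)
projects = [
    ('6ce954b1-ce1f-45d1-b94d-e6bf2464ba2c', 'Project A', 'Sample project', 'https://dev.azure.com/org/ProjectA')
]

# The whole list is passed as a single table-valued parameter
cursor.execute('{CALL JoinProjectInformation_SP (?)}', (projects,))

for row in cursor.fetchall():
    print(row)

connection.close()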

Did this work for you? Is there something you’d like to add? Drop a comment below or reach out to me jacqui.jm77@gmail.com.

Featured

[Automation] Monitoring Gmail Inbox Using Python

Level of Difficulty: Beginner – Senior.

Many automation solutions make use of the functionality provided by mail services as it serves as an important element that allows for communication between humans and the automation process. There are many benefits provided by using Google Mail (Gmail), one of which is cost – for that reason, this post will focus on providing a step-by-step guide of how to monitor emails coming into your Gmail inbox, with the ability to monitor specific labels.

It is also important to note that there are tools and platforms that make it much easier to perform these actions but as developers, we know that life cannot always be “easy”. This post aims at empowering the “not easy” solutions.

What are the steps?

The steps that we will be following are:

  1. Ensure that your Gmail account is configured correctly
  2. Import the libraries
  3. Gather variable values
    1. Email address
    2. Password
    3. Label
    4. Search Criteria
  4. Define methods
    1. Get Body
    2. Search
    3. Get Emails
    4. Authenticate
  5. Authenticate
  6. Extract mails
  7. Extract relevant information from results

Deep Dive

Let’s dive deeper into the steps listed above.

Ensure that your Gmail account is configured correctly

My first few attempts at this left me pulling my hair out with an “Invalid Credentials” error. Upon much Googling and further investigation, I found that it is caused by a Google Account setting. This is quite easily fixable.

In order to interact with my account, I had to allow less secure apps (you can access that setting here):

Allowing less secure apps to communicate with my Gmail account

If you are still experiencing problems, here is a more extensive list of troubleshooting tips.

Import the libraries

Now let’s move over to Python and start scripting!

First, let’s import the libraries that we’ll need:

import imaplib, email

Gather variable values

In order to access the mails from the Gmail account we will need to know the answers to the following questions:

  1. Which Google account (or email address) do we want to monitor?
  2. What is the password to the above account?
  3. What label do we want to monitor?
  4. What is the search criteria?

The best way to find out is to ask and luckily we can do that through code:

imap_url = 'imap.gmail.com' # This is static. We don't ask the questions we know the answer to

user = input("Please enter your email address: ")
password = input("Please enter your password: ")
label = input("Please enter the label that you'd like to search: ") # Example: Inbox or Social
search_criteria = input("Please enter the subject search criteria: ")
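If you’d prefer not to have the password echoed in the console, Python’s built-in getpass module can be swapped in for that one prompt (an optional tweak):

import getpass

# Optional: prompt for the password without echoing it to the console
password = getpass.getpass('Please enter your password: ')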

Define Methods

It becomes easier to break some of the reusable elements up into methods (or functions) so that larger implementations of this solution are equipped to be easily scalable. Stephen Covey teaches us that starting with the end in mind is one of the habits of highly effective people – some might even call it proactive design thinking. The point is that it is good to think ahead when developing a solution.

Enough rambling, here are the functions:

# Retrieves email content
def get_body(message): 
    if message.is_multipart(): 
        return get_body(message.get_payload(0)) 
    else: 
        return message.get_payload(None, True) 
    
# Search mailbox (or label) for a key value pair
def search(key, value, con):  
    result, data = con.search(None, key, '"{}"'.format(value)) 
    return data 
  
# Retrieve the list of emails that meet the search criteria
def get_emails(result_bytes): 
    messages = [] # all the email data are pushed inside an array 
    for num in result_bytes[0].split(): 
        typ, data = con.fetch(num, '(RFC822)') 
        messages.append(data)
  
    return messages 

# Authenticate
def authenticate(imap_url, user, password, label):
    
    # SSL connection with Gmail 
    con = imaplib.IMAP4_SSL(imap_url)  

    # Authenticate the user through login
    con.login(user, password)  

    # Select the label to search for mails under
    con.select(label)

    # Return the connection so it can be reused by search() and get_emails()
    return con

Authenticate

Before we can extract mails, we first need to call the authenticate method we just created, pass through the answers to the questions asked further up, and keep hold of the connection it returns:

con = authenticate(imap_url, user, password, label)

Extract mails

Next, we need to call the search and get_mails methods to extract the mails:

# Retrieve mails
search_results = search('Subject', search_criteria, con)
messages = get_emails(search_results)
  
# Uncomment to view the mail results
#print(messages)

Extract relevant information from results

Now, let’s work through the results and extract the subject using string manipulation. Feel free to add a “print(subject)” statement underneath the assignment of “subject” for debugging purposes:

for message in messages[::-1]:  
        
    for content in message: 
                       
        if type(content) is tuple:  
  
            # Encoding set as utf-8 
            decoded_content = str(content[1], 'utf-8')  
            data = str(decoded_content) 
            
            # Extracting the subject from the mail content
            subject = data.split('Subject: ')[1].split('Mime-Version')[0]
                                      
            # Handle unicode encode errors when working with the mail body
            try:  
                indexstart = data.find("ltr") 
                
                data2 = data[indexstart + 5: len(data)] 
                
                indexend = data2.find("</div>") 
                
                # Uncomment to see what the content looks like
                #print(data2[0: indexend]) 
  
            except UnicodeEncodeError as e: 
                pass


Did this work for you? Feel free to drop a comment below or reach out to me through email, jacqui.jm77@gmail.com.

The full Python script is available on Github here.

Featured

[Automation] Working With The UiPath Automation Hub API And Postman (Then Getting The Code)

Level of Difficulty: Beginner – Senior.

Does your organisation use Automation Hub to capture and consolidate automation ideas and collateral? Have you ever wanted to interact with the data you have in Automation Hub in an automated manner? Well UiPath makes that easier now with the Automation Hub API – no more front-end automations needed to access your data.

Here’s how it works. If you’re looking for specific queries that aren’t covered in this blog post, check out this Postman collection.

What are the steps?

The steps that we will be following are:

  1. Identifying the URL
  2. Compiling and populating the bearer token
  3. Adding the necessary headers
    1. x-ah-openapi-auth = openapi-token
    2. x-ah-openapi-app-key (only if you’ve assigned the app key when generating the token)
  4. Grabbing the results
  5. Generating the code for reuse in an automation

Deep Dive

Let’s dive deeper into the steps listed above.

Identify the URL

In order to identify which URL (or API Call) would achieve the task at hand, take a look at the different API Calls available here.

For the sake of this post, we are going to list all automations in our instance. Thus, we will be using the following API call:

https://automation-hub.uipath.com/api/v1/openapi/automations

Pop this URL into Postman:

Compile and Populate The Bearer Token

First things first. Make sure Open API is set up on your tenant. You can do that as follows:

  1. Navigate to the Admin Console within Automation Hub
  2. Hover over Platform Setup
  3. Select Open API

Next, you’re going to want to hit Generate Token and enter the necessary details:

You’re also going to want to take note of your tenant ID because that’s what we are going to use to compile the Bearer Token:

The Bearer Token is made up by concatenating your tenant ID and your generated token separated by “/”.

An example is: 
46b6c342-3ab4-11e9-9c19-37a5980a67e8/ce91aa04-fc61-49e9-bec5-cb237efb4bda where:

  • 46b6c342-3ab4-11e9-9c19-37a5980a67e8 is the unique Tenant ID
  • ce91aa04-fc61-49e9-bec5-cb237efb4bda is the specific token generated for the user account

Add the Bearer Token under the “Authorization” tab in Postman with the Type set to “Bearer Token“:

Add Headers

If you have set up app key as an extra security measure when you generated the token, you’ll need to add “x-ah-openapi-app-key” to your headers and assign it to the value you created.

Regardless of whether you populated the app key or not, you’ll need to add “x-ah-openapi-auth” to your headers and assign it to “openapi-token“:
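If you’d like to test the call straight from Python as well, here’s a minimal requests sketch. The tenant ID and token below are the example values from earlier in this post, not real credentials:

import requests

# Example values from above - replace with your own tenant ID and generated token
tenant_id = '46b6c342-3ab4-11e9-9c19-37a5980a67e8'
api_token = 'ce91aa04-fc61-49e9-bec5-cb237efb4bda'

headers = {
    # Bearer token = '<tenant ID>/<generated token>'
    'Authorization': 'Bearer {}/{}'.format(tenant_id, api_token),
    'x-ah-openapi-auth': 'openapi-token',
    # 'x-ah-openapi-app-key': '<app key>'  # only if you set one when generating the token
}

response = requests.get(
    'https://automation-hub.uipath.com/api/v1/openapi/automations',
    headers=headers
)

print(response.status_code)
print(response.json())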

Grab The Response

Once you’ve hit send in Postman, you make a sacrifice to the universe and wait for your results which should look something liiiiikkkkeeee this:

Generate Code For Automation

Now that you’re getting results, you’ll most likely want to get this automated. Well then let’s get the code (for whatever language we want) from Postman.

Click on code in the top(ish) right-hand corner in Postman:

Select your language then copy and paste the code:

Did you get stuck anywhere? Was this helpful?

If you have any questions, issues or feedback, drop a comment below or reach out to jacqui.jm77@gmail.com

Featured

[Data Visualisation] Creating a Hamburger Menu for Filters in Power BI

Level of Difficulty: Beginner – Senior.

Often we create Power BI reports that require some sort of filtering mechanism but filters take up a lot of real-estate that could be used for other visuals instead. What if we could hide and show a filter pane using a “hamburger” menu mechanism?

We can and here’s how.

What are the steps?

The steps to using a hamburger menu to hide and show a filter pane in Power BI are:

  1. Create the report layout and set up the filters
  2. Add two hamburger menu images
  3. Create a “hide” bookmark
  4. Create a “show” bookmark
  5. Add the bookmarks to the actions of each image
  6. Align the two images

Create the report layout and set up the filters

To create the filters panel, add a rectangle (Home > Insert > Shapes) before adding and aligning the filter visuals on top of the rectangle.

An example would be something like this:

Add two hamburger menu images

Next we would want to add two hamburger menu icons (preferably SVG or PNG images with transparent backgrounds) next to one another (Home > Insert > Image).

Create a hide bookmark

In order to create a bookmark, you would need to ensure that the bookmark pane and selection pane are visible. You can do this by navigating to the View tab and ensuring that the Bookmarks Pane and the Selection Pane are both checked. This should allow you to see the following:

To create a hide bookmark you would need to hide all of the filters, the rectangle block and one of the hamburger menu images using the Selection Pane. To hide a visual (or an element), you can either click on the eye icon next to the visual in the selection pane or you can click on the element on the report and select hide on the selection pane.

Once all necessary visuals have been hidden, you should see this:

Next, you are going to want to bookmark the view that you are currently looking at by selecting “Add” in the Bookmarks Pane. This will result in “Bookmark 1” being created:

You can then rename the bookmark to “Hide Filters” by double clicking on the bookmark or by selecting the three dots next to the bookmark name (on the right) and selecting Rename:

Create a show bookmark

To create a “Show” bookmark, we are going to ensure that all of our filters are visible again:

Next we are going to hide the hamburger image that was visible in the “Hide” bookmark:

Then select “Add” in the Bookmark Pane and rename the bookmark to “Show Filters“:

Adding the bookmarks to the actions of each image

Now we need to add these bookmarks as actions to the correct hamburger image. Let’s start with the image that’s still visible. When we click that image, we are expecting our filters to Hide. So we want to link this image to the “Hide Filters” bookmark.

To do this, click on the image, navigate to the Format Image pane, ensure that Action is On (if it is Off, click on the round dot and it will turn on), expand Action, change Type to “Bookmark” and select the “Hide Filters” bookmark:

If you hover over the visible image, there should be a tool tip that appears:

If you hold Ctrl and click on the image, it will apply the bookmark and the filters (with its hamburger menu image) should disappear.

Now let’s repeat these steps for the image that is currently visible and assign the “Show Filters” bookmark to its Action:

Now you can lay the one hamburger image on top of the other so that they appear to be one image (you may need both images to be visible for this). Reorganise your report layout and play around with the other fancy things that you can do with bookmarks!

Just a note: It is possible to keep values of filters between bookmarks. It would require manipulation of bookmark properties. For this scenario, the data option would need to be deselected:

This report is available on GitHub, here.

If you got stuck or have feedback, please feel free to drop a comment below or reach out to jacqui.jm77@gmail.com for help!

Featured

[RPA] Uploading Files to Azure Blob Storage in UiPath Processes

Level of Difficulty: Intermediate – Senior.

This post will assist in using Azure Blob Storage to store files used in UiPath processes.

A funny thing happened the other day… Jackson broke the news to the team that his laptop had been stolen. After some panic, he let everyone know that at least the latest solution was checked into source control (and is available on the Orchestrator). That brought some relief, until Barbra asked whether the lookup files were in source control too. With a worried look on his face, Jackson said, “No… Neither are any of the reference files that are referenced in the queue item.”

After much excitement, Jeremy (the boss man) mandated that a central storage repository be implemented so that this doesn’t happen in the future and so that local development isn’t relied upon. After some investigation, the team decided that Azure Storage would be a great way to go, as it fits in with the general architectural direction the company is heading in. Here’s the question though: how?

What are the steps?

The steps that we will be following are:

  1. Create an App Registration
  2. Assign the correct API permissions
  3. Gather the necessary info
    1. Application ID
    2. Tenant ID
    3. Client Secret
  4. Create Azure Storage Resource
  5. Use the created resources in UiPath
    1. Add Azure Scope
    2. Add Get Storage Account Activity
    3. Add Get Storage Account Key Activity
    4. Add Get Blob Container
    5. Add Upload Blob

Deep Dive

Let’s dive deeper into the steps listed above.

Create an App Registration

Once you have logged into Azure (https://portal.azure.com/), the first thing you want to do is create an App in App Registration and you can do that by doing the following:

  1. Go to Azure Active Directory
  2. Go to App Registrations
  3. Click Register an application
  4. Provide a name and select authentication
  5. Hit Register

Next, you want to add the correct API permissions.

Assigning The Correct API Permissions

You will need to do the following to assign the correct API permissions:

  1. Inside the newly created App Registration, select API permissions
  2. Select Add a permission
  3. Add Azure Storage, user_impersonation

Now that you’ve got that set up, you want to get the Application ID, Tenant ID and Client Secret.

Gather The Necessary Info

In order to use the app that has just been registered, you’ll need to collect certain info. The info can be accessed as follows:

  1. Inside the newly created App Registration, select Overview
  2. Copy the Application (client) ID and the Directory (tenant) ID as it will be needed later
  3. Click on Client secrets, generate a secret
  4. Copy the secret and paste it somewhere for when you need it
  5. Click on Overview in the Azure Active Directory and copy the Tenant ID
  6. The Subscription ID will be visible in the Overview section of the Storage Account (once it has been created)

Now you should be ready to create your storage account.

Creating Azure Storage Resource

Hit Home in the top left hand corner and navigate to Resources.

  1. Hit new resource
  2. Search for Storage account
  3. Click Storage account – blob, file, table, queue
  4. Click Create
  5. Select Subscription and resource group (create one if necessary)
  6. Select storage account name, location, account kind (BlobStorage) and Replication (Read-access geo-redundant storage (RA-GRS))

Create Container

Great! Storage account created – now to create a container within the storage account:

  1. Open newly created Storage Account resource
  2. Click on Containers
  3. Click Add Container
  4. Give it a name and access level

Storage account created, container created, but how does the App Registration tie in? Well, we still need to add it, so let’s do that.

Give the App Registration access to the Storage Account

In order to give the Registered App access to the newly created Storage Account, do the following:

  1. Go to the Storage Account
  2. Select Access Control (IAM)
  3. Select Add, Role Assignment
  4. Assign Role to Storage Blob Data Contributor
  5. Search for the App Registration in the Select field
  6. Now add another Role and assign the App Registration to the Contributor Role

Now that we have everything set up in Azure, let’s get started in UiPath.

Using Azure Storage in UiPath

In order to use the resources created on Azure, the following needs to be done:

  1. Create a new process and give it a name
  2. Add the UiPath.Azure.Activities library
  3. Add the Azure Scope to the sequence
  4. Enter the following details gathered from above:
    1. ClientID
    2. ClientSecret
    3. SubscriptionID
    4. TenantID
  5. Add Get Storage Account Activity inside of the Azure Scope, with the following properties
    1. StorageAccountName
    2. ResourceGroupName
    3. Output StorageAccount to a variable
  6. Add Get Storage Account Key Activity below, with the following properties
    1. StorageAccount
    2. Output StorageAccountKey to a variable
  7. Add Get Blob Container with the following properties
    1. ContainerName
    2. StorageAccount
    3. StorageAccountKey
    4. Output BlobContainer to a variable
  8. Add Upload Blob from File with the following properties
    1. BlobContainer
    2. BlobName
    3. ContentType
    4. FilePath
    5. Type
    6. Output Blob to a variable
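UiPath aside, the same upload can be sketched in Python with the azure-identity and azure-storage-blob packages, using the values gathered in the steps above (the account, container, blob and file names here are placeholders):

from azure.identity import ClientSecretCredential
from azure.storage.blob import BlobServiceClient

# Values gathered from the App Registration and Storage Account (placeholders)
tenant_id = '<tenant id>'
client_id = '<application (client) id>'
client_secret = '<client secret>'
account_name = '<storage account name>'

credential = ClientSecretCredential(tenant_id, client_id, client_secret)
service = BlobServiceClient(
    account_url='https://{}.blob.core.windows.net'.format(account_name),
    credential=credential
)

# Upload a local file into the container as a blob
blob = service.get_blob_client(container='<container name>', blob='LookupFile.xlsx')
with open(r'C:\Automation\LookupFile.xlsx', 'rb') as data:
    blob.upload_blob(data, overwrite=True)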

Now Jeremy, Jackson and the rest of the team can go about their business without worrying about stolen laptops and lost files.

The above solution is available on GitHub.

If you have any questions, issues or feedback, drop a comment below or reach out to jacqui.jm77@gmail.com

[Automation] Using Power Automate as a Data Integration Tool with Excel (for Bulk Processing)

Level: Beginner – Senior.

Power Automate connects into multiple different platforms and systems, making it a very powerful integration tool. Power Automate has been positioned by Microsoft as a platform that aims to facilitate Robotic Process Automation (RPA). Although there is still some work to be done before the Power Automate toolset can match competitors, it does make for a very Powerful integration tool.

For example, updating an Excel spreadsheet by adding multiple lines one at a time in Power Automate (possibly updating certain lines with content from SharePoint) would take significantly more time than handing the bulk of the processing off to an API. This would require developing your own API (in the form of a Web API hosted as an Azure API Service or an Azure Function).

For the purpose of this post, I will be demonstrating how to use Power Automate to integrate SharePoint with an Azure Function that does bulk processing. To set up the Azure Function that will handle the bulk process, check this post out.

What do we need?

To get this working we will need some sort of trigger, which we will keep Manual for now (you can replace it with a SharePoint trigger, for example), along with the HTTP Request action and the SharePoint Create file action. Please note that the HTTP Request action is a Premium connector.

Getting Started

Navigate to Power Automate and create a blank Instant cloud flow:

Name your flow then select Manually trigger flow before hitting Create.

Add an action and search for HTTP. Click on the HTTP action:

To configure the HTTP action, assign the HTTP method, the URL and the body:

Hit Save and then Test the flow manually:

You should expect to be seeing this level of gibberish in the body of the output once the test has run successfully (with status code 200):

If you received a 401, make sure you’ve added your authorisation headers or that your Function is set to Authorization.Anonymous (and not Authorization.Function). If you are receiving a 204 code, it means that no content is being returned, so make sure that you are able to read the file from Blob storage. That’ll be an issue on the Azure Function (or API) side of things.

We will need to check that the status code is 200 before adding the SharePoint create file action:

Now let’s configure this action with the Site URL, folder path, file name and file content. Because we will be using the output body from the HTTP request to populate the SharePoint file content, we will need to add an Apply to each loop that encloses the Create file action:

The expression to convert the output result to file content is:

base64ToBinary(items('Apply_to_each:_result_in_output_body'))

Now manually test the flow to see if your file creates correctly. You should expect to see the following:

Did this work for you? Feel free to leave a comment below.

[Automation] Retrieving a List of Mail Senders From a Shared Mailbox (Using Outlook and Python)

Level of Difficulty: Beginner – Senior.

Have you ever had a use case where you have a shared mailbox setup on your Outlook but you don’t have the credentials to the account behind the mailbox (if one exists)? Well you can get around that by running the Python script detailed in this post.

The approach to this solution is to use Outlook, which should already be set up on your computer, with Python to pull a list of email addresses of senders and write it to Excel.

For this, you’ll need the pywin32 library (which provides win32com) and, only if you want to write the addresses and display names to Excel, the Pandas library:

import win32com.client
import pandas as pd

# Outlook Connection
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
folder = outlook.Folders.Item("@<Mailbox Name>")
inbox = folder.Folders.Item("Inbox")
messages = inbox.Items

# Initialise Lists
senders = []
addresses = []

# Iterate through messages for sender name and address
for message in messages:
    
    try:
        if message.Class==43:
            

            if message.SenderEmailType=='EX':
            
                if message.Sender.GetExchangeUser() != None:
                    addresses.append(message.Sender.GetExchangeUser().PrimarySmtpAddress)
                    
                else:
                    addresses.append(message.Sender.GetExchangeDistributionList().PrimarySmtpAddress)
            
            else:
                

                addresses.append(message.SenderEmailAddress)
                
            senders.append(message.Sender)
            
    except Exception as e:
        
        print(e)

# Create Excel file with results
df = pd.DataFrame()
df['Sender'] = senders
df['Address'] = addresses
df.to_excel('Addresses.xlsx')
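If you only want each sender to appear once, the DataFrame can be de-duplicated on the address before writing (a small optional tweak):

# Optional: keep one row per unique address before writing to Excel
df = df.drop_duplicates(subset=['Address'])
df.to_excel('Addresses.xlsx', index=False)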

Did this work for you? Feel free to drop a comment below with suggestions or questions.

[Azure] Creating an Azure Blob Storage Resource and Container

Level of Difficulty: Junior.

Do you have files that you would like to store in Azure? Azure Blob Storage is perfect for that. Here’s the 101 on Azure Blob Storage that will be covered in this post:

  1. Create a Resource Group
  2. Create an Azure Blob Storage Account Resource
  3. Create a Container
  4. Upload Files
  5. Retrieve Connection Strings and Keys

Let’s take a deep dive.

Create a Resource Group

First, you should create a resource group that will group any resources related to a specific project:

Create Azure Blob Storage Account

Now that we have a place to store all of our resources, let’s create some Blob Storage. Start by navigating to Home and selecting Create a resource:

Search for storage account and click on the suggested resource named Storage account that pops up right under the search box:

You should be navigated to the Storage Account resource with the option to Create the resource:

Ensure that the correct subscription and resource group are filled in, along with the name of the storage account and the region, before clicking Review + create:

Make sure you have taken note of the information in the Advanced, Networking, Data protection and Tags tabs, before confirming the creation of the resource.

Create Container and Upload File

Navigate to the newly created storage account, select Containers and create a new container. Give the container a Name before hitting Create:

Click on the newly created container and select Upload. Navigate to the file that you would like to upload and select Upload.

Your file should now be visible and ready for use:

Retrieve Connection String or Keys

Navigate back to the Storage Account resource (make sure you aren’t in a container), then click on Access keys to find your keys and connection strings.
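Once you have the connection string, the container can also be reached from code. Here’s a minimal Python sketch using the azure-storage-blob package (the connection string and container name are placeholders):

from azure.storage.blob import BlobServiceClient

# Paste the connection string copied from the Access keys blade
connection_string = '<your storage account connection string>'

service = BlobServiceClient.from_connection_string(connection_string)
container = service.get_container_client('<container name>')

# List the blobs that have been uploaded to the container
for blob in container.list_blobs():
    print(blob.name)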

Did this work for you? Did you get stuck somewhere?

Feel free to leave a comment below if you did or if you have cool use cases you’d like to share.