[Automation]: Using Power Automate with SQL Server (Primary keys that aren’t int)

Level of Difficulty: Beginner – Senior.

Are you in a situation where you’re using a SQL database and you’ve already designed, created and gone through all 700 iterations of the review? Primary keys already created of type uniqueidentifier with default value of NEWID(). Then Fred comes with his clever idea of adding some sort of automation workflow on top of the database – you know, like Power Automate?

So you go down a tangent of getting info together around Power Automate triggers and actions, specific to SQL Server. You come across the two triggers: When an item is created (V2) and when an item is modified (V2).

You decide on using SQL Server “When an item is created (V2)” as a trigger and Office 365 Outlook “Send an Email (V2)” as an action.

You go into Power Automate and you connect it up to your database but when you hit test and “I’ll perform the trigger action”, you wait… And wait… And wait… ? … ? … And wait. Nothing’s happening?

You do more research and you find that most people use a primary key of type int and it works for them. Now you are running ahead with everything that needs to change in order to get this to work. You think of workarounds like creating views, using a different trigger, using a different automation type, heck… Even using a different workflow tool. But wait! That’s not necessary.

Here’s what to do:

  • Alter your table design in SQL Server Management Studio (SSMS)
  • Add another column, named something like “PowerAutomateID”
    • Make the type int
    • Untick “Allow Null”
    • Scroll down in the table properties and set Identity Specification to Yes
  • Save
  • Go test your flow

There’s no need to make the new column a Primary Key as the Power Automate trigger looks for the column that has Identity Specification set. I had a web form built on top of my database and none of the CRUD (create, read, update and delete) functionality had to be changed. It all continued to work properly, even the Power Automate process.
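If you would rather script the change than click through the SSMS table designer, here's a hedged T-SQL equivalent (the table name is a placeholder for your own):

```sql
-- add an auto-incrementing int column; IDENTITY(1,1) is what sets
-- the Identity Specification that the Power Automate trigger looks for
ALTER TABLE dbo.YourTable
ADD PowerAutomateID INT IDENTITY(1,1) NOT NULL;
```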

This is definitely something I wish I knew when I first went on this journey a few weeks ago.

If this didn’t come up in your research, it’s worth landing on the Microsoft Docs that explain some of these limitations.

Still not working, or maybe you found a better way? Drop a comment below or send an email through to jacqui.jm77@gmail.com

[Et al.] Look After The BAs!

As a Developer in an Agile environment, I want to appreciate the Business Analysts (BA) so that we can continue working in a respectful and collaborative manner.

Sounds like a user story right? Well maybe it should be so that we can persistently practise this principle. Often we are faced with feuds between colleagues in the tech industry (most other industries have this too) due to a difference in opinion or even work ethics. Sometimes personalities clash, the pressure and tension might get a little too much, sometimes requirements are nearly impossible to meet in the unrealistic timelines presented to the stakeholders and sometimes those timelines were decided upon without consulting the technical team.

All of the above can be considered valid points and sometimes they raise valid concerns. Granted, it is a two way street – it is important to maintain bidirectional respect between developers and BAs.

I used to be a developer who was quick to judge non-technical colleagues based on the textbook description of their job title and the reality of how they carry it out. I have recently started working with a team of BAs who have fundamentally changed the way I see non-technical colleagues (especially BAs) who set out to do the best job that they possibly can do, in-line with their job description!

That’s good and all but what is a BA really? A Business Analyst is an agent of change. They work together with organisations to help them improve their processes and systems. As developers and technical “resources” we tend to make their life a lot more complicated than that.

Those involved in Robotic Process Automation (RPA) projects would understand why I say that the bulk of the work lies in the process understanding phase, which is generally conducted by the BA (and in some instances the Solution Architect).

I have found that developers often look down on BAs as many of them don’t possess the skill to write code. Now based on what I have experienced working with a team of highly talented BAs, I have come to realise that the greatest difference between BAs and the developers is that they communicate the same process in different ways (or languages).

Although a developer is able to translate a business requirement specification, process definition document or even a functional specification document into code, it is important to remember that it remains an in-depth depiction of the process outlined by the BA.

Okay cool… But how can I do my part as a developer? Good question! Here are a few things you should keep in mind when working with BAs:

  • We enable those who enable us. A good BA may come across as someone who tries to make your life easier when in actual fact, they are trying to simplify a process. Help your BA help you by providing assistance with some technical aspects of a situation if and where possible.
  • Appreciating and respecting anyone will generally result in improved efforts made on a project. It is a common reaction to respond more positively when feeling respected and appreciated.
  • Be patient. Many BAs may have an ambition to gather more technical understanding around certain things that you have insight over, however, they may not always understand your predicament. The best advice I have gotten over the last months is to ELI5 (Explain like I’m 5). It has helped me tremendously in translating the highly technical explanations in a way that removes frustration for myself (as the developer) and the BA(s). In the process of ELI5, enlighten and enable the BA to understand the situation for future reference. If explained well enough, once is enough and twice is a revision.
  • Learn. There is always an opportunity to learn new things. There’s nothing stopping the devs from understanding a BA’s process, or a BA from understanding the devs’ process.
  • Stick together. I cannot emphasise how important this point is. Dealing with the business (internally or externally) is strenuous enough, dealing with tension and pressure internally can often cause conflict between colleagues too. Sticking together against stakeholders and sharing the pressure as a team helps keep the team positive and often removes some of the additional pressure. Sometimes it helps just knowing that you’re not in something alone.

What happens when you have a personality clash with the BA? Get a new one? No, it doesn’t always work like that. Like I have mentioned above (and will mention further down too), this remains a two way street. Sometimes it is a good idea to discuss an ideal working situation with the BA that you may be clashing with. It is important that all parties respect and participate in the above. The responsibility does not solely lie with the developer, however, a first step goes a long way!

Great… But how do you spark a great working relationship? Get your colleague a coffee, or tea, or hot chocolate… Or water. Or a bunch of sweets.

To the BAs and other non-techies / semi-techies out there, thank you for the work that you do! You are appreciated and hopefully your dev team will show their appreciation too. Feel free to reach out to them too. It’s a two way street.

Let’s take this opportunity to be better together!

[RPA]: Uploading Files to Azure Blob Storage in UiPath Processes

Level of Difficulty: Intermediate – Senior.

This post will assist in using Azure Blob Storage to store files used in UiPath processes.

A funny thing happened the other day… Jackson broke the news to the team that his laptop was stolen. After some panic, he lets everyone know that at least the latest solution was checked into source control (and is available on the Orchestrator). That brings some relief, until Barbra asks if the lookup files are in source control too. With a worried look on his face, Jackson says, “No… Neither are any of the reference files that are referenced in the queue item.”

After much excitement, Jeremy (the boss man), mandates a central storage repository to be implemented so that this doesn’t happen in the future and so that local development isn’t relied upon. After some investigation, the team decided that Azure Storage would be a great way to go as it fits in with the general architectural direction that the company is heading in. Here’s the question though, how?

What are the steps?

The steps that we will be following are:

  1. Create an App Registration
  2. Assign the correct API permissions
  3. Gather the necessary info
    1. Application ID
    2. Tenant ID
    3. Client Secret
  4. Create Azure Storage Resource
  5. Use the created resources in UiPath
    1. Add Azure Scope
    2. Add Get Storage Account Activity
    3. Add Get Storage Account Key Activity
    4. Add Get Blob Container
    5. Add Upload Blob

Deep Dive

Let’s dive deeper into the steps listed above.

Create an App Registration

Once you have logged into Azure (https://portal.azure.com/), the first thing you want to do is create an App in App Registration and you can do that by doing the following:

  1. Go to Azure Active Directory
  2. Go to App Registrations
  3. Click Register an application
  4. Provide a name and select authentication
  5. Hit Register

Next, you want to add the correct API permissions.

Assigning The Correct API Permissions

You will need to do the following to assign the correct API permissions:

  1. Inside the newly created App Registration, select API permissions
  2. Select Add a permission
  3. Add Azure Storage, user_impersonation

Now that you’ve got that set up, you want to get the Application ID, Tenant ID and Client Secret.

Gather The Necessary Info

In order to use the app that has just been registered, you’ll need to collect certain info. The info can be accessed as follows:

  1. Inside the newly created App Registration, select Overview
  2. Copy the Application (client) ID and the Directory (tenant) ID as it will be needed later
  3. Click on Client secrets, generate a secret
  4. Copy the secret and paste it somewhere for when you need it
  5. Click on Overview in the Azure Active Directory and copy the Tenant ID
  6. The Subscription ID will be visible in the Overview section of the Storage Account (once it has been created)

Now you should be ready to create your storage account.
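Wherever you keep these values, avoid hard-coding them into the process. As a minimal sketch (the file name and key names here are my own convention, not anything Azure or UiPath prescribes), you could keep them in a JSON config that gets read back at runtime:

```python
import json

# placeholders for the values gathered above - never commit real secrets
config = {
    'ApplicationId': '<application (client) id>',
    'TenantId': '<directory (tenant) id>',
    'ClientSecret': '<client secret>',
    'SubscriptionId': '<subscription id>',
}

# write the config once
with open('azure_config.json', 'w') as f:
    json.dump(config, f, indent=2)

# the process then reads the values back instead of hard-coding them
with open('azure_config.json') as f:
    loaded = json.load(f)
```

In practice the client secret belongs in a proper secret store (such as Orchestrator assets or a key vault) rather than a plain file.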

Creating Azure Storage Resource

Hit Home in the top left hand corner and navigate to Resources.

  1. Hit new resource
  2. Search for Storage account
  3. Click Storage account – blob, file, table, queue
  4. Click Create
  5. Select Subscription and resource group (create one if necessary)
  6. Select storage account name, location, account kind (BlobStorage) and Replication (Read-access geo-redundant storage (RA-GRS))

Create Container

Great! Storage account created – now to create a container within the storage account:

  1. Open newly created Storage Account resource
  2. Click on Containers
  3. Click Add Container
  4. Give it a name and access level

Storage account created, container created, but how does the App Registration tie in? Well, we still need to add it, so let’s do that.

Give the App Registration access to the Storage Account

In order to give the Registered App access to the newly created Storage Account, do the following:

  1. Go to the Storage Account
  2. Select Access Control (IAM)
  3. Select Add, Role Assignment
  4. Assign Role to Storage Blob Data Contributor
  5. Search for the App Registration in the Select field
  6. Now add another Role and assign the App Registration to the Contributor Role

Now that we have everything set up in Azure, let’s get started in UiPath.

Using Azure Storage in UiPath

In order to use the resources created on Azure, the following needs to be done:

  1. Create a new process and give it a name
  2. Add the UiPath.Azure.Activities library
  3. Add the Azure Scope to the sequence
  4. Enter the following details gathered from above:
    1. ClientID
    2. ClientSecret
    3. SubscriptionID
    4. TenantID
  5. Add Get Storage Account Activity inside of the Azure Scope, with the following properties
    1. StorageAccountName
    2. ResourceGroupName
    3. Output StorageAccount to a variable
  6. Add Get Storage Account Key Activity below, with the following properties
    1. StorageAccount
    2. Output StorageAccountKey to a variable
  7. Add Get Blob Container with the following properties
    1. ContainerName
    2. StorageAccount
    3. StorageAccountKey
    4. Output BlobContainer to a variable
  8. Add Upload Blob from File with the following properties
    1. BlobContainer
    2. BlobName
    3. ContentType
    4. FilePath
    5. Type
    6. Output Blob to a variable

Now Jeremy, Jackson and the rest of the team can go about their business without worrying about stolen laptops and lost files.

The above solution is available on GitHub.

If you have any questions, issues or feedback, drop a comment below or reach out to jacqui.jm77@gmail.com

[Python]: Auto-Format Code According to PEP 8

Level of Difficulty: Beginner to Senior.

So you’ve written a Python Script, Kevin has just told your team that all code needs to be reviewed and part of the team’s Key Performance Indicators (KPIs) will be PEP 8 compliance.

Now there’s a slight sense of chaos and stress because “What is PEP 8?! Can one eat it? Is it something that we take to retro?” Mark has a minor panic attack because, not only does he not know what PEP 8 is, but how on earth will he enforce it? Joan is passive but has reservations because the “rather complex” script has already been written, how will she refactor it all in time for her 3PM appointment with Call of Duty?

Don’t worry Joan (and team)… We’ve got you covered! It’s rather important to know what PEP 8 is but as for enforcing it, there are libraries that will assist in doing it for you.

But wait… There’s more: Kevin might even use one of these libraries to check whether PEP 8 has been adhered to. Here’s how, with the black and/or autopep8 libraries.

What are the steps?

The steps that we will be following are:

  1. Find out what PEP 8 is
  2. Create the Python script
  3. Decide which library to use (black and/or autopep8) and install it
  4. Format your script

Deep Dive

Let’s dive deeper into the steps listed above.

What’s PEP 8?

PEP 8 is a set of standards and coding styling guides for Python code. Keeping code consistent and in accordance to a set of standards makes code easy to debug and maintain.

Let’s check it out in action.

Create The Python Script

Mark would like to experiment with this “Auto Black 8 and PEP” concept, so he creates a basic python script and saves it to C:\Python Scripts\PEPTest.py:

import sys, os
# try to print the current Python interpreter path and current working directory path
def print_paths():
	try:
		print(sys.executable)
		print(os.getcwd())
	except Exception as e:
		print(e)
print_paths()

Install Selected Library

Joan explains to Mark that he should definitely check this blog post out, then test the capability of black before testing autopep8 and deciding which one he would like to use. So off Mark goes to follow Joan’s advice.

Black

Mark installs the black library using the following command from command line:

py -m pip install black --user

Now that black has been installed, Mark runs the following command to see which commands he can use with black:

py -m black -h

The following catch Mark’s eye:

--check Don’t write the files back, just return the status.
--diff Don’t write the files back, just output a diff for each file on stdout.

Mark decides to check if there would be changes if he were to use black by using the --check argument:

py -m black PEPTest.py --check

Mark sees the following in the command line:

Now that he knows his code will be formatted, he wants to see how they would be formatted so he runs the following command:

py -m black PEPTest.py --diff

Mark sees the following in the command line:

Mark is experiencing excitement and tingles in places. Now he wants to see if this will actually change his code, so he runs the following code:

py -m black PEPTest.py

He opens up the file in Notepad and sees that his code has been formatted!
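For reference, the reformatted file looks roughly like this (note that black normalises indentation and blank lines but does not split the combined import – sorting imports is a job for a tool like isort):

```python
import sys, os


# try to print the current Python interpreter path and current working directory path
def print_paths():
    try:
        print(sys.executable)
        print(os.getcwd())
    except Exception as e:
        print(e)


print_paths()
```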

Now he wants to see if autopep8 does the same.

Autopep8

Mark replaces the formatted code with the following:

import sys,os
# try to print the current Python interpreter path and current working directory path
def print_paths():
    try:
        print(sys.executable)
        print(os.getcwd())
    except Exception as e:
        print(e)
print_paths()

Mark installs the autopep8 library using the following command from command line:

py -m pip install autopep8 --user

Now that autopep8 has been installed, Mark runs the following command to see which commands he can use with autopep8:

py -m autopep8 -h

The following catch Mark’s eye:

--aggressive enable non-whitespace changes; multiple -a result in more aggressive changes
--in-place make changes to files in place
--diff print the diff for the fixed source

Mark decides to see if there would be changes if he were to use autopep8 by using the --diff argument:

py -m autopep8 PEPTest.py --diff

Mark sees the following in the command line:

Now he wants to see if this will actually change his code and how “aggressive” it really is, so he runs the following command:

py -m autopep8 PEPTest.py --aggressive

For giggles, he adds --in-place to see what happens. He sees the following:

He opens up the file in Notepad and sees that his code has been formatted!

In the meantime, Joan had run through the same steps and, after more conferring around the coffee machine, they came to a consensus that the “aggressive” use of autopep8 seems to be more beneficial to the team and, as such, that’s what they would use to format their code before sending it for review.

Kevin decided he would stick to black for code review. Joke’s on him, right?

Did something go wrong? Drop your comment below or reach out to jacqui.jm77@gmail.com

[Python]: Setting up alerts on SQL Server Job Agent failures

Level of Difficulty: Beginner to Intermediate.

So you’ve found yourself in a semi-ops situation where you need to do daily checks on SQL Server Agent Jobs because the client, Steve, has concerns that his data is not updating.

Now you have to go through the manual process of opening SQL Server Management Studio then logging into the server, clicking on the SQL Server Agent and actually checking the status of the jobs every so often when, in reality, you could just get notified when something fails. Sounds like a good idea right? Here’s how to do it!

What are the steps?

The steps that we will be following are:

  1. Create the Python script
    1. Import Libraries
    2. Enter server details
    3. Select and configure a database engine
    4. Get jobs with their statuses
    5. Filter for failures that occurred today
  2. Set up alerting
  3. Schedule Python

Deep Dive

Let’s dive deeper into the steps listed above.

Create The Python Script

Create a Python script named JobMonitoring.py.

Ensure that the pandas and either pyodbc (ensure sqlalchemy is also installed if you are using pyodbc) or pymssql libraries are installed. You can install these libraries by executing the following commands:

Using pymssql

pip install pandas
pip install pymssql

Using pyodbc

pip install pandas
pip install pyodbc
pip install sqlalchemy

Import libraries

Use the code below to import the correct libraries if you are using pyodbc:

import pandas as pd
import pyodbc
import urllib
from datetime import datetime
from sqlalchemy import create_engine

Use the code below to import the correct libraries if you are using pymssql:

import pandas as pd
import pymssql
from datetime import datetime

Enter Server Details

Add the necessary information to the JobMonitoring.py script or read from a config file (recommended). Ensure that the database is set to read from msdb, unless there’s a stored procedure that lives elsewhere:

driver = '{SQL Server}'  # only needed for the pyodbc option
server = ''
user = ''
password = ''
database = 'msdb'

Configure Database Engine

If you are using pyodbc, configure the server using the following code:

params = urllib.parse.quote_plus(r'DRIVER={};SERVER={};DATABASE={};UID={};PWD={}'.format(driver, server, database, user, password))
conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
conn = create_engine(conn_str)

If you are using pymssql, configure the server using the following code:

conn = pymssql.connect(server, user, password, database)

If you are using Windows authentication to get into the SQL Server, remove user and password from either of the above methods.

Get Jobs with Statuses

For this portion of the code, we will be relying on one of the stored procedures built into msdb, sp_help_jobactivity. The following code needs to be added to read the results into a pandas dataframe:

df = pd.read_sql('EXEC msdb.dbo.sp_help_jobactivity;', conn)

Are you facing permission challenges?

Ask the admin of the database to create a stored procedure for you to run, and replace msdb.dbo.sp_help_jobactivity in the code above with SQLAgentJobHistory. The stored procedure should look as follows:

CREATE PROC SQLAgentJobHistory
AS
BEGIN

	EXEC msdb.dbo.sp_help_jobactivity; 

END;

Filter for Failures that Occurred Today

Back in JobMonitoring.py, add the following code to apply the filters:

# normalise last_executed_step_date to a plain date string
df['last_executed_step_date'] = pd.to_datetime(df['last_executed_step_date']).apply(lambda x: x.date())
df['last_executed_step_date'] = df['last_executed_step_date'].astype(str)

# create a dataframe that contains jobs executed today
df_today = df[df['last_executed_step_date'] == datetime.today().strftime("%Y-%m-%d")]

# create a dataframe that contains the jobs that have failed
df_failed = df_today[df_today['run_status'] == 0]
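To sanity-check the filtering logic without a live server, here is a sketch against a hypothetical two-row frame (the column names match what sp_help_jobactivity returns; the job names and statuses are made up):

```python
from datetime import datetime

import pandas as pd

# hypothetical sample mimicking sp_help_jobactivity output
df = pd.DataFrame({
    'job_name': ['NightlyLoad', 'HourlySync'],
    'last_executed_step_date': [datetime.today(), datetime.today()],
    'run_status': [1, 0],  # 1 = succeeded, 0 = failed
})

# same filtering steps as in JobMonitoring.py
df['last_executed_step_date'] = pd.to_datetime(df['last_executed_step_date']).apply(lambda x: x.date()).astype(str)
df_today = df[df['last_executed_step_date'] == datetime.today().strftime('%Y-%m-%d')]
df_failed = df_today[df_today['run_status'] == 0]

print(df_failed['job_name'].tolist())
```

Only the failed job (HourlySync in this made-up sample) should survive both filters.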

Set Up Alerting

Inside of JobMonitoring.py, create a method named alert:

def alert(job_name, last_run):
    
    # add method of alert, ie. MS Teams, Slack, API call, etc.
    print('{} failed at {}'.format(job_name, last_run))

Check out this repo for ways of incorporating alerting via certain instant messaging platforms.
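As one hedged example of filling in that method, here is a sketch that posts the alert to a Microsoft Teams incoming webhook (the webhook URL is a placeholder you would generate in Teams; the helper name is my own):

```python
import json
import urllib.request

# hypothetical Teams incoming-webhook URL - replace with one generated in Teams
WEBHOOK_URL = 'https://example.webhook.office.com/webhookb2/your-webhook-id'


def build_alert_payload(job_name, last_run):
    # Teams incoming webhooks accept a simple {"text": "..."} message
    return json.dumps({'text': '{} failed at {}'.format(job_name, last_run)})


def alert(job_name, last_run):
    # posts the alert message to the webhook
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=build_alert_payload(job_name, last_run).encode('utf-8'),
        headers={'Content-Type': 'application/json'},
    )
    urllib.request.urlopen(req)
```

Splitting the payload construction out of the send makes the formatting testable without firing a real notification.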

Add the following code to call the alert method that was just created:

if len(df_failed) > 0:
    
    for index, element in df_failed.iterrows():
        
        alert(element['job_name'], element['last_executed_step_date'])

Schedule Python Script

Let’s schedule this script to run on your local PC or server, using Task Scheduler.

  1. Hit Start and search for Task Scheduler
  2. Under action items, hit Create Task…
  3. Give your task a name and description
  4. Navigate to the Triggers tab and hit New…
  5. Customise the schedule as necessary and make sure that Enabled is checked
  6. Navigate to the Actions tab and click on New…
  7. Assign the Program/script field to your default Python interpreter; here’s how to find your interpreter
  8. Grab the path of the JobMonitoring.py script and paste it in Arguments
  9. Run through the settings in the rest of the tabs to see if there’s anything else you’d like to customise to your need.
  10. Select OK
  11. Now right click on the job in Task Scheduler and select Run

The above code is available in this Github repo.

Consider your job scheduled! Did you get stuck somewhere or is there anything you want to add? Drop a comment or send your question/ suggestion through to jacqui.jm77@gmail.com

[Python]: The 5WH of Getting Started with Python

Level of Difficulty: Beginner.

You’ve decided to expand your knowledge by learning the basics of Python, but you find yourself with the 5WH in mind, asking the following questions:

  • Who uses Python?
  • What would you use to code Python?
  • Where to use Python
  • When to use Python?
  • Why would you use Python?
  • How to get started with Python?

These are all valid questions and here are the answers.

Who uses Python?

Python is used by people who are looking to write basic scripts, all the way through to web developers who want to write a website, data engineers who need to wrangle data and even data scientists who build models. Some even use it as an introduction into learning how to program.

What would you use Python with?

Generally, the tools we use to code in a specific language are known as Integrated Development Environments (IDEs) and there are a few available for Python. My favorites include Jupyter Notebook, PyCharm Community and using Python through the command line.

Where, when and why to use Python?

Python can do almost anything that any other technology stack can do. Maybe not as elegantly, but the capability is definitely there. So my answer to this question is that you use Python whenever and wherever you need it and feel comfortable implementing it.

How do you determine whether or not you’re comfortable with Python, you might ask? Well, are you confident that your solution is modular enough to be expanded upon, that tests will pass, that continuous integration and continuous deployment will be possible, and that there is enough logging to debug easily if something goes wrong after deployment to production?

If the answer is no, that’s completely okay, that indicates room for growth.

How to get started with Python?

A few “cool-things-to-know” when learning Python are:

  1. You should probably pick an IDE and install it (along with Python)
  2. print('Hello World') or print("Hello World") – same same but different
  3. Basic Python Structure and Syntax
  4. Navigating through directories and working with files
  5. Working with pandas and data frames
  6. Visualising data using Pandas
  7. CSV to JSON manipulation
  8. Working with API Calls (SOAP and REST with or without authentication)
  9. Basic logging to file
  10. Working with SQL Databases

The basic understanding of the above ten points have shaped my Python journey and what a journey it is turning out to be.
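As a taste of a few of those points in one place, here is a minimal sketch (nothing beyond the standard library, and the sample dict is purely illustrative):

```python
import json
import os

# 2. Hello World - single or double quotes both work
print('Hello World')
print("Hello World")

# 4. navigating through directories
print(os.getcwd())

# 7. a hint of CSV-to-JSON style manipulation with a plain dict
row = {'column': 'value', 'language': 'Python'}
print(json.dumps(row))
```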

Any more points you feel should be included in the list above? Drop a comment below or pop it through to jacqui.jm77@gmail.com

[Python]: Using Python Through Command Line

Level of Difficulty: Beginner.

Sandra has asked you if you can give her a crash course on how to use Python through the command line. So you decide to show her how to run a few Python commands and maybe install a library. Here’s what you can do.

What are the steps?

The steps that we will be following are:

  1. Finding the appropriate Python interpreter command
  2. Running test commands
    1. Create a new Python script
    2. Run the Python script
  3. Install a library

Deep Dive

Let’s dive deeper into the steps listed above.

Finding The Appropriate Python Interpreter Command

Open up command line prompt by hitting the Windows key + r. This will open the run pop up.

In here, type cmd and hit enter. This will open your command line prompt.

Try one of the following commands until you find one that successfully opens Python in the command line for you:

  • python
  • python3
  • py
  • python.exe (if you’re executing from a virtual environment)

In my case, py is the command that instantiates Python in the command line, as seen above. Now run the exit() command to exit Python so that we can create the Python script.

Running Test Commands

Let’s create a basic print('Hello Universe') Python script named TestScript.py by using the following command:

echo print('Hello Universe') > TestScript.py

You should be able to see the newly created file if you run the following command:

dir *.py

Let’s check what happens if we try executing this Python script by running the following: (please note that if py did not work for you, use the one that works instead)

py TestScript.py

You should see the following:
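If you’re following along on macOS or Linux, where py usually isn’t available, the same session can be reproduced like this (using python3 where I used py; note that POSIX shells need the outer quotes on the echo):

```shell
# create the script (POSIX shells need the outer quotes)
echo "print('Hello Universe')" > TestScript.py
# run it with whichever interpreter command worked for you
python3 TestScript.py
```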

Let’s say Sandra asks how to edit the Python code in an IDE; run the following command:

TestScript.py

This should pop the code open in the default Python IDE: (mine is PyCharm)

Installing a Library

Now let’s test pip installing a library using the following command: (using the numpy library as an example)

py -m pip install numpy

You’d ideally like to see an indicator that the install was successful. Are you seeing the following?

If you are seeing the same as me, try the command below:

py -m pip install numpy --user

You should now be seeing the following:

After this, Sandra should know enough to get her through the basics of using Python through the command line.

Got stuck somewhere? Want to give some feedback? Please leave a comment below or reach out to jacqui.jm77@gmail.com

[Python]: DIY Usage Monitoring with Python

Level of Difficulty: Beginner to Intermediate.

This blog post is aimed at providing basic (automated) server (or computer) monitoring by using a Python script that can write to a SQL Server database or to a CSV.

It is important to note that there are various platforms that allow for advanced server monitoring along with a mature system, purchasable for a “mature” price. This post is an introduction to how basic usage monitoring may be done through the use of Python libraries and functionality.

What are the steps?

The steps that we will be following are:

  1. Create the Python script
    1. Configure data source connection (SQL Server or CSV storage)
    2. Import necessary libraries (based on data source)
    3. Tweak usage metrics to be monitored
    4. Tweak any alerting (if and where necessary)

Deep Dive

Let’s dive deeper into the steps listed above.

Create The Python Script – CSV Integration

Ensure that the pandas and psutil libraries are installed. You can install these libraries by executing the following commands:

pip install pandas
pip install psutil

Import libraries

Use the code below to import the correct libraries:

from datetime import datetime
from subprocess import call

import pandas as pd
import psutil
import numpy as np

Create Record Metrics Method – Written to CSV

The code that we will be using in this post will monitor the following metrics:

  • CPU percentage
  • Virtual memory
  • Drive usage
  • Last Booted Time

Other metrics that can be monitored using the psutil library include:

  • Disks
  • Network
  • Sensors
  • Other system info
  • Process management
  • Windows services

Examples of the above mentioned functionality can be found here.

Use and adapt the code below as you see fit, in order to create a method that records the metrics listed above:

def RecordMetrics():

    # gets the disk partitions in order to get a list of NTFS drives
    drps = psutil.disk_partitions()
    drives = [dp.device for dp in drps if dp.fstype == 'NTFS']
    
    # reads the existing csv
    df_csv = pd.read_csv('ServerMonitoring.csv')
    
    # assigns a new history ID for data integrity purposes
    history_id = max(df_csv['HistoryID']) + 1    

    # initialises the data frame with the appropriate values
    df = pd.DataFrame(
        {
            'CPU': [psutil.cpu_percent()],
            'VirtualMemory': [psutil.virtual_memory()[2]],
            'HistoryID' : [history_id],
            'LastBootedAt' : datetime.fromtimestamp(psutil.boot_time()).strftime('%Y-%m-%d %H:%M:%S')
            
        })

    # records the drive usage for each drive found
    for drive in drives:
        df['{}_DriveUsage'.format(drive.replace(":\\",""))] = psutil.disk_usage(drive)[3]

    # adds the current date and time stamp
    df['LoadDate'] = datetime.now()
    
    # appends the data to the existing csv file
    df = pd.concat([df, df_csv])
    df.to_csv('ServerMonitoring.csv', index=False)
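One thing to note: the method reads ServerMonitoring.csv before writing, so the file must already exist with a HistoryID column or the first run will fail. A minimal one-off bootstrap (assuming the same file name as above):

```python
import pandas as pd

# one-off bootstrap: create the file with the HistoryID column the method
# expects, so that max(df_csv['HistoryID']) has a value on the very first run
pd.DataFrame({'HistoryID': [0]}).to_csv('ServerMonitoring.csv', index=False)
```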

Create The Python Script – Database Integration

Ensure that the following are installed (urllib ships with Python, so it does not need to be installed separately):

  • pandas
  • sqlalchemy
  • psutil
  • pyodbc

You can install these libraries by executing the following commands:

pip install psutil
pip install pandas
pip install sqlalchemy
pip install pyodbc

Import libraries

Use the code below to import the correct libraries:

from sqlalchemy import create_engine, event
from datetime import datetime
from subprocess import call

import urllib
import pandas as pd
import psutil
import pyodbc
import numpy as np

Create Record Metrics Method – Written to Database

As with the CSV version, this method monitors CPU percentage, virtual memory, drive usage and last booted time; the difference is that the metrics are written to a SQL Server table instead of a CSV file.

Use and adapt the code below as you see fit to create a method that records these metrics:

def RecordMetrics():

    # database connection - assign to the correct values
    driver = '{SQL Server}'
    server = ''
    user = ''
    password = ''
    database = ''
    table = 'ServerMonitoring'

    params = urllib.parse.quote_plus(r'DRIVER={};SERVER={};DATABASE={};UID={};PWD={}'.format(driver, server, database, user, password))
    conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
    engine = create_engine(conn_str)

    # gets the disk partitions in order to get a list of NTFS drives
    drps = psutil.disk_partitions()
    drives = [dp.device for dp in drps if dp.fstype == 'NTFS']

    # initialises the data frame with the appropriate values
    df = pd.DataFrame(
        {
            'CPU': [psutil.cpu_percent()],
            'VirtualMemory': [psutil.virtual_memory()[2]],  # percentage used
            'LastBootedAt' : datetime.fromtimestamp(psutil.boot_time()).strftime('%Y-%m-%d %H:%M:%S')
        })

    # records the drive usage (percentage) for each drive found
    for drive in drives:
        df['{}_DriveUsage'.format(drive.replace(":\\",""))] = psutil.disk_usage(drive)[3]

    # adds the current date and time stamp
    df['LoadDate'] = datetime.now()
    
    # use if_exists="replace" on the first run if the table does not yet exist,
    # then add HistoryID (or ID) as the auto-incremented primary key
    df.to_sql(table, engine, if_exists="append", index=False)
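A quick aside on urllib.parse.quote_plus: the raw ODBC connection string contains characters ({, ;, = and spaces) that are not safe inside a SQLAlchemy URL, which is why it gets URL-encoded before being embedded in the mssql+pyodbc URL. A minimal sketch with made-up credentials:

```python
import urllib.parse

# the server, database and password values here are placeholders for illustration
raw = 'DRIVER={SQL Server};SERVER=myserver;DATABASE=mydb;UID=user;PWD=p@ss word'
params = urllib.parse.quote_plus(raw)

# semicolons become %3B, braces %7B, spaces + and @ becomes %40,
# so the result can safely be appended to 'mssql+pyodbc:///?odbc_connect='
print(params)
```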

Create CPU Alerting Method – Database Integration

The method below pulls the last five records from SQL, averages them and, based on the average, makes a decision. The tolerance may be set accordingly and the print may be replaced with instant message notifications (available here):

def CheckCPU():
    
    # engine is the SQLAlchemy engine created earlier - move the connection
    # setup to module level so that it can be shared between methods
    df_CPU = pd.read_sql('SELECT TOP(5) CPU FROM dbo.ServerMonitoring ORDER BY LoadDate DESC', engine)
    
    avg_CPU = df_CPU['CPU'].mean()
    
    # if the average CPU usage is above 90%
    if avg_CPU > 90:
        
        print('High CPU Usage Detected: {}%'.format(round(avg_CPU,2)))       
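If you want to sanity-check the alerting logic without a database connection, the query result can be stubbed with a DataFrame (the five sample readings below are made up):

```python
import pandas as pd

# stand-in for the SQL query result - five made-up CPU readings
df_CPU = pd.DataFrame({'CPU': [95.0, 92.5, 91.0, 96.0, 93.5]})
avg_CPU = df_CPU['CPU'].mean()

# same decision as CheckCPU: alert when the average exceeds 90%
if avg_CPU > 90:
    print('High CPU Usage Detected: {}%'.format(round(avg_CPU, 2)))  # 93.6%
```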

Create Boot Time Alerting Method – Database Integration

The method below pulls the last boot time record from SQL and makes a decision accordingly. The tolerances may be set as needed and the print may be replaced with instant message notifications (available here):

def CheckBootTime():
    
    # engine is the SQLAlchemy engine created earlier - move the connection
    # setup to module level so that it can be shared between methods
    df_boot = pd.read_sql('SELECT TOP(1) LastBootedAt FROM dbo.ServerMonitoring ORDER BY LoadDate DESC', engine)
    
    boot_time = pd.to_datetime(df_boot['LastBootedAt'].iloc[0])
    current_timestamp = datetime.now()
    
    up_time = current_timestamp - boot_time
    
    # timedelta exposes whole days plus the remaining seconds
    days = up_time.days
    hours, remainder = divmod(up_time.seconds, 3600)
    mins, secs = divmod(remainder, 60)

    # if the server was rebooted less than 10 hours ago
    if days == 0 and hours < 10:

        print('Server was rebooted {} hours and {} minutes ago'.format(hours, mins))
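The up-time arithmetic can be verified in isolation using timedelta's days and seconds attributes; the two timestamps below are made up so the result is reproducible:

```python
from datetime import datetime

# fixed, made-up timestamps: booted 06:00, checked 14:30 the same day
boot_time = datetime(2021, 3, 1, 6, 0, 0)
now = datetime(2021, 3, 1, 14, 30, 0)

up_time = now - boot_time

# .seconds holds the remainder after whole days, so divmod splits it cleanly
hours, remainder = divmod(up_time.seconds, 3600)
mins, secs = divmod(remainder, 60)

print(up_time.days, hours, mins)  # 0 8 30
```

This avoids splitting the timedelta's string form, which breaks when the up time is under one day (no " days, " segment to split on).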

That’s it folks! The GitHub repo is open for contribution so please do feel free to contribute or leave a comment below on how your solution turns out!

If you have any questions, issues or feedback, drop a comment below or reach out to jacqui.jm77@gmail.com

[Python]: Finding the Python Interpreter Path

Level of Difficulty: Beginner.

Are you having issues pip installing libraries to the right place? Could it be because you have more than one interpreter installed? Well… How many Python interpreters do you have installed?

The standard IDLE interpreter – the one that comes with the Python installation? The one that was installed when you were playing around with Visual Studio? No, no, no… It was VS Code! The one that was installed when you downloaded Anaconda?

Well… Now you no longer need to get bitten by that snake when untangling where you’re executing your Python code from.

Find the Python Interpreter Path (in 15 seconds or less)

Here’s what to do…

  1. Create a test script that will execute on the same interface you’re having issues with (PyCharm, Jupyter Notebook, IDLE, command prompt, Visual Studio, VS Code, etc.)
  2. Run the following code:
import sys

print(sys.executable)
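To go one step further, you can pip install through that exact interpreter, which guarantees the library lands where your code runs (the package name below is just an example):

```python
import sys
import subprocess

# run pip as a module of the interpreter executing this script, e.g.:
# subprocess.check_call([sys.executable, '-m', 'pip', 'install', 'pandas'])
print(sys.executable)
```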

Now you know which interpreter to use to pip install your libraries.

Still having issues? Drop a comment below or reach out personally – jacqui.jm77@gmail.com

[RPA]: Using The Microsoft Office 365 Activity For Sending Emails in UiPath

Level of Difficulty: Beginner.

This post will assist in using the Microsoft Office 365 Activity, specifically for sending emails with attachments, in UiPath.

What are the steps?

The steps that we will be following are:

  1. Follow the link on how to create an App Registration on Azure
  2. Get the Application ID and Tenant ID from Azure
  3. Install the UiPath.MicrosoftOffice365.Activities library
  4. Add a “Microsoft Office 365 Scope” Activity
    1. Configure the “Microsoft Office 365 Scope” activity
  5. Add a “Send Mail” Activity
    1. Configure the activity
  6. Add an “Assign” Activity
    1. Assign the files in the directory to a variable
  7. Add an “Assign” Activity
    1. Assign the converted attachments variable to IEnumerable<String>
  8. Add IEnumerable<String> variable to the “Send Mail” activity’s AttachmentCollections property

Deep Dive

Let’s dive deeper into the steps listed above.

Getting the Application ID From Azure

Once you have logged into Azure (https://portal.azure.com/), go into the Azure Active Directory:

Select App registrations and then search for your application. Click on your application when it appears.

Copy the Application ID and the Tenant ID and keep it for later use, when configuring the Microsoft Office 365 Scope:

Now that we have our info from Azure, let’s get started in UiPath.

Install the UiPath.MicrosoftOffice365.Activities library

Create a new UiPath Process, add a sequence to the Main.xaml file and click “Manage Packages” as seen below:

Ensure that focus is on “All Packages” when searching for UiPath.MicrosoftOffice365.Activities.

Select the UiPath.MicrosoftOffice365.Activities library and click on the Install button. After clicking the install button, click on Save and Accept when prompted:

Now that we have the library installed, we can wire our process up to use the MS O365 capability.

Add a “Microsoft Office 365 Scope” Activity

Search for “Microsoft” in the Activities panel and drag the “Microsoft Office 365 Scope” over into the Sequence sitting in the Main.xaml file:

Configure “Microsoft Office 365 Scope” Activity

Ensure that you configure the following properties correctly:

  • Application ID: Application ID obtained from Azure
  • Authentication: Interactive Token is the default and the easiest to use; however, the appropriate option depends on the security level applied to your user on the app registration
  • Services: For sending and reading mails, the Mail service should be selected
  • Tenant ID: Tenant ID obtained from Azure

Add a “Send Mail” Activity

In the activity pane, find the “Send Mail” activity (Integrations > Office 365 > Outlook) and drag it into the Microsoft Office 365 Scope:

Configure the Send Mail properties as follows:

  • Body: The text that should be included in the email body
  • Subject: The subject that should appear in the email subject
  • From: The email address that the mail should be sent from
  • To: The email address (or email addresses) that the mail should be sent to
  • AttachmentsCollection:
    • {} if there are no attachments
    • IEnumerable String variable that contains the paths of the attachments if there are attachments

The other properties may be configured and adjusted as necessary.

Adding Attachments From Folder

In a case where there are files in a folder that need to be attached, add an “Assign” activity. Create a variable (of type String[]) named attachments and use the assign activity to assign it to System.IO.Directory.GetFiles("\Attachments")

Since the Send Mail activity accepts the attachments collection as the IEnumerable<String> type, the attachments variable needs to be converted from an array of string to an IEnumerable of string. This can be done by grabbing another “Assign” activity and using it to assign a new variable, named attachmentEnumerable (of type IEnumerable<String>) to attachments.AsEnumerable().

Now add the attachmentEnumerable variable to the AttachmentsCollection property of the “Send Mail” activity.

The final process should look like this:

And that’s it folks! Give it a run and feel free to drop a comment about how it turns out.

The above solution is available on GitHub.

If you have any questions, issues or feedback, drop a comment below or reach out to jacqui.jm77@gmail.com