
[Power BI] Creating a Hamburger Menu for Filters in Power BI

Level of Difficulty: Beginner – Senior.


Often we create Power BI reports that require some sort of filtering mechanism, but filters take up a lot of real estate that could be used for other visuals instead. What if we could hide and show a filter pane using a “hamburger” menu mechanism?

We can, and here’s how.

What are the steps?

The steps to using a hamburger menu to hide and show a filter pane in Power BI are:

  1. Create the report layout and set up the filters
  2. Add two hamburger menu images
  3. Create a “hide” bookmark
  4. Create a “show” bookmark
  5. Add the bookmarks to the actions of each image
  6. Align the two images

Create the report layout and set up the filters

To create the filters panel, add a rectangle (Home > Insert > Shapes) before adding and aligning the filter visuals on top of the rectangle.

An example would be something like this:

Add two hamburger menu images

Next we would want to add two hamburger menu icons (preferably SVG or PNG images with transparent backgrounds) next to one another (Home > Insert > Image).

Create a hide bookmark

In order to create a bookmark, you would need to ensure that the bookmark pane and selection pane are visible. You can do this by navigating to the View tab and ensuring that the Bookmarks Pane and the Selection Pane are both checked. This should allow you to see the following:

To create a hide bookmark you would need to hide all of the filters, the rectangle block and one of the hamburger menu images using the Selection Pane. To hide a visual (or an element), you can either click on the eye icon next to the visual in the selection pane or you can click on the element on the report and select hide on the selection pane.

Once all necessary visuals have been hidden, you should see this:

Next, you are going to want to bookmark the view that you are currently looking at by selecting “Add” in the Bookmarks Pane. This will result in “Bookmark 1” being created:

You can then rename the bookmark to “Hide Filters” by double clicking on the bookmark or by selecting the three dots next to the bookmark name (on the right) and selecting Rename:

Create a show bookmark

To create a “Show” bookmark, we are going to ensure that all of our filters are visible again:

Next we are going to hide the hamburger image that was visible in the “Hide” bookmark:

Then select “Add” in the Bookmark Pane and rename the bookmark to “Show Filters“:

Add the bookmarks to the actions of each image

Now we need to add these bookmarks as actions to the correct hamburger image. Let’s start with the image that’s still visible. When we click that image, we are expecting our filters to hide, so we want to link this image to the “Hide Filters” bookmark.

To do this, click on the image, navigate to the Format Image pane, ensure that Action is On (if it is Off, click on the round dot and it will turn on), expand Action, change Type to “Bookmark” and select the “Hide Filters” bookmark:

If you hover over the visible image, there should be a tool tip that appears:

If you hold Ctrl and click on the image, it will apply the bookmark and the filters (along with their hamburger menu image) should disappear.

Now let’s repeat these steps for the other hamburger image (the one now visible) and assign the “Show Filters” bookmark to its Action:

Now you can lay the one hamburger image on top of the other so that they appear to be one image (you may need both images to be visible for this). Reorganise your report layout and play around with the other fancy things that you can do with bookmarks!

Just a note: It is possible to keep values of filters between bookmarks. It would require manipulation of bookmark properties. For this scenario, the data option would need to be deselected:

This report is available on GitHub, here.

If you got stuck or have feedback, please feel free to drop a comment below or reach out to jacqui.jm77@gmail.com for help!


[RPA]: Uploading Files to Azure Blob Storage in UiPath Processes

Level of Difficulty: Intermediate – Senior.

This post will assist in using Azure Blob Storage to store files used in UiPath processes.

A funny thing happened the other day… Jackson broke the news to the team that his laptop was stolen. After some panic, he let everyone know that at least the latest solution was checked into source control (and is available on the Orchestrator). That brought some relief, until Barbra asked if the lookup files are in source control too. With a worried look on his face, Jackson said, “No… Neither are any of the reference files that are referenced in the queue item.”

After much commotion, Jeremy (the boss man) mandated that a central storage repository be implemented so that this doesn’t happen in the future and so that local development isn’t relied upon. After some investigation, the team decided that Azure Storage would be a great way to go, as it fits in with the general architectural direction that the company is heading in. Here’s the question though: how?

What are the steps?

The steps that we will be following are:

  1. Create an App Registration
  2. Assign the correct API permissions
  3. Gather the necessary info
    1. Application ID
    2. Tenant ID
    3. Client Secret
  4. Create Azure Storage Resource
  5. Use the created resources in UiPath
    1. Add Azure Scope
    2. Add Get Storage Account Activity
    3. Add Get Storage Account Key Activity
    4. Add Get Blob Container
    5. Add Upload Blob

Deep Dive

Let’s dive deeper into the steps listed above.

Create an App Registration

Once you have logged into Azure (https://portal.azure.com/), the first thing you want to do is register an app under App Registrations. You can do that as follows:

  1. Go to Azure Active Directory
  2. Go to App Registrations
  3. Click Register an application
  4. Provide a name and select authentication
  5. Hit Register

Next, you want to add the correct API permissions.

Assigning The Correct API Permissions

You will need to do the following to assign the correct API permissions:

  1. Inside the newly created App Registration, select API permissions
  2. Select Add a permission
  3. Add Azure Storage, user_impersonation

Now that you’ve got that set up, you want to get the Application ID, Tenant ID and Client Secret.

Gather The Necessary Info

In order to use the app that has just been registered, you’ll need to collect certain info. The info can be accessed as follows:

  1. Inside the newly created App Registration, select Overview
  2. Copy the Application (client) ID and the Directory (tenant) ID as it will be needed later
  3. Click on Certificates & secrets and generate a new client secret
  4. Copy the secret and paste it somewhere for when you need it
  5. Click on Overview in the Azure Active Directory and copy the Tenant ID
  6. The Subscription ID will be visible in the Overview section of the Storage Account (once it has been created)

Now you should be ready to create your storage account.

Creating Azure Storage Resource

Hit Home in the top left hand corner and navigate to Resources.

  1. Hit new resource
  2. Search for Storage account
  3. Click Storage account – blob, file, table, queue
  4. Click Create
  5. Select Subscription and resource group (create one if necessary)
  6. Select storage account name, location, account kind (BlobStorage) and Replication (Read-access geo-redundant storage (RA-GRS))

Create Container

Great! Storage account created – now to create a container within the storage account:

  1. Open newly created Storage Account resource
  2. Click on Containers
  3. Click Add Container
  4. Give it a name and access level

Storage account created, container created, but how does the App Registration tie in? Well, we still need to add it, so let’s do that.

Give the App Registration access to the Storage Account

In order to give the Registered App access to the newly created Storage Account, do the following:

  1. Go to the Storage Account
  2. Select Access Control (IAM)
  3. Select Add, Role Assignment
  4. Assign Role to Storage Blob Data Contributor
  5. Search for the App Registration in the Select field
  6. Now add another Role and assign the App Registration to the Contributor Role

Now that we have everything set up in Azure, let’s get started in UiPath.

Using Azure Storage in UiPath

In order to use the resources created on Azure, the following needs to be done:

  1. Create a new process and give it a name
  2. Add the UiPath.Azure.Activities library
  3. Add the Azure Scope to the sequence
  4. Enter the following details gathered from above:
    1. ClientID
    2. ClientSecret
    3. SubscriptionID
    4. TenantID
  5. Add Get Storage Account Activity inside of the Azure Scope, with the following properties
    1. StorageAccountName
    2. ResourceGroupName
    3. Output StorageAccount to a variable
  6. Add Get Storage Account Key Activity below, with the following properties
    1. StorageAccount
    2. Output StorageAccountKey to a variable
  7. Add Get Blob Container with the following properties
    1. ContainerName
    2. StorageAccount
    3. StorageAccountKey
    4. Output BlobContainer to a variable
  8. Add Upload Blob from File with the following properties
    1. BlobContainer
    2. BlobName
    3. ContentType
    4. FilePath
    5. Type
    6. Output Blob to a variable

Now Jeremy, Jackson and the rest of the team can go about their business without worrying about stolen laptops and lost files.

The above solution is available on GitHub.

If you have any questions, issues or feedback, drop a comment below or reach out to jacqui.jm77@gmail.com

[Python] Using Python for SQL Schema and Table Validation and Logging Progress to Microsoft Teams

Level of Difficulty: Intermediate to Senior

So you’ve created a database, along with schemas and tables. You’ve managed to create an automation Python script that communicates with the database. However, you’ve reached a crossroads where you need to validate your SQL tables. Some of the validation might include ensuring that the tables exist and were created correctly before referencing them in the script. In the case of a failure, a message should be sent to a Microsoft Teams channel.

If that’s the case, this post might have just the code to help!

What are the steps?

The steps that the script will be following are:

  1. Create a webhook to your Teams channel
  2. If a schema was provided
    1. Get all the tables in the schema
    2. Validate that all the tables have a primary key
  3. If a schema was not provided but tables were
    1. Validate that each table exists
    2. Validate that each table has a primary key
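In code terms, the branching above amounts to a small dispatcher. This is only an illustrative sketch; the function and check names are mine, not part of the script below:

```python
def plan_validation(schemas, tables):
    """Return the checks to run, mirroring the branching above (illustrative)."""
    if schemas:
        # Schema route: confirm the schemas exist, find their tables,
        # then check those tables' primary keys
        return ['validate schemas', 'validate tables in schemas', 'validate primary keys']
    if tables:
        # Table route: confirm each table exists, then check primary keys
        return ['validate tables', 'validate primary keys']
    return []
```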

Create a Webhook to a Teams Channel

To create a webhook to your teams channel, click on the three dots next to the channel name and select connectors:

Search for “webhook” and select “Configure” on the Incoming Webhook connector:

Provide your webhook with a name and select “Create”:

Be sure to copy your webhook and then hit “Done”:

Create The Python Script That Does The Validation

Import Libraries

import pandas as pd

import pyodbc
from sqlalchemy import create_engine, event
import urllib

import pymsteams

Create a Connect to SQL Method

This reusable method creates a database engine and can be called each time a SQL connection is needed.

def connect_to_sql(server, user, password, database, auth):
    
    driver = '{SQL Server}'
    
    if user is None:
        
        # Windows (trusted) authentication: no UID/PWD required
        raw = r'DRIVER={};SERVER={};DATABASE={};'.format(driver, server, database)
        
    else:
        
        # SQL authentication
        raw = r'DRIVER={};SERVER={};DATABASE={};UID={};PWD={}'.format(driver, server, database, user, password)
    
    params = urllib.parse.quote_plus(raw)
    conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
    
    return create_engine(conn_str)
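For reference, here is the kind of connection string the method above builds for the Windows-authentication branch, using placeholder server and database values:

```python
import urllib.parse

# Placeholder values, purely for illustration
driver = '{SQL Server}'
server = 'localhost'
database = 'TestDB'

# Windows (trusted) authentication branch: no UID/PWD in the raw string
raw = r'DRIVER={};SERVER={};DATABASE={};'.format(driver, server, database)
params = urllib.parse.quote_plus(raw)
conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)

print(conn_str)
```

The SQL-authentication branch is identical except that UID and PWD are appended to the raw string before encoding.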

Create Send Teams Message Method

The method below enables the sending of a Teams message to a channel. In order for this to work, add the webhook configured above into the method below.

def send_teams_message(text):
    
    webhook = '<add webhook>'
    
    # You must create the connectorcard object with the Microsoft Webhook URL
    myTeamsMessage = pymsteams.connectorcard(webhook)

    # Add text to the message.
    myTeamsMessage.text(text)

    # send the message.
    myTeamsMessage.send()

Create Validate Primary Keys Method

This method ensures that primary keys are validated and returns the status as well as the text to be sent to MS Teams.

def ValidatePrimaryKeys(engine):
    
    query = """select schema_name(tab.schema_id) as [schema_name], tab.[name] as table_name
               from sys.tables tab
               left outer join sys.indexes pk
                   on tab.object_id = pk.object_id and pk.is_primary_key = 1
               where pk.object_id is null
               order by schema_name(tab.schema_id), tab.[name]"""

    df = pd.read_sql(query, engine)
    
    keysValid = True
    
    text = ""
    
    if len(df) > 0:

        text = 'Primary Key Validation Failed.\n\n'

        for index, element in df.iterrows():

            text += '{} does not have a primary key\n\n'.format(element['table_name'])
            
            keysValid = False

        # send_teams_message(text)
        
    return keysValid, text

Create Validate Tables Method

This method validates that the tables exist within the current database and returns the status as well as the text to be sent to MS Teams.

def ValidateTables(engine, tables):

    query = "select tab.[name] as table_name from sys.tables tab"

    df = pd.read_sql(query, engine)

    tables_split = []
    
    tables = tables.replace(' ','')

    if ';' in tables:

        tables_split = tables.split(';')

    elif ',' in tables:

        tables_split = tables.split(',')
    
    elif len(tables) != 0:
        
        tables_split = [tables]

    text = ""

    tablesValid = True

    for table in tables_split:

        if table not in df['table_name'].tolist() and (table != '' and table != None):

            text += 'Table not found in database: {}\n\n'.format(table)
            tablesValid = False

    if tablesValid:

        text = 'Table Validation Passed\n\n'

    else:

        text = 'Table Validation Failed\n\n' + text

    #send_teams_message(text)
    
    return tablesValid, text
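The separator handling that appears in both ValidateTables and ValidateFromSchema boils down to this small pure helper, pulled out here only for clarity (the function name is mine):

```python
def split_names(names):
    """Split a ';'- or ','-separated string into a clean list of names."""
    names = (names or '').replace(' ', '')
    if ';' in names:
        return names.split(';')
    if ',' in names:
        return names.split(',')
    # A single name with no separator still counts as one entry
    return [names] if names else []
```

Factoring it out like this would let both methods reuse the same splitting logic.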

Create Validate Schema Method

This method validates that the schema exists. Once the schema is validated, all tables in the schema are retrieved and their primary keys are validated.

def ValidateFromSchema(schemas, engine):
    
    text = ""
    tableText = ""
    
    schemas_split = []
    
    schemas = schemas.replace(' ','')

    if ';' in schemas:

        schemas_split = schemas.split(';')

    elif ',' in schemas:

        schemas_split = schemas.split(',')
    
    elif len(schemas) != 0:
        
        schemas_split = [schemas]
        
    isValid = True

    for schema in schemas_split:
        
        if (isValid):
        
            query = "SELECT schema_name FROM information_schema.schemata WHERE schema_name = '{}'".format(schema)

            df = pd.read_sql(query, engine)
            
            if (len(df) > 0):

                query = "select t.name as table_name from sys.tables t where schema_name(t.schema_id) = '{}'".format(schema)
                
                df = pd.read_sql(query, engine)
                
                tables = ",".join(list(df["table_name"]))
                
                validateTables = ValidateTables(engine, tables)
                
                isValid = validateTables[0]
                tableText += "{}\n\n".format(validateTables[1])
                
            else:
                
                isValid = False
                
                text += "Schema Validation Failed\n\n"
                text += "Schema not found in database: {}\n\n".format(schema)
                    
    if (isValid):
        
        text = "Schema Validation Passed\n\n"
        
        text = "{}\n\n{}".format(text, tableText)
        
    return isValid, text

Create Validate SQL Method (Equivalent to “main”)

This method acts as the main method: it calls the methods above in the correct order and sends the resulting summary to Teams.

def ValidateSQL(project, server, database, schemas, tables, auth, username, password):
    
    engine = connect_to_sql(server, username, password, database, auth)
    
    summaryText = None
    
    if (schemas != None):
        
        validateSchemas = ValidateFromSchema(schemas, engine)
        
        isValid = validateSchemas[0]
        text = validateSchemas[1]
    
    else:
        
        validateTables = ValidateTables(engine, tables)
    
        isValid = validateTables[0]
        text = validateTables[1]

    if isValid:
        
        summaryText = 'Primary Key Validation Passed\n\n'
        
        validatePrimaryKeys = ValidatePrimaryKeys(engine)
        
        isKeysValid = validatePrimaryKeys[0]
        
        pkText = validatePrimaryKeys[1]
        
        if isKeysValid:
            
            text += summaryText
            
        else:
            
            text += pkText
        
    else:
        
        summaryText = text
        
    text = "<strong><u>{}</u></strong>:\n\n{}".format(project, text)
                
    send_teams_message(text)
    
    return isValid

Calling the Validate SQL Method

The below is how you’d initialise the variables and use them to call the ValidateSQL method.

server = '<server>'
user = '<user>'
password = '<password>'
database = '<database>'
auth = 'SQL' # or Windows
schemas = '<schemas comma separated>'
tables = '<tables comma separated>'
project = '<project>'

ValidateSQL(project, server, database, schemas, tables, auth, user, password)

And that’s a wrap, Pandalorians! The GitHub repo containing the script is available here. Did this help? Did you get stuck anywhere? Do you have any comments or feedback? Please pop it down below or reach out – jacqui.jm77@gmail.com

[Python] Automating ASP.NET Core Web API Creation That Communicates With Your Database in 60 Seconds or Less

Level of Difficulty: Intermediate

So you’ve created a database and now you need to make it available for third party access without actually giving people a username and password to the database. You’re familiar with how C# works and how beneficial an ASP.NET Core Web API would be in developing this solution. You have a crazy deadline though and you start wishing that some of the ground work could be automated, allowing you to enter a few parameters and the rest is automagic!

Well here’s a solution that might help and it’s all written in Python!

What are the steps?

The steps that the script will be following are:

  1. Create the project
  2. Scaffold the database
  3. Delete the default template content
  4. Create the controllers

Deep Dive

Let’s dive deeper into the steps listed above.

Create The Project

The project can be created using the dotnet CLI thus being executable from a Python script. This does mean that the dotnet CLI would need to be installed. If it is not yet installed, you can install it from here.

The project creation step consists of a few steps:

  1. Create a reusable “Execute Command” method
  2. Create an ASP.NET Core Web API project
  3. Create a solution (.sln) file
  4. Add the Web API Project to the solution

Initialise Variables

The following libraries are to be imported and the variables initialised so that the script will work properly:

import os
from datetime import datetime

project_path = r"{}\Projects".format(os.getcwd()) # points to a folder named Projects created in the same directory as this script
project_name = "TestScriptWebAPI" # the name of the project (and of the folder that groups the project/solution files together)

start_directory = os.getcwd() # the directory of the script
start_time = datetime.now() # the time that the process started

Execute Command

This method allows for any command to be executed, provided that the command and the appropriate file path are provided.

def ExecuteCommand(command, file_path):
    
    # if the file path exists and is not empty, change to the directory else return False and "File path not valid"
    if file_path != None and os.path.exists(file_path):

        os.chdir(file_path)

    else:

        return False, "File path not valid" # False depicts that the command did not run successfully due to the invalid file path

    command_output = os.popen(command).read() # command is executed

    return True, command_output # True depicts that the command was executed successfully; however, it might not be the desired output, which is why the command_output is also returned
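A quick, self-contained sanity check of the helper (copied here so the snippet runs on its own, and using a harmless echo that works in both cmd.exe and POSIX shells):

```python
import os

# Self-contained copy of ExecuteCommand, for a quick sanity check
def ExecuteCommand(command, file_path):

    if file_path is not None and os.path.exists(file_path):
        os.chdir(file_path)
    else:
        return False, "File path not valid"

    command_output = os.popen(command).read()
    return True, command_output

# A valid path: the command runs and its output is captured
ok, out = ExecuteCommand('echo hello', os.getcwd())

# An invalid path: the helper bails out before running anything
bad = ExecuteCommand('echo hello', r'C:\does\not\exist')
```

Note that the helper changes the working directory as a side effect, so any relative paths used afterwards resolve against file_path.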

Create an ASP.NET Core Web API Project, Solution and Linkage

This method is used to create the project, the solution and the linkage between the two.

def CreateWebAPI(project_name, project_path):
    
    # create the solution path if it doesn't exist yet
    solution_path = r"{}\{}".format(project_path, project_name)
    
    if (os.path.exists(solution_path) == False):
        
        os.mkdir(solution_path)

    # this is the command that will be run in order to create a new project.  Customising the project before creation would occur here
    command = "dotnet.exe new webapi --name {} --force".format(project_name)

    result = ExecuteCommand(command, project_path)[0]
    
    if result:
        
        print("Project successfully created")
        
    else:
        
        print("Project not created")

    # this is the command that will be run in order to create a new sln.  Customising the project before creation would occur here
    command = "dotnet.exe new sln --name {} --force".format(project_name)

    result = ExecuteCommand(command, solution_path)[0]
    
    if result:
        
        print("Solution successfully created")
        
    else:
        
        print("Solution not created")
            
    # this is the command used to add the project to the solution
    csproj_path = r"{0}\{1}\{1}.csproj".format(project_path, project_name)
    command = 'dotnet.exe sln add "{}"'.format(csproj_path)
    solution_path = r"{}\{}".format(project_path, project_name)
    
    result = ExecuteCommand(command, solution_path)[0]
    
    if result:
        
        print("Project successfully added to solution")
        
    else:
        
        print("Project not added to solution")

Now that the project has been created and added to the solution, the appropriate libraries can be installed so that the database can be scaffolded.

Scaffold Database

The database would need to already be created and validation on the database would already need to have happened. The validation would include ensuring that all tables contain primary keys.

Scaffolding the database consists of the following steps:

  1. Install the libraries
  2. Compile the command and connection string
  3. Scaffold the database


Install The Libraries

This method installs the desired packages.

def InstallPackages(solution_path):
    
    # Install SqlServer
    command = "dotnet add package Microsoft.EntityFrameworkCore.SqlServer -v 3.1.1"
    command_result = ExecuteCommand(command, solution_path)
    
    # Install EF Design
    command = "dotnet add package Microsoft.EntityFrameworkCore.Design -v 3.1.1"
    command_result = ExecuteCommand(command, solution_path)
    
    # Install EF Tools
    command = "dotnet add package Microsoft.EntityFrameworkCore.Tools -v 3.1.1"
    command_result = ExecuteCommand(command, solution_path)

Compile The Command And Connection String

This method compiles the scaffold command and the connection string from the database connection details and tables.

def CompileCommandStrings(auth, server, database, tables, username, password):
    
    # the models are created in the Models folder
    # the context is created in the Data folder

    if auth == 'Windows':
        
        connection_string = '"ConnectionStrings": {\n\t"DefaultConnection": "Initial Catalog=' + database + ';Data Source=' + server + ';Trusted_Connection=true;"},\n"Logging"'
        command = 'dotnet.exe ef dbcontext scaffold "Initial Catalog={};Data Source={}; Trusted_Connection=true;" Microsoft.EntityFrameworkCore.SqlServer -o Models --context-dir Data'.format(database, server)

    if auth == 'SQL':

        connection_string = '"ConnectionStrings": {\n\t"DefaultConnection": "Password=' + password + ';Persist Security Info=True;User ID=' + username + ';Initial Catalog=' + database + ';Data Source=' + server + '"},\n"Logging"'
        command = 'dotnet.exe ef dbcontext scaffold "Password={};Persist Security Info=True;User ID={};Initial Catalog={};Data Source={}" Microsoft.EntityFrameworkCore.SqlServer -o Models --context-dir Data'.format(password, username, database, server)

    if tables != '':

        split_table = []

        tables = tables.replace(' ','')

        if ';' in tables:

            split_table = tables.split(';')

        elif ',' in tables:

            split_table = tables.split(',')

        else:

            # a single table with no separator
            split_table = [tables]

        for table in split_table:

            command += ' -t {}'.format(table)

    command += ' -f'
    
    return command, connection_string
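To make the result concrete, here is an illustrative reconstruction of the command the method produces for SQL authentication (all values and table names below are placeholders):

```python
# Illustrative reconstruction of the scaffold command for SQL auth;
# every value here is a placeholder.
password, username, database, server = 'p@ssw0rd', 'sa', 'TestDB', 'localhost'

command = ('dotnet.exe ef dbcontext scaffold '
           '"Password={};Persist Security Info=True;User ID={};'
           'Initial Catalog={};Data Source={}" '
           'Microsoft.EntityFrameworkCore.SqlServer '
           '-o Models --context-dir Data').format(password, username, database, server)

# One -t flag per table restricts scaffolding to those tables
for table in ['Orders', 'Customers']:   # hypothetical table names
    command += ' -t {}'.format(table)

# -f overwrites any previously scaffolded files
command += ' -f'

print(command)
```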

This method is used to scaffold the database into the project.

def ScaffoldDatabase(auth, server, database, tables, username, password, project_path):
    
    solution_path = r"{}\{}".format(project_path, project_name)
    
    InstallPackages(solution_path)
    
    result = CompileCommandStrings(auth, server, database, tables, username, password)
    command = result[0]
    connection_string = result[1]
        
    command_result = ExecuteCommand(command, solution_path)
    
    print(command_result)

Delete Template Content

When spinning up a project, a WeatherForecast.cs model is created along with a controller. These default classes need to be deleted so that they don’t interfere with the project.

This method deletes the template model and controller files that have been created with the project:

def DeleteTemplateFiles(project_path, project_name):

    # delete the template WeatherForecast.cs Model class
    template_model = r"{}\{}\WeatherForecast.cs".format(project_path, project_name)

    if os.path.isfile(template_model):
        os.remove(template_model)

    # delete the template WeatherForecastController.cs controller class
    template_controller = r"{}\{}\Controllers\WeatherForecastController.cs".format(project_path, project_name)

    if os.path.isfile(template_controller):
        os.remove(template_controller)

Create The Controllers

Creating the controllers requires the following steps:

  1. Get the context name
  2. Get the model
  3. Compile the controller from the template
  4. Create the controllers


Get The Context Name

This method gets the context class name.

def GetContext(file_path):
    
    # the file path should be the path to where the context class was created    
    f = open(file_path, "r")

    context_name = None
    
    for line in f.readlines():

        if '_context' in str(line) and 'private readonly' in str(line):
            
            line = line.replace('  ', '')
            context_name = line.split(' ')[2]
            
    f.close()

    return context_name


Get The Model

This method gets the model class and returns the class name, attribute as well as the namespace.

def GetModel(file_path):

    # file path should depict the path to the Model folder
    f = open(file_path, "r")

    class_name = None
    attributes = []
    namespace = None

    # for each line in the model class, extract the class name, the attributes and the namespace
    for line in f.readlines():

        if 'namespace' in str(line):

            namespace = line.split(' ')[1].split('.')[0]

        if 'public' in str(line):

            line = line.replace('  ', '')

            split_line = line.split(' ')

            if split_line[2] == 'class':

                class_name = split_line[3].replace('\n','')

            else:

                attributes.append(split_line[2])
                                                
    f.close()

    return class_name, attributes, namespace
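As a quick illustration of what the parser extracts, here is a self-contained copy of GetModel run against a minimal scaffolded model class (the Customer class below is made up for the demo and written to a temp file):

```python
import os
import tempfile

# Self-contained copy of GetModel, for the demo
def GetModel(file_path):

    f = open(file_path, "r")

    class_name = None
    attributes = []
    namespace = None

    for line in f.readlines():

        if 'namespace' in str(line):
            namespace = line.split(' ')[1].split('.')[0]

        if 'public' in str(line):
            line = line.replace('  ', '')
            split_line = line.split(' ')

            if split_line[2] == 'class':
                class_name = split_line[3].replace('\n', '')
            else:
                attributes.append(split_line[2])

    f.close()
    return class_name, attributes, namespace

# A minimal scaffolded model class, written to a temp file just for the demo
sample = (
    "namespace TestScriptWebAPI.Models\n"
    "{\n"
    "    public partial class Customer\n"
    "    {\n"
    "        public int CustomerId { get; set; }\n"
    "        public string Name { get; set; }\n"
    "    }\n"
    "}\n"
)

with tempfile.NamedTemporaryFile('w', suffix='.cs', delete=False) as f:
    f.write(sample)
    path = f.name

class_name, attributes, namespace = GetModel(path)
os.remove(path)
```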


Compile The Controller From The Template

This method compiles the controller class from the controller template.

def CompileControllerTemplate(model, attributes, file_path, template_file_path, namespace, context_name, id):

    file = open(file_path, "w+")

    template_file = open(template_file_path, "r+")

    template = template_file.read()

    template = template.replace('FillNamespace', namespace)
    template = template.replace('FillContext', context_name)
    template = template.replace('FillModel', model)
    template = template.replace('fillModel', model[0].lower() + model[1:])
    template = template.replace('FillId', id)

    file.writelines(template)

    file.close()
    template_file.close()


Create Controllers

This method creates the controllers.

def CreateControllers(solution_path):

    # initialise the model_file_path
    model_file_path = r"{}\Models".format(solution_path)
    
    # for each model found in the model file, get the model info and use it to create the controller
    for file in os.listdir(model_file_path):

        if file != 'ErrorViewModel.cs':

            file_path = r"{}\..\Data".format(model_file_path)

            for context in os.listdir(file_path):

                context_name = context.replace('.cs','')

            file_path = r'{}\{}'.format(model_file_path, file)

            model_result = GetModel(file_path)

            model = model_result[0]
            attributes = model_result[1]
            id = attributes[0]
            namespace = model_result[2]            
            path = r'{}\Controllers'.format(solution_path)
            file_path = r"{}\{}Controller.cs".format(path, model)
                        
            template_file_path = r"{}\APIControllerTemplate.cs".format(start_directory)

            if os.path.exists(path) == False:

                os.mkdir(path)

            CompileControllerTemplate(model, attributes, file_path, template_file_path, namespace, context_name, id)

Now you can hit F5 and test your API functionality. Did you get stuck anywhere or do you have feedback? Please feel free to drop it below.

Future Enhancements

Future enhancements of this code include adding SQL validation prior to the web API project creation as well as the implementation of a design pattern through the use of templates, similar to the way it was done in order to create controllers.

The project code is available here.

[Automation]: Using Power Automate with SQL Server (Primary keys that aren’t int)

Level of Difficulty: Beginner – Senior.

Are you in a situation where you’re using a SQL database and you’ve already designed, created and gone through all 700 iterations of the review? Primary keys already created of type uniqueidentifier with a default value of NEWID(). Then Fred comes along with his clever idea of adding some sort of automation workflow on top of the database – you know, like Power Automate?

So you go down a tangent of getting info together around Power Automate triggers and actions, specific to SQL Server. You come across the two triggers: When an item is created (V2) and when an item is modified (V2).

You decide on using SQL Server “When an item is created (V2)” as a trigger and Office 365 Outlook “Send an Email (V2)” as an action.

You go into Power Automate and you connect it up to your database, but when you hit test and “I’ll perform the trigger action”, you wait… And wait… And wait. Nothing’s happening?

You do more research and you find that most people use a primary key of type int and it works for them. Now you are running ahead with everything that needs to change in order to get this to work. You think of workarounds like creating views, using a different trigger, using a different automation type, heck… Even using a different workflow tool. But wait! That’s not necessary.

Here’s what to do:

  • Alter your table design in SQL Server Management Studio (SSMS)
  • Add another column, named something like “PowerAutomateID”
    • Make the type int
    • Untick “Allow Null”
    • Scroll down in the table properties and set Identity Specification to Yes
  • Save
  • Go test your flow

There’s no need to make the new column a Primary Key as the Power Automate trigger looks for the column that has Identity Specification set. I had a web form built on top of my database and none of the CRUD (create, read, update and delete) functionality had to be changed. It all continued to work properly, even the Power Automate process.

This is definitely something I wish I knew when I first went on this journey a few weeks ago.

If you hadn’t come across this in your research, you would most likely have loved to have landed on the Microsoft Docs that explain some of the limitations.

Still not working, or maybe you found a better way? Drop a comment below or send an email through to jacqui.jm77@gmail.com

[Et al.] Look After The BAs!

As a Developer in an Agile environment, I want to appreciate the Business Analysts (BA) so that we can continue working in a respectful and collaborative manner.

Sounds like a user story right? Well maybe it should be so that we can persistently practise this principle. Often we are faced with feuds between colleagues in the tech industry (most other industries have this too) due to a difference in opinion or even work ethics. Sometimes personalities clash, the pressure and tension might get a little too much, sometimes requirements are nearly impossible to meet in the unrealistic timelines presented to the stakeholders and sometimes those timelines were decided upon without consulting the technical team.

All of the above can be considered valid points and sometimes they raise valid concerns. Granted, it is a two-way street – it is important to maintain bidirectional respect between developers and BAs.

I used to be a developer who was quick to judge non-technical colleagues based on the textbook description of their job title and the reality of how they carry it out. I have recently started working with a team of BAs who have fundamentally changed the way I see non-technical colleagues (especially BAs) who set out to do the best job that they possibly can do, in-line with their job description!

That’s good and all but what is a BA really? A Business Analyst is an agent of change. They work together with organisations to help them improve their processes and systems. As developers and technical “resources” we tend to make their life a lot more complicated than that.

Those involved in Robotic Process Automation (RPA) projects, would understand why I would say that the bulk of the work lies in the process understanding phase which is generally conducted by the BA (and in some instances the Solution Architect).

I have found that developers often look down on BAs as many of them don’t possess the skill to write code. Now based on what I have experienced working with a team of highly talented BAs, I have come to realise that the greatest difference between BAs and the developers is that they communicate the same process in different ways (or languages).

Although a developer is able to translate a business requirement specification, process definition document or even a functional specification document into code, it is important to remember that it remains an in-depth depiction of the process outlined by the BA.

Okay cool… But how can I do my part as a developer? Good question! Here are a few things you should keep in mind when working with BAs:

  • We enable those who enable us. A good BA may come across as someone who tries to make your life easier when in actual fact, they are trying to simplify a process. Help your BA help you by providing assistance with some technical aspects of a situation if and where possible.
  • Appreciating and respecting anyone will generally result in improved efforts made on a project. It is a common reaction to respond more positively when feeling respected and appreciated.
  • Be patient. Many BAs may have an ambition to gather more technical understanding around certain things that you have insight over, however, they may not always understand your predicament. The best advice I have gotten over the last months is to ELI5 (Explain like I’m 5). It has helped me tremendously in translating the highly technical explanations in a way that removes frustration for myself (as the developer) and the BA(s). In the process of ELI5, enlighten and enable the BA to understand the situation for future reference. If explained well enough, once is enough and twice is a revision.
  • Learn. There is always an opportunity to learn new things. There’s nothing stopping the devs from understanding a BA’s process, or a BA from understanding the devs’ process.
  • Stick together. I cannot emphasise how important this point is. Dealing with the business (internally or externally) is strenuous enough, dealing with tension and pressure internally can often cause conflict between colleagues too. Sticking together against stakeholders and sharing the pressure as a team helps keep the team positive and often removes some of the additional pressure. Sometimes it helps just knowing that you’re not in something alone.

What happens when you have a personality clash with the BA? Get a new one? No, it doesn’t always work like that. Like I have mentioned above (and will mention further down too), this remains a two-way street. Sometimes it is a good idea to discuss an ideal working situation with the BA that you may be clashing with. It is important that all parties respect and participate in the above. The responsibility does not solely lie with the developer, however, a first step goes a long way!

Great… But how do you spark a great working relationship? Get your colleague a coffee, or tea, or hot chocolate… Or water. Or a bunch of sweets.

To the BAs and other non-techies / semi-techies out there, thank you for the work that you do! You are appreciated and hopefully your dev team will show their appreciation too. Feel free to reach out to them too. It’s a two-way street.

Let’s take this opportunity to be better together!

[Python]: Auto-Format Code According to PEP 8

Level of Difficulty: Beginner to Senior.

So you’ve written a Python Script, Kevin has just told your team that all code needs to be reviewed and part of the team’s Key Performance Indicators (KPIs) will be PEP 8 compliance.

Now there’s a slight sense of chaos and stress because “What is PEP 8?! Can one eat it? Is it something that we take to retro?” Mark has a minor panic attack because, not only does he not know what PEP 8 is, but how on earth will he enforce it? Joan is passive but has reservations because the “rather complex” script has already been written, how will she refactor it all in time for her 3PM appointment with Call of Duty?

Don’t worry Joan (and team)… We’ve got you covered! It’s rather important to know what PEP 8 is but as for enforcing it, there are libraries that will assist in doing it for you.

But wait… There’s more. Kevin might even use one of these libraries to check whether PEP 8 has been adhered to. Here’s how, with the black and/or autopep8 libraries.

What are the steps?

The steps that we will be following are:

  1. Find out what PEP 8 is
  2. Create the Python script
  3. Decide which library to use (black and/or autopep8) and install it
  4. Format your script

Deep Dive

Let’s dive deeper into the steps listed above.

What’s PEP 8?

PEP 8 is a set of standards and styling guidelines for Python code. Keeping code consistent and in accordance with a set of standards makes it easier to debug and maintain.

Let’s check it out in action.

Create The Python Script

Mark would like to experiment with this “Auto Black 8 and PEP” concept, so he creates a basic Python script and saves it to C:\Python Scripts\PEPTest.py:

import sys, os
# try to print the current Python interpreter path and current working directory path
def print_paths():
	try:
		print(sys.executable)
		print(os.getcwd())
	except Exception as e:
		print(e)
print_paths()

Install Selected Library

Joan explains to Mark that he should definitely check this blog post out, then test the capability of black before testing autopep8 and deciding which one he would like to use. So off Mark goes to follow Joan’s advice.

Black

Mark installs the black library using the following command from command line:

py -m pip install black --user

Now that black has been installed, Mark runs the following command to see which commands he can use with black:

py -m black -h

The following options catch Mark’s eye:

--check  Don’t write the files back, just return the status.
--diff  Don’t write the files back, just output a diff for each file on stdout.

Mark decides to check if there would be changes if he were to use black by using the --check argument:

py -m black PEPTest.py --check

Mark sees the following in the command line:

Now that he knows his code will be formatted, he wants to see how the changes would look, so he runs the following command:

py -m black PEPTest.py --diff

Mark sees the following in the command line:

Mark is experiencing excitement and tingles in places. Now he wants to see if this will actually change his code, so he runs the following code:

py -m black PEPTest.py

He opens up the file in Notepad and sees that his code has been formatted!
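For reference, the reformatted file should look roughly like this (a sketch of what Black’s default output would be; the exact blank-line placement is Black’s choice). Note that Black leaves the combined `import sys, os` line alone, as splitting imports is a job for tools like isort rather than Black:

```python
import sys, os

# try to print the current Python interpreter path and current working directory path
def print_paths():
    try:
        print(sys.executable)
        print(os.getcwd())
    except Exception as e:
        print(e)


print_paths()
```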

Now he wants to see if autopep8 does the same.

Autopep8

Mark replaces the formatted code with the following:

import sys,os
# try to print the current Python interpreter path and current working directory path
def print_paths():
    try:
        print(sys.executable)
        print(os.getcwd())
    except Exception as e:
        print(e)
print_paths()

Mark installs the autopep8 library using the following command from command line:

py -m pip install autopep8 --user

Now that autopep8 has been installed, Mark runs the following command to see which commands he can use with autopep8:

py -m autopep8 -h

The following options catch Mark’s eye:

--aggressive  enable non-whitespace changes; multiple -a result in more aggressive changes
--in-place  make changes to files in place
--diff  print the diff for the fixed source

Mark decides to see if there would be changes if he were to use autopep8 by using the --diff argument:

py -m autopep8 PEPTest.py --diff

Mark sees the following in the command line:

Now he wants to see if this will actually change his code and how “aggressive” it really is, so he runs the following command:

py -m autopep8 PEPTest.py --aggressive

For giggles, he adds --in-place to see what happens. He sees the following:

He opens up the file in Notepad and sees that his code has been formatted!

In the meantime, Joan had run through the same steps and, after more conferring around the coffee machine, they came to a consensus that the “aggressive” use of autopep8 seemed more beneficial to the team and, as such, that’s what they would use to format their code before sending it for review.

Kevin decided he would stick to black for code review. Joke’s on him, right?
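If Kevin wanted to automate his review step, a minimal sketch could shell out to black in --check mode and interpret the exit code, where 0 means the file is already formatted and 1 means black would make changes (this assumes black is installed in the interpreter’s environment):

```python
import subprocess
import sys


def passes_black_check(path):
    # run black in --check mode; exit code 0 means no changes would be made
    result = subprocess.run(
        [sys.executable, "-m", "black", "--check", path],
        capture_output=True,
    )
    return result.returncode == 0
```

For example, `passes_black_check("PEPTest.py")` would return False before Mark ran black and True afterwards.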

Wrong? Drop your comment below or reach out to jacqui.jm77@gmail.com

[Python]: Setting up alerts on SQL Server Job Agent failures

Level of Difficulty: Beginner to Intermediate.

So you’ve found yourself in a semi-ops situation where you need to do daily checks on SQL Server Agent Jobs because the client, Steve, has concerns that his data is not updating.

Now you have to go through the manual process of opening SQL Server Management Studio then logging into the server, clicking on the SQL Server Agent and actually checking the status of the jobs every so often when, in reality, you could just get notified when something fails. Sounds like a good idea right? Here’s how to do it!

What are the steps?

The steps that we will be following are:

  1. Create the Python script
    1. Import Libraries
    2. Enter server details
    3. Select and configure a database engine
    4. Get jobs with their statuses
    5. Filter for failures that occurred today
  2. Set up alerting
  3. Schedule Python

Deep Dive

Let’s dive deeper into the steps listed above.

Create The Python Script

Create a Python script named JobMonitoring.py.

Ensure that the pandas and either pyodbc (ensure sqlalchemy is also installed if you are using pyodbc) or pymssql libraries are installed. You can install these libraries by executing the following commands:

Using pymssql

pip install pandas
pip install pymssql

Using pyodbc

pip install pandas
pip install pyodbc
pip install sqlalchemy

Import libraries

Use the code below to import the correct libraries if you are using pyodbc:

import urllib.parse

import pandas as pd
import pyodbc
from sqlalchemy import create_engine

Use the code below to import the correct libraries if you are using pymssql:

import pandas as pd
import pymssql

Enter Server Details

Add the necessary information to the JobMonitoring.py script or read from a config file (recommended). Ensure that the database is set to read from msdb, unless there’s a stored procedure that lives elsewhere:

driver = '{SQL Server}'  # pyodbc only; use the ODBC driver installed on your machine
server = ''
user = ''
password = ''
database = 'msdb'
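To follow the config-file recommendation, here is a minimal sketch using the standard library’s configparser (the settings.ini file name and the section name are assumptions; use whatever fits your project):

```python
import configparser

# settings.ini is assumed to look something like:
# [sqlserver]
# server = myserver
# user = myuser
# password = secret
# database = msdb

config = configparser.ConfigParser()
config.read('settings.ini')  # read() silently skips files that do not exist

server = config.get('sqlserver', 'server', fallback='')
user = config.get('sqlserver', 'user', fallback='')
password = config.get('sqlserver', 'password', fallback='')
database = config.get('sqlserver', 'database', fallback='msdb')
```

Keeping credentials out of the script also makes it safer to commit JobMonitoring.py to source control.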

Configure Database Engine

If you are using pyodbc, configure the server using the following code:

params = urllib.parse.quote_plus(r'DRIVER={};SERVER={};DATABASE={};UID={};PWD={}'.format(driver, server, database, user, password))
conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
conn = create_engine(conn_str)

If you are using pymssql, configure the connection using the following code:

conn = pymssql.connect(server, user, password, database)

If you are using Windows authentication to get into the SQL Server, remove user and password from either of the above methods.

Get Jobs with Statuses

For this portion of the code, we will be relying on one of the stored procedures built into msdb, sp_help_jobactivity. The following code needs to be added to read the results into a pandas dataframe:

df = pd.read_sql('EXEC msdb.dbo.sp_help_jobactivity;', conn)

Are you facing permission challenges?

Ask the admin of the database to create a stored procedure for you to run and replace the msdb.dbo.sp_help_jobactivity in the code above with SQLAgentJobHistory. The stored procedure should look as follows:

CREATE PROC SQLAgentJobHistory
AS
BEGIN

	EXEC msdb.dbo.sp_help_jobactivity; 

END;

Filter for Failures that Occurred Today

Back in JobMonitoring.py, add the following code to apply the filters:

from datetime import datetime

# convert last_executed_step_date to a date string for comparison
df['last_executed_step_date'] = pd.to_datetime(df['last_executed_step_date']).apply(lambda x: x.date())
df['last_executed_step_date'] = df['last_executed_step_date'].astype(str)

# create a dataframe that contains jobs executed today
df_today = df[df['last_executed_step_date'] == datetime.today().strftime("%Y-%m-%d")]

# create a dataframe that contains the jobs that have failed
df_failed = df_today[df_today['run_status'] == 0]

Set Up Alerting

Inside of JobMonitoring.py, create a method named alert:

def alert(job_name, last_run):
    
    # add method of alert, ie. MS Teams, Slack, API call, etc.
    print('{} failed at {}'.format(job_name, last_run))

Check out this repo for ways of incorporating alerting via certain instant messaging platforms.
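As one hypothetical option, the alert could be sent as an email using the standard library’s smtplib. The addresses and the SMTP host below are placeholders, not details from this post:

```python
import smtplib
from email.message import EmailMessage


def build_alert_message(job_name, last_run, sender, recipient):
    # compose the same failure text as the print-based alert, as an email
    msg = EmailMessage()
    msg['Subject'] = 'SQL Agent job failed: {}'.format(job_name)
    msg['From'] = sender
    msg['To'] = recipient
    msg.set_content('{} failed at {}'.format(job_name, last_run))
    return msg


def alert(job_name, last_run):
    msg = build_alert_message(
        job_name, last_run, 'alerts@example.com', 'me@example.com'
    )
    # hypothetical SMTP host; replace with your mail server
    with smtplib.SMTP('smtp.example.com') as smtp:
        smtp.send_message(msg)
```

Splitting the message building from the sending keeps the formatting testable without a mail server.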

Add the following code to call the alert method that was just created:

if len(df_failed) > 0:
    
    for index, element in df_failed.iterrows():
        
        alert(element['job_name'], element['last_executed_step_date'])

Schedule Python Script

Let’s schedule this script to run on your local PC or server, using Task Scheduler.

  1. Hit Start and search for Task Scheduler
  2. Under action items, hit Create Task…
  3. Give your task a name and description
  4. Navigate to the Triggers tab and hit New…
  5. Customise the schedule as necessary and make sure that Enabled is checked
  6. Navigate to the Actions tab and click on New…
  7. Assign the Program/Script to your default interpreter, here’s how to find your interpreter
  8. Grab the path of the JobMonitoring.py script and paste it in Arguments
  9. Run through the settings in the rest of the tabs to see if there’s anything else you’d like to customise to your need.
  10. Select OK
  11. Now right click on the job in Task Scheduler and select Run

The above code is available in this GitHub repo.

Consider your job scheduled! Did you get stuck somewhere or is there anything you want to add? Drop a comment or send your question/suggestion through to jacqui.jm77@gmail.com

[Python]: The 5WH of Getting Started with Python

Level of Difficulty: Beginner.

You’ve decided to expand your knowledge by learning the basics of Python, but you find yourself with the 5WH in mind, asking the following questions:

  • Who uses Python?
  • What would you use to code Python?
  • Where to use Python?
  • When to use Python?
  • Why would you use Python?
  • How to get started with Python?

These are all valid questions and here are the answers.

Who uses Python?

Python is used by people who are looking to write basic scripts, all the way through to web developers who want to write a website, data engineers who need to wrangle data and even data scientists who build models. Some even use it as an introduction into learning how to program.

What would you use Python with?

Generally, the tools we use to code in a specific language are known as Integrated Development Environments (IDEs) and there are a few available for Python. My favorites include Jupyter Notebook, PyCharm Community and using Python through the command line.

Where, when and why to use Python?

Python can do almost anything that any other technology stack can do. Maybe not as elegantly, but the capability is definitely there. So my answer to this question is that you use Python whenever and wherever you need it and feel comfortable implementing it.

How do you determine whether or not you’re comfortable with Python, you might ask? Well, are you confident that your solution is modular enough to be expanded upon, that tests will pass, that continuous integration and continuous deployment will be possible, and that there is enough logging to debug easily if something goes wrong after deployment to production?

If the answer is no, that’s completely okay, that indicates room for growth.

How to get started with Python?

A few “cool-things-to-know” when learning Python are:

  1. You should probably pick an IDE and install it (along with Python)
  2. print('Hello World') or print("Hello World") – same same but different
  3. Basic Python Structure and Syntax
  4. Navigating through directories and working with files
  5. Working with pandas and data frames
  6. Visualising data using Pandas
  7. CSV to JSON manipulation
  8. Working with API Calls (SOAP and REST with or without authentication)
  9. Basic logging to file
  10. Working with SQL Databases
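As a small taste of point 7, here is a sketch of CSV-to-JSON manipulation using only the standard library (the sample data is made up):

```python
import csv
import io
import json

# a small in-memory CSV, standing in for a real file on disk
csv_text = 'name,age\nJoan,34\nMark,29\n'

# DictReader turns each row into a dict keyed by the header row
rows = list(csv.DictReader(io.StringIO(csv_text)))

print(json.dumps(rows, indent=2))
```

The same pattern works on a real file by swapping io.StringIO for open('data.csv').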

A basic understanding of the above ten points has shaped my Python journey, and what a journey it is turning out to be.

Any more points you feel should be included in the list above? Drop a comment below or pop it through to jacqui.jm77@gmail.com

[Python]: Using Python Through Command Line

Level of Difficulty: Beginner.

Sandra has asked you if you can give her a crash course on how to use Python through the command line. So you decide to show her how to run a few Python commands and maybe install a library. Here’s what you can do.

What are the steps?

The steps that we will be following are:

  1. Finding the appropriate Python interpreter command
  2. Running test commands
    1. Create a new Python script
    2. Run the Python script
  3. Install a library

Deep Dive

Let’s dive deeper into the steps listed above.

Finding The Appropriate Python Interpreter Command

Open up command line prompt by hitting the Windows key + r. This will open the run pop up.

In here, type cmd and hit enter. This will open your command line prompt.

Try one of the following commands until you find one that successfully opens Python in the command line for you:

  • python
  • python3
  • py
  • python.exe (if you’re executing from a virtual environment)

In my case, py is the command that instantiates Python in the command line, as seen above. Now run the exit() command to exit Python so that we can create the Python script.

Running Test Commands

Let’s create a basic print(“Hello Universe”) Python script named TestScript.py by using the following command:

echo print('Hello Universe') > TestScript.py

You should be able to see the newly created file if you run the following command:

dir *.py

Let’s see what happens if we try executing this Python script by running the following: (please note that if py did not work for you, use the command that did instead)

py TestScript.py

You should see the following:

Let’s say Sandra asks how to edit the Python code in an IDE; run the following command:

TestScript.py

This should pop the code open in the default Python IDE: (mine is PyCharm)

Installing a Library

Now let’s test pip installing a library using the following command: (using the numpy library as an example)

py -m pip install numpy

You’d ideally like to see an indicator that the install was successful. Are you seeing the following?

If you are seeing the same as me, try the command below:

py -m pip install numpy --user

You should now be seeing the following:

After this, Sandra should know enough to get her through the basics of using Python through the command line.

Got stuck somewhere? Want to give some feedback? Please leave a comment below or reach out to jacqui.jm77@gmail.com