Many automation solutions make use of mail services, as email is an important channel of communication between humans and the automation process. Google Mail (Gmail) offers many benefits, one of which is cost – for that reason, this post provides a step-by-step guide to monitoring emails coming into your Gmail inbox, including the ability to monitor specific labels.
It is also important to note that there are tools and platforms that make it much easier to perform these actions, but as developers we know that life cannot always be “easy”. This post aims to empower the “not easy” solutions.
What are the steps?
The steps that we will be following are:
Ensure that your Gmail account is configured correctly
Import the libraries
Gather variable values
Email address
Password
Label
Search Criteria
Define methods
Get Body
Search
Get Emails
Authenticate
Authenticate
Extract mails
Extract relevant information from results
Deep Dive
Let’s dive deeper into the steps listed above.
Ensure that your Gmail account is configured correctly
My first few attempts at this left me pulling my hair out with an “Invalid Credentials” error. Upon much Googling and further investigation, I found that it is caused by a Google Account setting. This is quite easily fixable.
In order to interact with my account, I had to allow less secure apps (you can access that setting here):
Allowing less secure apps to communicate with my Gmail account
If you are still experiencing problems, here is a more extensive list of troubleshooting tips.
Import the libraries
Now let’s move over to Python and start scripting!
First, let’s import the libraries that we’ll need:
import imaplib, email
Gather variable values
In order to access the mails from the Gmail account we will need to know the answers to the following questions:
Which Google account (or email address) do we want to monitor?
What is the password to the above account?
What label do we want to monitor?
What is the search criteria?
The best way to find out is to ask and luckily we can do that through code:
imap_url = 'imap.gmail.com' # This is static. We don't ask the questions we know the answer to
user = input("Please enter your email address: ")
password = input("Please enter your password: ")
label = input("Please enter the label that you'd like to search: ") # Example: Inbox or Social
search_criteria = input("Please enter the subject search criteria: ")
Define Methods
It becomes easier to break some of the reusable elements up into methods (or functions) so that larger implementations of this solution can scale easily. Stephen Covey teaches us that starting with the end in mind is one of the habits of highly effective people – some might even call it proactive design thinking. The point is that it is good to think ahead when developing a solution.
Enough rambling, here are the functions:
# Retrieves email content
def get_body(message):
    if message.is_multipart():
        return get_body(message.get_payload(0))
    else:
        return message.get_payload(None, True)
# Search mailbox (or label) for a key value pair
def search(key, value, con):
    result, data = con.search(None, key, '"{}"'.format(value))
    return data
# Retrieve the list of emails that meet the search criteria
def get_emails(result_bytes):
    messages = [] # all the email data is pushed into this list
    for num in result_bytes[0].split():
        typ, data = con.fetch(num, '(RFC822)')
        messages.append(data)
    return messages
# Authenticate
def authenticate(imap_url, user, password, label):
    # SSL connection with Gmail
    con = imaplib.IMAP4_SSL(imap_url)
    # Authenticate the user through login
    con.login(user, password)
    # Search for mails under this label
    con.select(label)
    # Return the connection so that the other methods can reuse it
    return con
Authenticate
Before we can extract mails, we first need to call the authenticate method that we just created, pass in the answers to the questions we asked further up and hold on to the connection it returns:
con = authenticate(imap_url, user, password, label)
Extract mails
Next, we need to call the search and get_emails methods to extract the mails:
# Retrieve mails
search_results = search('Subject', search_criteria, con)
messages = get_emails(search_results)
# Uncomment to view the mail results
#print(messages)
Extract relevant information from results
Now, let’s work through the results and extract the subject using string manipulation. Feel free to add a “print(subject)” statement underneath the assignment of “subject” for debugging purposes:
for message in messages[::-1]:
    for content in message:
        if type(content) is tuple:
            # Encoding set as utf-8
            decoded_content = str(content[1], 'utf-8')
            data = str(decoded_content)
            # Extracting the subject from the mail content
            subject = data.split('Subject: ')[1].split('Mime-Version')[0]
            # Handling errors related to UnicodeEncodeError
            try:
                indexstart = data.find("ltr")
                data2 = data[indexstart + 5: len(data)]
                indexend = data2.find("</div>")
                # Uncomment to see what the content looks like
                #print(data2[0: indexend])
            except UnicodeEncodeError as e:
                pass
Did this work for you? Feel free to drop a comment below or reach out to me through email, jacqui.jm77@gmail.com.
The full Python script is available on Github here.
Does your organisation use Automation Hub to capture and consolidate automation ideas and collateral? Have you ever wanted to interact with the data you have in Automation Hub in an automated manner? Well UiPath makes that easier now with the Automation Hub API – no more front-end automations needed to access your data.
Here’s how it works. If you’re looking for specific queries that aren’t covered in this blog post, check out this Postman collection.
What are the steps?
The steps that we will be following are:
Identifying the URL
Compiling and populating the bearer token
Adding the necessary headers
x-ah-openapi-auth = openapi-token
x-ah-openapi-app-key (only if you’ve assigned the app key when generating the token)
Grabbing the results
Generating the code for reuse in an automation
Deep Dive
Let’s dive deeper into the steps listed above.
Identify the URL
In order to identify which URL (or API Call) would achieve the task at hand, take a look at the different API Calls available here.
For the sake of this post, we are going to list all automations in our instance. Thus, we will be using the following API call:
First things first. Make sure Open API is set up on your tenant. You can do that as follows:
Navigate to the Admin Console within Automation Hub
Hover over Platform Setup
Select Open API
Next, you’re going to want to hit Generate Token and enter the necessary details:
You’re also going to want to take note of your tenant ID because that’s what we are going to use to compile the Bearer Token:
The Bearer Token is made up by concatenating your tenant ID and your generated token separated by “/”.
An example is: 46b6c342-3ab4-11e9-9c19-37a5980a67e8/ce91aa04-fc61-49e9-bec5-cb237efb4bda where:
46b6c342-3ab4-11e9-9c19-37a5980a67e8 is the unique Tenant ID
ce91aa04-fc61-49e9-bec5-cb237efb4bda is the specific token generated for the user account
Add the Bearer Token under the “Authorization” tab in Postman with the Type set to “Bearer Token“:
Add Headers
If you set up an app key as an extra security measure when you generated the token, you’ll need to add “x-ah-openapi-app-key” to your headers and assign it the value you created.
Regardless of whether you populated the app key or not, you’ll need to add “x-ah-openapi-auth” to your headers and assign it to “openapi-token“:
Grab The Response
Once you’ve hit send in Postman, you make a sacrifice to the universe and wait for your results which should look something liiiiikkkkeeee this:
Generate Code For Automation
Now that you’re getting results, you’ll most likely want to get this automated. Well then let’s get the code (for whatever language we want) from Postman.
Click on code in the top(ish) right-hand corner in Postman:
Select your language then copy and paste the code:
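If Python is your language of choice, a minimal sketch of what that generated code roughly looks like is shown below, using the requests library. The endpoint URL is a placeholder (grab the exact “list all automations” call from the Postman collection linked above), and the tenant ID, token and app key values are examples you would replace with your own:
import requests

# Placeholder endpoint - copy the exact "list all automations" URL from the Postman collection
url = "<automation-hub-open-api-url>"

tenant_id = "<your-tenant-id>"
generated_token = "<your-generated-token>"

headers = {
    # Bearer token = tenant ID and generated token separated by "/"
    "Authorization": "Bearer {}/{}".format(tenant_id, generated_token),
    "x-ah-openapi-auth": "openapi-token",
    # Only required if you assigned an app key when generating the token
    "x-ah-openapi-app-key": "<your-app-key>",
}

response = requests.get(url, headers=headers)
print(response.status_code)
print(response.json())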
Did you get stuck anywhere? Was this helpful?
If you have any questions, issues or feedback, drop a comment below or reach out to jacqui.jm77@gmail.com
Often we create Power BI reports that require some sort of filtering mechanism but filters take up a lot of real-estate that could be used for other visuals instead. What if we could hide and show a filter pane using a “hamburger” menu mechanism?
We can and here’s how.
What are the steps?
The steps to using a hamburger menu to hide and show a filter pane in Power BI are:
Create the report layout and set up the filters
Add two hamburger menu images
Create a “hide” bookmark
Create a “show” bookmark
Add the bookmarks to the actions of each image
Align the two images
Create the report layout and set up the filters
To create the filters panel, add a rectangle (Home > Insert > Shapes) before adding and aligning the filter visuals on top of the rectangle.
An example would be something like this:
Add two hamburger menu images
Next we would want to add two hamburger menu icons (preferably SVG or PNG images with transparent backgrounds) next to one another (Home > Insert > Image).
Create a hide bookmark
In order to create a bookmark, you would need to ensure that the bookmark pane and selection pane are visible. You can do this by navigating to the View tab and ensuring that the Bookmarks Pane and the Selection Pane are both checked. This should allow you to see the following:
To create a hide bookmark you would need to hide all of the filters, the rectangle block and one of the hamburger menu images using the Selection Pane. To hide a visual (or an element), you can either click on the eye icon next to the visual in the selection pane or you can click on the element on the report and select hide on the selection pane.
Once all necessary visuals have been hidden, you should see this:
Next, you are going to want to bookmark the view that you are currently looking at by selecting “Add” in the Bookmarks Pane. This will result in “Bookmark 1” being created:
You can then rename the bookmark to “Hide Filters” by double clicking on the bookmark or by selecting the three dots next to the bookmark name (on the right) and selecting Rename:
Create a show bookmark
To create a “Show” bookmark, we are going to ensure that all of our filters are visible again:
Next we are going to hide the hamburger image that was visible in the “Hide” bookmark:
Then select “Add” in the Bookmark Pane and rename the bookmark to “Show Filters“:
Adding the bookmarks to the actions of each image
Now we need to add these bookmarks as actions to the correct hamburger image. Let’s start with the image that’s still visible. When we click that image, we are expecting our filters to Hide. So we want to link this image to the “Hide Filters” bookmark.
To do this, click on the image, navigate to the Format Image pane, ensure that Action is On (if it is Off, click on the round dot and it will turn on), expand Action, change Type to “Bookmark” and select the “Hide Filters” bookmark:
If you hover over the visible image, there should be a tool tip that appears:
If you hold Ctrl and click on the image, it will apply the bookmark and the filters (with its hamburger menu image) should disappear.
Now let’s repeat these steps for the image that is currently visible and assign the “Show Filters” bookmark to its Action:
Now you can lay the one hamburger image on top of the other so that they appear to be one image (you may need both images to be visible for this). Reorganise your report layout and play around with the other fancy things that you can do with bookmarks!
Just a note: It is possible to keep values of filters between bookmarks. It would require manipulation of bookmark properties. For this scenario, the data option would need to be deselected:
This post will assist in using Azure Blob Storage to store files used in UiPath processes.
A funny thing happened the other day… Jackson broke the news to the team that his laptop was stolen. After some panic, he let everyone know that at least the latest solution was checked into source control (and is available on the Orchestrator). That brought some relief, until Barbra asked if the lookup files were in source control too. With a worried look on his face, Jackson said, “No… Neither are any of the reference files that are referenced in the queue item.”
After much excitement, Jeremy (the boss man) mandated that a central storage repository be implemented so that this doesn’t happen in the future and so that local development isn’t relied upon. After some investigation, the team decided that Azure Storage would be a great way to go as it fits in with the general architectural direction that the company is heading in. Here’s the question though, how?
What are the steps?
The steps that we will be following are:
Create an App Registration
Assign the correct API permissions
Gather the necessary info
Application ID
Tenant ID
Client Secret
Create Azure Storage Resource
Use the created resources in UiPath
Add Azure Scope
Add Get Storage Account Activity
Add Get Storage Account Key Activity
Add Get Blob Container
Add Upload Blob
Deep Dive
Let’s dive deeper into the steps listed above.
Create an App Registration
Once you have logged into Azure (https://portal.azure.com/), the first thing you want to do is create an App in App Registration and you can do that by doing the following:
Go to Azure Active Directory
Go to App Registrations
Click Register an application
Provide a name and select authentication
Hit Register
Next, you want to add the correct API permissions.
Assigning The Correct API Permissions
You will need to do the following to assign the correct API permissions:
Inside the newly created App Registration, select API permissions
Select Add a permission
Add Azure Storage, user_impersonation
Now that you’ve got that set up, you want to get the Application ID, Tenant ID and Client Secret.
Gather The Necessary Info
In order to use the app that has just been registered, you’ll need to collect certain info. The info can be accessed as follows:
Inside the newly created App Registration, select Overview
Copy the Application (client) ID and the Directory (tenant) ID as it will be needed later
Click on Client secrets, generate a secret
Copy the secret and paste it somewhere for when you need it
Click on Overview in the Azure Active Directory and copy the Tenant ID
The Subscription ID will be visible in the Overview section of the Storage Account (once it has been created)
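The UiPath activities listed further up consume these values directly. Purely as an illustration of how the same app registration details are typically used in code outside of UiPath, here is a minimal Python sketch using the Azure SDK – the storage account, container and file names are placeholder assumptions, and the storage account itself only exists once the next step is done:
from azure.identity import ClientSecretCredential
from azure.storage.blob import BlobServiceClient

# Values gathered from the App Registration (placeholders)
tenant_id = "<directory-tenant-id>"
client_id = "<application-client-id>"
client_secret = "<client-secret>"

# Authenticate using the registered app
credential = ClientSecretCredential(tenant_id, client_id, client_secret)

# Connect to the storage account (created in the next step) and grab a blob container
blob_service = BlobServiceClient("https://<storage-account>.blob.core.windows.net", credential=credential)
container = blob_service.get_container_client("<container-name>")

# Upload a file - the equivalent of the Upload Blob activity
with open("LookupFile.xlsx", "rb") as data:
    container.upload_blob(name="LookupFile.xlsx", data=data, overwrite=True)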
Now you should be ready to create your storage account.
Creating Azure Storage Resource
Hit Home in the top left hand corner and navigate to Resources.
Hit new resource
Search for Storage account
Click Storage account – blob, file, table, queue
Click Create
Select Subscription and resource group (create one if necessary)
This post will assist in understanding the practical use of design patterns and how they can be applied to the development of processes within UiPath.
What’re Design Patterns?
In a nutshell, a design pattern can be described as a template (or format) that is used to structure code in order to solve a problem in a systematic way. There are multiple different types of design patterns, which include a variety of behavioral, structural and creational patterns. These are explained in depth here.
Design patterns are often used to assist in implementing OOP (Object-Oriented Programming) in order to make a solution more modular and reusable. Design patterns can also be used to maintain consistency across solutions so that multiple developers have a clear understanding of how the solution flows from one step to the next.
It is very important to keep design principles (like SOLID) in mind when deciding when and which design pattern should be used within a solution. SOLID is explained nicely here.
Why’s This Important?
When working with Robotic Process Automation (RPA) solutions, it is important to remember that the solution is assumed to have a short lifespan that could be anything from 6 months to 24 months long before facing re-evaluation. Once the value of an RPA solution is re-evaluated and certain aspects need to be enhanced or upgraded, it is easier to do so if the design pattern caters for foreseeable changes by design.
For example, consider an existing RPA process that reads Excel files containing stock take information and stores the data in an on-premise SQL Server. The solution is re-evaluated after 12 months and the audit finds that new scanners will be used for stock taking rather than manually filling in Excel spreadsheets. The automation needs to be enhanced to accept sensor data from the scanner instead. If the logic that connects to the Excel files is referenced in multiple sequences/workflows – in other words, the logic lives all over the place – the enhancement could create some havoc. By adding a data access/acquisition layer to the solution, only one sequence/workflow connects to the Excel file and only that sequence/workflow would be affected. All data access to the file is then also modularised.
What’d Such an Implementation Look Like?
Let’s identify some of the possible groupings of functionality:
Initialisation/ Preparation
Data Access/ Acquisition
Domain/ Business Logic
Modular/ Helper Components
Tests
Output
The above-mentioned groups can be further split, as seen below:
RPA Design Pattern
As seen above, the idea behind the proposed design pattern is to ensure the appropriate grouping of functionality. Applying a design pattern like this could be as simple as implementing a similar file structure that groups the relevant workflows.
An example of this is illustrated in a UiPath process, as seen below:
RPA Design Pattern in UiPath
A template of the above UiPath process is available for download here.
Design patterns can be modified as needed. The most important aspect of planning the implementation of a design pattern is to determine how the functionality should be split, modularised and reused (potentially even between projects) while remaining considerate of future operational support.
If you have any questions, issues or feedback, drop a comment below or reach out to jacqui.jm77@gmail.com
Along with the emergence of the Fourth Industrial Revolution (4IR) we are seeing an increased usage of rebranded terms, that now contain the word “Intelligent”.
We find ourselves surrounded by concepts like Intelligent Automation, Intelligent Digital Workplaces, Intelligent Customer Experience, Intelligent Infrastructure and many more. How much longer until we are faced with the new and improved concept of Intelligent Development as a rebrand of DevOps?
What is “Intelligent Development”?
Intelligent Development, in the context of software development, would refer to concepts, designs and approaches that developers could explore in order to create solutions that would continue to develop, maintain and even enhance themselves.
A large portion of the concepts that may form the basis of Intelligent Development would most likely be adopted from modern DevOps elements. By applying concepts like critical and design thinking to the planning of a solution, the solution may be developed in a manner that is sophisticated enough to cater for foreseeable use cases (like further repetitive or predictable development, maintenance and enhancements). Thus, it can be argued that if architects and developers work together to develop intelligently, Intelligent Development would be no more than a “fancy” term to encapsulate a concept that is much more powerful than the catch-phrase depicts.
An example of a high-level implementation of Intelligent Development is depicted in the figure below:
Developer Pre-work and Intelligent Development
In the figure above, the existing concept of using source control repositories and pipelines is expanded upon to accommodate for consistent (or standard) elements of a solution. With this approach, the developer is also allowed the opportunity to review the changes before accepting the pull request and kicking off the publishing process – this ensures that the appropriate level of visibility and responsibility are still maintained. Let’s consider an example.
There is a need for a Web API that can be expanded to onboard new data sets (with consistent data source types, i.e. different SQL Server databases with SQL Server as the consistent data source type) over time. Each time that a new data set is introduced, the solution would need to be enhanced to accommodate the change. What if the solution developed itself? But how?
Let’s consider the steps that would be involved in making the above possible:
A source control repository would need to be created and a standard branching strategy would need to be defined
The Web API project would need to be created and stored in the repository
The business logic would need to be defined
A pipeline would need to be created to control environmental specific (dev, qa, prod) publishes
Documentation would need to be created and stored somewhere that is centrally accessible
A reusable mechanism to clone a repository, push code and create pull requests would need to exist
A lead developer would need to take responsibility for reviewing changes and approving the pull request so that the pipeline may be kicked off
Most of the above is feasible for automation, considering a few reusable elements that are mentioned in my previous posts as well as my GitHub repo.
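As a rough sketch of just one of those reusable elements – a mechanism to clone a repository, push generated code and open a pull request – something along these lines could work. The repository URL, branch names and the pull request step are placeholder assumptions, not a specific provider's API:
import subprocess

def clone_push_and_open_pr(repo_url, work_dir, branch, commit_message):
    # Clone the repository and create a working branch
    subprocess.run(["git", "clone", repo_url, work_dir], check=True)
    subprocess.run(["git", "checkout", "-b", branch], cwd=work_dir, check=True)
    # ... the generated/standard solution elements would be written into work_dir here ...
    # Stage, commit and push the generated changes
    subprocess.run(["git", "add", "."], cwd=work_dir, check=True)
    subprocess.run(["git", "commit", "-m", commit_message], cwd=work_dir, check=True)
    subprocess.run(["git", "push", "-u", "origin", branch], cwd=work_dir, check=True)
    # Creating the pull request is platform specific (Azure DevOps, GitHub, etc.) and would be a call
    # to that platform's REST API so that the lead developer can review and approve it - placeholder only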
What About The Moral Standing of Self-Developing Solutions?
In the space of Automation, the focus on repurposing certain job roles is significant. Automating repetitive tasks has many benefits but the fear of job loss definitely is not one of them.
As developers, we should understand that not even we are safe from the capability that technology has to offer. We, too, need to repurpose our skills towards more meaningful, complex and cognitive tasks. In retrospect, being able to concentrate on research and development instead of supporting an existing solution is the dream that many developers would want to work towards. The ability to pivot towards practically implementing self-developing solutions is what separates good developers from great developers.
Personally, I believe that the benefits of developing intelligently outweigh the shortfalls by far. It has become a new way of approaching problems for me. By remaining conscious of Intelligent Development while architecting and developing solutions, I am enabling myself to work on more innovative projects and supporting less legacy solutions. My hope is that as a collective of developers we strive towards the same goal of continuously adding intelligence to our development process – Let’s take this opportunity to be better together!
So you’ve created a database, along with schemas and tables. You’ve managed to create an automation Python script that communicates with the database, however, you’ve reached a crossroad where you need to validate your SQL tables. Some of the validation might include ensuring that the tables exist and were created correctly before referencing them in the script. In the case of a failure, a message should be sent to a Microsoft Teams channel.
If that’s the case, this post might have just the code to help!
What are the steps?
The steps that the script will be following are:
Create a webhook to your Teams channel
If a schema was provided
Get all the tables in the schema
Validate that all the tables have a primary key
If a schema was not provided but tables were
Validate that each table exists
Validate that each table has a primary key
Create a Webhook to a Teams Channel
To create a webhook to your Teams channel, click on the three dots next to the channel name and select Connectors:
Search for “webhook” and select “Configure” on the Incoming Webhook connector:
Provide your webhook with a name and select “Create”:
Be sure to copy your webhook and then hit “Done”:
Create The Python Script That Does The Validation
Import Libraries
import pandas as pd
import pyodbc
from sqlalchemy import create_engine, event
import urllib.parse
import pymsteams
Create a Connect to SQL Method
The purpose of this method is to provide a reusable connection that can be used each time SQL Server needs to be accessed.
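The original connection code isn’t shown here, so below is a minimal sketch of what connect_to_sql (as called later by ValidateSQL) might look like using the imported sqlalchemy, pyodbc and urllib libraries. The ODBC driver name and the way SQL vs. Windows authentication is handled are assumptions:
def connect_to_sql(server, username, password, database, auth):
    # Build a pyodbc connection string for SQL or Windows authentication (assumed behaviour)
    if auth == 'SQL':
        connection_string = 'DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={};DATABASE={};UID={};PWD={}'.format(server, database, username, password)
    else:
        connection_string = 'DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={};DATABASE={};Trusted_Connection=yes'.format(server, database)
    # Wrap the ODBC connection string in a SQLAlchemy engine
    params = urllib.parse.quote_plus(connection_string)
    engine = create_engine('mssql+pyodbc:///?odbc_connect={}'.format(params))
    return engine
Create a Send Teams Message Method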
The method below enables the sending of a Teams message to a channel. In order to configure this to work, add the webhook configured above to the method below.
def send_teams_message(text):
    webhook = '<add webhook>'
    # You must create the connectorcard object with the Microsoft Webhook URL
    myTeamsMessage = pymsteams.connectorcard(webhook)
    # Add text to the message.
    myTeamsMessage.text(text)
    # send the message.
    myTeamsMessage.send()
Create Validate Primary Keys Method
This method validates that every table has a primary key and returns the status as well as the text to be sent to MS Teams.
def ValidatePrimaryKeys(engine):
    query = "select schema_name(tab.schema_id) as [schema_name], tab.[name] as table_name from sys.tables tab left outer join sys.indexes pk on tab.object_id = pk.object_id and pk.is_primary_key = 1 where pk.object_id is null order by schema_name(tab.schema_id), tab.[name]"
    df = pd.read_sql(query, engine)
    keysValid = True
    text = ""
    if len(df) > 0:
        text = 'Primary Key Validation Failed.\n\n'
        for index, element in df.iterrows():
            text += '{} does not have a primary key\n\n'.format(element['table_name'])
            keysValid = False
    # send_teams_message(text)
    return keysValid, text
Create Validate Tables Method
This method validates that the tables exist within the current database and returns the status as well as the text to be sent to MS Teams.
def ValidateTables(engine, tables):
    query = "select tab.[name] as table_name from sys.tables tab"
    df = pd.read_sql(query, engine)
    tables_split = []
    tables = tables.replace(' ', '')
    if ';' in tables:
        tables_split = tables.split(';')
    elif ',' in tables:
        tables_split = tables.split(',')
    elif len(tables) != 0:
        tables_split = [tables]
    text = ""
    tablesValid = True
    for table in tables_split:
        if table not in df['table_name'].tolist() and (table != '' and table != None):
            text += 'Table not found in database: {}\n\n'.format(table)
            tablesValid = False
    if tablesValid:
        text = 'Table Validation Passed\n\n'
    else:
        text = 'Table Validation Failed\n\n' + text
    # send_teams_message(text)
    return tablesValid, text
Create Validate Schema Method
This method validates that the schema exists. Once the schema is validated, all tables in the schema are retrieved and their primary keys are validated.
def ValidateFromSchema(schemas, engine):
    text = ""
    tableText = ""
    schemas_split = []
    schemas = schemas.replace(' ', '')
    if ';' in schemas:
        schemas_split = schemas.split(';')
    elif ',' in schemas:
        schemas_split = schemas.split(',')
    elif len(schemas) != 0:
        schemas_split = [schemas]
    isValid = True
    for schema in schemas_split:
        if (isValid):
            query = "SELECT schema_name FROM information_schema.schemata WHERE schema_name = '{}'".format(schema)
            df = pd.read_sql(query, engine)
            if (len(df) > 0):
                query = "select t.name as table_name from sys.tables t where schema_name(t.schema_id) = '{}'".format(schema)
                df = pd.read_sql(query, engine)
                tables = ",".join(list(df["table_name"]))
                validateTables = ValidateTables(engine, tables)
                isValid = validateTables[0]
                tableText += "{}\n\n".format(validateTables[1])
            else:
                isValid = False
                text += "Schema Validation Failed\n\n"
                text += "Schema not found in database: {}\n\n".format(schema)
    if (isValid):
        text = "Schema Validation Passed\n\n"
    text = "{}\n\n{}".format(text, tableText)
    return isValid, text
Create Validate SQL Method (Equivalent to “main”)
This method acts as the main method: it encapsulates all the other methods, executes them in the correct order and sends the final summary to Teams.
def ValidateSQL(project, server, database, schemas, tables, auth, username, password):
    engine = connect_to_sql(server, username, password, database, auth)
    summaryText = None
    if (schemas != None):
        validateSchemas = ValidateFromSchema(schemas, engine)
        isValid = validateSchemas[0]
        text = validateSchemas[1]
    else:
        validateTables = ValidateTables(engine, tables)
        isValid = validateTables[0]
        text = validateTables[1]
    if isValid:
        summaryText = 'Primary Key Validation Passed\n\n'
        validatePrimaryKeys = ValidatePrimaryKeys(engine)
        isKeysValid = validatePrimaryKeys[0]
        pkText = validatePrimaryKeys[1]
        if isKeysValid:
            text += summaryText
        else:
            text += pkText
    else:
        summaryText = text
    text = "<strong><u>{}</u></strong>:\n\n{}".format(project, text)
    send_teams_message(text)
    return isValid
Calling the Validate SQL Method
The below is how you’d initialise the variables and use them to call the ValidateSQL method.
server = '<server>'
user = '<user>'
password = '<password>'
database = '<database>'
auth = 'SQL' # or Windows
schemas = '<schemas comma separated>'
tables = '<tables comma separated>'
project = '<project>'
ValidateSQL(project, server, database, schemas, tables, auth, user, password)
And that’s a wrap Pandalorians! The Github repo containing the script is available here. Did this help? Did you get stuck anywhere? Do you have any comments or feedback? Please pop it down below or reach out – jacqui.jm77@gmail.com
So you’ve created a database and now you need to make it available for third party access without actually giving people a username and password to the database. You’re familiar with how C# works and how beneficial an ASP.NET Core Web API would be in developing this solution. You have a crazy deadline though and you start wishing that some of the ground work could be automated, allowing you to enter a few parameters and the rest is automagic!
Well here’s a solution that might help and it’s all written in Python!
What are the steps?
The steps that the script will be following are:
Create the project
Scaffold the database
Delete the default template content
Create the controllers
Deep Dive
Let’s dive deeper into the steps listed above.
Create The Project
The project can be created using the dotnet CLI thus being executable from a Python script. This does mean that the dotnet CLI would need to be installed. If it is not yet installed, you can install it from here.
The project creation step consists of a few steps:
Create a reusable “Execute Command” method
Create an ASP.NET Core Web API project
Create a solution (.sln) file
Add the Web API Project to the solution
Initialise Variables
The following libraries are to be imported and the variables initialised so that the script works properly:
import os
from datetime import datetime
project_path = r"{}\Projects".format(os.getcwd()) # points to a folder named Projects created in the same directory as this script
project_name = "TestScriptWebAPI" # the name of the project (and file that groups the project/solution files together)
start_directory = os.getcwd() # the directory of the script
start_time = datetime.now() # the time that the process started
Execute Command
This method allows for any command to be executed, provided that the command and the appropriate file path are provided.
def ExecuteCommand(command, file_path):
    # if the file path exists and is not empty, change to the directory else return False and "File path not valid"
    if file_path != None and os.path.exists(file_path):
        os.chdir(file_path)
    else:
        return False, "File path not valid" # False depicts that the command did not run successfully due to the invalid file path
    command_output = os.popen(command).read() # command is executed
    return True, command_output # True depicts that the command was executed successfully; however, it might not be the desired output, which is why the command_output is also returned
Create an ASP.NET Core Web API Project, Solution and Linkage
This method is used to create the project, the solution and the linkage between the two.
def CreateWebAPI(project_name, project_path):
    # create the solution path if it doesn't exist yet
    solution_path = r"{}\{}".format(project_path, project_name)
    if (os.path.exists(solution_path) == False):
        os.mkdir(solution_path)
    # this is the command that will be run in order to create a new project. Customising the project before creation would occur here
    command = "dotnet.exe new webapi --name {} --force".format(project_name)
    result = ExecuteCommand(command, project_path)[0]
    if result:
        print("Project successfully created")
    else:
        print("Project not created")
    # this is the command that will be run in order to create a new sln. Customising the project before creation would occur here
    command = "dotnet.exe new sln --name {} --force".format(project_name)
    result = ExecuteCommand(command, solution_path)[0]
    if result:
        print("Solution successfully created")
    else:
        print("Solution not created")
    # this is the command used to add the project to the solution
    csproj_path = r"{0}\{1}\{1}.csproj".format(project_path, project_name)
    command = 'dotnet.exe sln add "{}"'.format(csproj_path)
    solution_path = r"{}\{}".format(project_path, project_name)
    result = ExecuteCommand(command, solution_path)[0]
    if result:
        print("Project successfully added to solution")
    else:
        print("Project not added to solution")
Now that the project has been created and added to the solution, the appropriate libraries can be installed so that the database can be scaffolded.
Scaffold Database
The database would need to already be created and validation on the database would already need to have happened. The validation would include ensuring that all tables contain primary keys.
Scaffolding the database consists of the following steps:
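The original snippet for this step isn’t shown here, but scaffolding typically means adding the Entity Framework Core packages to the project and running the dotnet ef dbcontext scaffold command (this assumes the dotnet-ef tool is installed). Below is a minimal, hedged sketch of how that could be wired into this script – the ScaffoldDatabase method name, package choices and output folders are assumptions:
def ScaffoldDatabase(project_path, project_name, connection_string):
    solution_path = r"{}\{}".format(project_path, project_name)
    # add the packages required for scaffolding a SQL Server database (versions omitted)
    for package in ["Microsoft.EntityFrameworkCore.SqlServer", "Microsoft.EntityFrameworkCore.Design"]:
        ExecuteCommand("dotnet.exe add package {}".format(package), solution_path)
    # scaffold the models and DbContext from the existing database into the Models and Data folders
    command = 'dotnet.exe ef dbcontext scaffold "{}" Microsoft.EntityFrameworkCore.SqlServer --output-dir Models --context-dir Data --force'.format(connection_string)
    result = ExecuteCommand(command, solution_path)[0]
    if result:
        print("Database successfully scaffolded")
    else:
        print("Database not scaffolded")
Delete The Default Template Content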
When spinning up a project, a WeatherForecast.cs model is created along with a controller. These default classes need to be deleted so that they don’t interfere with the project.
This method deletes the template model and controller files that have been created with the project:
def DeleteTemplateFiles(project_path, project_name):
    # delete the template WeatherForecast.cs Model class
    template_model = r"{}\{}\WeatherForecast.cs".format(project_path, project_name)
    if os.path.isfile(template_model):
        os.remove(template_model)
    # delete the template WeatherForecastController.cs Controller class
    template_controller = r"{}\{}\Controllers\WeatherForecastController.cs".format(project_path, project_name)
    if os.path.isfile(template_controller):
        os.remove(template_controller)
Create The Controllers
Creating the controllers requires the following steps:
Get the context name
Get the model
Compile the controller from the template
Create the controllers
Get The Context Name
This method gets the context class name.
def GetContext(file_path):
    # the file path should be the path to where the context class was created
    f = open(file_path, "r")
    context_name = None
    for line in f.readlines():
        if '_context' in str(line) and 'private readonly' in str(line):
            # strip the indentation so that the context class name is the third word on the line
            line = line.strip()
            context_name = line.split(' ')[2]
    return context_name
Get The Model
This method gets the model class and returns the class name, attribute as well as the namespace.
def GetModel(file_path):
    # file path should depict the path to the model class inside the Models folder
    f = open(file_path, "r")
    class_name = None
    attributes = []
    namespace = None
    # for each line in the model class, extract the class name, the attributes and the namespace
    for line in f.readlines():
        if 'namespace' in str(line):
            namespace = line.split(' ')[1].split('.')[0]
        if 'public' in str(line):
            # strip the indentation so that the words can be split on single spaces
            line = line.strip()
            split_line = line.split(' ')
            if len(split_line) > 2:
                if split_line[2] == 'class':
                    class_name = split_line[3].replace('\n', '')
                else:
                    attributes.append(split_line[2])
    return class_name, attributes, namespace
Compile The Controller From The Template
This method compiles the controller class from the controller template.
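The controller template class itself and the original compile code aren’t included here, so the sketch below only illustrates the general idea: read the template, substitute placeholders with the values extracted from the model, and write the result out as the new controller class. The placeholder tokens (for example {{Model}}, {{Namespace}}, {{Context}} and {{Id}}) are assumptions rather than the actual template’s tokens:
def CompileControllerTemplate(model, attributes, file_path, template_file_path, namespace, context_name, id):
    # read the controller template class
    with open(template_file_path, "r") as template_file:
        template = template_file.read()
    # substitute the hypothetical placeholders with the values extracted from the model
    template = template.replace("{{Model}}", model)
    template = template.replace("{{Namespace}}", namespace)
    template = template.replace("{{Context}}", context_name)
    template = template.replace("{{Id}}", id)
    # write the compiled controller class into the Controllers folder
    with open(file_path, "w") as controller_file:
        controller_file.write(template)
With the template compiler in place, the method below loops through the scaffolded models and compiles a controller for each one: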
def CreateControllers(solution_path):
    # initialise the model_file_path
    model_file_path = r"{}\Models".format(solution_path)
    # for each model found in the model file, get the model info and use it to create the controller
    for file in os.listdir(model_file_path):
        if file != 'ErrorViewModel.cs':
            file_path = r"{}\..\Data".format(model_file_path)
            for context in os.listdir(file_path):
                context_name = context.replace('.cs', '')
            file_path = r'{}\{}'.format(model_file_path, file)
            model_result = GetModel(file_path)
            model = model_result[0]
            attributes = model_result[1]
            id = attributes[0]
            namespace = model_result[2]
            path = r'{}\Controllers'.format(solution_path)
            file_path = r"{}\{}Controller.cs".format(path, model)
            template_file_path = r"{}\APIControllerTemplate.cs".format(start_directory)
            if os.path.exists(path) == False:
                os.mkdir(path)
            CompileControllerTemplate(model, attributes, file_path, template_file_path, namespace, context_name, id)
Now you can hit F5 and test your API functionality. Did you get stuck anywhere or do you have feedback? Please feel free to drop it below.
Future Enhancements
Future enhancements of this code include adding SQL validation prior to the web API project creation as well as the implementation of a design pattern through the use of templates, similar to the way it was done in order to create controllers.
Are you in a situation where you’re using a SQL database and you’ve already designed, created and gone through all 700 iterations of the review? Primary keys already created of type uniqueidentifier with default value of NEWID(). Then Fred comes with his clever idea of adding some sort of automation workflow on top of the database – you know, like Power Automate?
So you go down a tangent of getting info together around Power Automate triggers and actions, specific to SQL Server. You come across the two triggers: When an item is created (V2) and when an item is modified (V2).
You decide on using SQL Server “When an item is created (V2)” as a trigger and Office 365 Outlook “Send an Email (V2)” as an action.
You go into Power Automate and you connect it up to your database but when you hit test and “I’ll perform the trigger action”, you wait… And wait… And wait… And wait. Nothing’s happening?
You do more research and you find that most people use a primary key of type int and it works for them. Now you are running ahead with everything that needs to change in order to get this to work. You think of workarounds like creating views, using a different trigger, using a different automation type, heck… Even using a different workflow tool. But wait! That’s not necessary.
Here’s what to do:
Alter your table design in SQL Server Management Studio (SSMS)
Add another column, named something like “PowerAutomateID”
Make the type int
Untick “Allow Null”
Scroll down in the table properties and set Identity Specification to Yes
Save
Go test your flow
There’s no need to make the new column a Primary Key as the Power Automate trigger looks for the column that has Identity Specification set. I had a web form built on top of my database and none of the CRUD (create, read, update and delete) functionality had to be changed. It all continued to work properly, even the Power Automate process.
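If you’d rather script the change than click through SSMS, a rough equivalent using pyodbc would look something like this – the server, database, schema and table names are placeholders:
import pyodbc

# Placeholder connection details
conn = pyodbc.connect("DRIVER={ODBC Driver 17 for SQL Server};SERVER=<server>;DATABASE=<database>;Trusted_Connection=yes")
cursor = conn.cursor()

# Add a non-null int identity column - this is the column the Power Automate trigger looks for
cursor.execute("ALTER TABLE <schema>.<table> ADD PowerAutomateID INT IDENTITY(1,1) NOT NULL")
conn.commit()
conn.close()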
This is definitely something I wish I knew when I first went on this journey a few weeks ago.
If you hadn’t come across this in your research, you would most likely have loved to land on the Microsoft Docs that explain some of the limitations.
Still not working, or maybe you found a better way? Drop a comment below or send an email through to jacqui.jm77@gmail.com
As a Developer in an Agile environment, I want to appreciate the Business Analysts (BA) so that we can continue working in a respectful and collaborative manner.
Sounds like a user story right? Well maybe it should be so that we can persistently practise this principle. Often we are faced with feuds between colleagues in the tech industry (most other industries have this too) due to a difference in opinion or even work ethics. Sometimes personalities clash, the pressure and tension might get a little too much, sometimes requirements are nearly impossible to meet in the unrealistic timelines presented to the stakeholders and sometimes those timelines were decided upon without consulting the technical team.
All of the above can be considered valid points and sometimes they raise valid concerns. Granted, it is a two way street – it is important to maintain bidirectional respect between developers and BAs.
I used to be a developer who was quick to judge non-technical colleagues based on the textbook description of their job title and the reality of how they carry it out. I have recently started working with a team of BAs who have fundamentally changed the way I see non-technical colleagues (especially BAs) who set out to do the best job that they possibly can do, in-line with their job description!
That’s good and all but what is a BA really? A Business Analyst is an agent of change. They work together with organisations to help them improve their processes and systems. As developers and technical “resources” we tend to make their life a lot more complicated than that.
Those involved in Robotic Process Automation (RPA) projects would understand why I would say that the bulk of the work lies in the process understanding phase, which is generally conducted by the BA (and in some instances the Solution Architect).
I have found that developers often look down on BAs as many of them don’t possess the skill to write code. Now based on what I have experienced working with a team of highly talented BAs, I have come to realise that the greatest difference between BAs and the developers is that they communicate the same process in different ways (or languages).
Although a developer is able to translate a business requirement specification, process definition document or even a functional specification document into code, it is important to remember that it remains an in-depth depiction of the process outlined by the BA.
Okay cool… But how can I do my part as a developer? Good question! Here are a few things you should keep in mind when working with BAs:
We enable those who enable us. A good BA may come across as someone who tries to make your life easier when in actual fact, they are trying to simplify a process. Help your BA help you by providing assistance with some technical aspects of a situation if and where possible.
Appreciating and respecting anyone will generally result in improved efforts made on a project. It is a common reaction to respond more positively when feeling respected and appreciated.
Be patient. Many BAs may have an ambition to gather more technical understanding around certain things that you have insight over, however, they may not always understand your predicament. The best advice I have gotten over the last months is to ELI5 (Explain like I’m 5). It has helped me tremendously in translating the highly technical explanations in a way that removes frustration for myself (as the developer) and the BA(s). In the process of ELI5, enlighten and enable the BA to understand the situation for future reference. If explained well enough, once is enough and twice is a revision.
Learn. There is always an opportunity to learn new things. There’s nothing stopping the devs from understanding a BAs process or a BA understanding the devs process.
Stick together. I cannot emphasise how important this point is. Dealing with the business (internally or externally) is strenuous enough, dealing with tension and pressure internally can often cause conflict between colleagues too. Sticking together against stakeholders and sharing the pressure as a team helps keep the team positive and often removes some of the additional pressure. Sometimes it helps just knowing that you’re not in something alone.
What happens when you have a personality clash with the BA? Get a new one? No, it doesn’t always work like that. Like I have mentioned above (and will mention further down too), this remains a two way street. Sometimes it is a good idea to discuss an ideal working situation with the BA that you may be clashing with. It is important that all parties respect and participate in the above. The responsibility does not solely lie with the developer, however, a first step goes a long way!
Great… But how do you spark a great working relationship? Get your colleague a coffee, or tea, or hot chocolate… Or water. Or a bunch of sweets.
To the BAs and other non-techies / semi-techies out there, thank you for the work that you do! You are appreciated and hopefully your dev team will show their appreciation too. Feel free to reach out to them too. It’s a two way street.
Let’s take this opportunity to be better together!