[Python]: DIY Usage Monitoring with Python

Level of Difficulty: Beginner to Intermediate.

This blog post is aimed at providing basic (automated) server (or computer) monitoring by using a Python script that can write to a SQL Server database or to a CSV.

It is important to note that there are various mature platforms that offer advanced server monitoring, purchasable for an equally “mature” price. This post is an introduction to how basic usage monitoring may be done using a few Python libraries.

What are the steps?

The steps that we will be following are:

  1. Create the Python script
    1. Configure data source connection (SQL Server or CSV storage)
    2. Import necessary libraries (based on data source)
    3. Tweak usage metrics to be monitored
    4. Tweak any alerting (if and where necessary)
  2. Schedule the script to run automatically (a minimal sketch follows this list)
    1. Create the Windows task scheduled job
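
A minimal sketch of step 2 is shown below, using the schtasks command line tool from Python; the same task can also be created through the Task Scheduler GUI. The task name, interval and script path are placeholders and should be adjusted to your environment:

from subprocess import call

# registers a Windows scheduled task that runs the monitoring script every 5 minutes
# (task name, interval and script path below are placeholders)
call('schtasks /Create /SC MINUTE /MO 5 /TN "ServerMonitoring" '
     '/TR "python C:\\Scripts\\ServerMonitoring.py"')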

Deep Dive

Let’s dive deeper into the steps listed above.

Create The Python Script – CSV Integration

Ensure that the pandas and psutil libraries are installed. You can install these libraries by executing the following commands:

pip install pandas
pip install psutil

Import libraries

Use the code below to import the correct libraries:

from datetime import datetime
from subprocess import call

import pandas as pd
import psutil
import numpy as np

Create Record Metrics Method – Written to CSV

The code that we will be using in this post will monitor the following metrics:

  • CPU percentage
  • Virtual memory
  • Drive usage
  • Last Booted Time

Other metrics that can be monitored using the psutil library include:

  • Disks
  • Network
  • Sensors
  • Other system info
  • Process management
  • Windows services

Examples of the above-mentioned functionality can be found here.
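
As a quick, hedged illustration (not part of the monitoring script itself), the snippet below shows a few of these additional psutil calls; the sensors and Windows services functions are only available on supported platforms:

import psutil

# disk and network I/O counters accumulated since boot
print(psutil.disk_io_counters())
print(psutil.net_io_counters())

# battery/sensor information (returns None if no battery is present)
print(psutil.sensors_battery())

# a handful of running processes with their PIDs and names
for proc in list(psutil.process_iter(['pid', 'name']))[:5]:
    print(proc.info)

# Windows services (Windows only)
for service in list(psutil.win_service_iter())[:5]:
    print(service.name(), service.status())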

Use and adapt the code below as you see fit in order to create a method that records the metrics listed above:

def RecordMetrics():

    # gets the disk partitions in order to get a list of NTFS drives
    drps = psutil.disk_partitions()
    drives = [dp.device for dp in drps if dp.fstype == 'NTFS']
    
    # reads the existing csv
    df_csv = pd.read_csv('ServerMonitoring.csv')
    
    # assigns a new history ID for data integrity purposes
    history_id = max(df_csv['HistoryID']) + 1    

    # initialises the data frame with the appropriate values
    df = pd.DataFrame(
        {
            'CPU': [psutil.cpu_percent()],
            'VirtualMemory': [psutil.virtual_memory()[2]],
            'HistoryID' : [history_id],
            'LastBootedAt' : datetime.fromtimestamp(psutil.boot_time()).strftime('%Y-%m-%d %H:%M:%S')
            
        })

    # records the drive usage for each drive found
    for drive in drives:
        df['{}_DriveUsage'.format(drive.replace(":\\",""))] = psutil.disk_usage(drive)[3]

    # adds the current date and time stamp
    df['LoadDate'] = datetime.now()
    
    # appends the new record to the existing csv data and rewrites the file
    # (DataFrame.append was removed in newer pandas versions, so pd.concat is used instead)
    df = pd.concat([df, df_csv])
    df.to_csv('ServerMonitoring.csv', index=False)
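
Note that RecordMetrics expects ServerMonitoring.csv to already exist with a HistoryID column. A minimal, hypothetical way to seed the file before the very first run could look like this (the file name matches the one assumed above):

import os
import pandas as pd

# creates an empty ServerMonitoring.csv with a starting HistoryID of 0 so that
# the read_csv/max() calls in RecordMetrics do not fail on the very first run
if not os.path.exists('ServerMonitoring.csv'):
    pd.DataFrame({'HistoryID': [0]}).to_csv('ServerMonitoring.csv', index=False)

RecordMetrics()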

Create The Python Script – Database Integration

Ensure that the following libraries are installed (urllib is part of the Python standard library, so it does not need to be installed separately):

  • pandas
  • sqlalchemy
  • psutil
  • pyodbc

You can install these libraries by executing the following commands:

pip install psutil
pip install pandas
pip install sqlalchemy
pip install pyodbc
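
As a quick sanity check before wiring up the connection, pyodbc can list the ODBC drivers available on the machine; the driver value used in the script further down must match one of them:

import pyodbc

# lists the ODBC drivers installed on this machine,
# e.g. 'SQL Server' or 'ODBC Driver 17 for SQL Server'
print(pyodbc.drivers())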

Import libraries

Use the code below to import the correct libraries:

from sqlalchemy import create_engine, event
from datetime import datetime
from subprocess import call

import urllib
import pandas as pd
import psutil
import pyodbc
import numpy as np

Create Record Metrics Method – Written to Database

The code that we will be using in this post will monitor the following metrics:

  • CPU percentage
  • Virtual memory
  • Drive usage
  • Last Booted Time

Other metrics that can be monitored using the psutil library include:

  • Disks
  • Network
  • Sensors
  • Other system info
  • Process management
  • Windows services

Examples of the above-mentioned functionality can be found here.

Use and adapt the code below as you see fit in order to create a method that records the metrics listed above:

def RecordMetrics():

    # database connection - assign the correct values
    driver = '{SQL Server}'
    server = ''
    user = ''
    password = ''
    database = ''
    table = 'ServerMonitoring'

    params = urllib.parse.quote_plus(r'DRIVER={};SERVER={};DATABASE={};UID={};PWD={}'.format(driver, server, database, user, password))
    conn_str = 'mssql+pyodbc:///?odbc_connect={}'.format(params)
    engine = create_engine(conn_str)

    # gets the disk partitions in order to get a list of NTFS drives
    drps = psutil.disk_partitions()
    drives = [dp.device for dp in drps if dp.fstype == 'NTFS']

    # initialises the data frame with the appropriate values
    df = pd.DataFrame(
        {
            'CPU': [psutil.cpu_percent()],
            'VirtualMemory': [psutil.virtual_memory()[2]],
            'LastBootedAt' : datetime.fromtimestamp(psutil.boot_time()).strftime('%Y-%m-%d %H:%M:%S')
        })

    # records the drive usage for each drive found
    for drive in drives:
        df['{}_DriveUsage'.format(drive.replace(":\\",""))] = psutil.disk_usage(drive)[3]

    # adds the current date and time stamp
    df['LoadDate'] = datetime.now()
    
    # use if_exists="replace" on the first run if the table does not yet exist, then add HistoryID (or ID) as an auto-incremented primary key
    df.to_sql(table, engine, if_exists="append", index=False)
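
Following on from the comment above, once the table has been created by the first load, an auto-incremented HistoryID primary key can be added on the SQL Server side. A one-off, hedged sketch (the constraint name is an assumption, and engine is the SQLAlchemy engine created in RecordMetrics) could look like this:

from sqlalchemy import text

# one-off: adds an auto-incrementing HistoryID primary key to the table
# created by the initial to_sql(..., if_exists="replace") load
with engine.begin() as conn:
    conn.execute(text(
        'ALTER TABLE dbo.ServerMonitoring '
        'ADD HistoryID INT IDENTITY(1,1) CONSTRAINT PK_ServerMonitoring PRIMARY KEY'
    ))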

Create CPU Alerting Method – Database Integration

The method below pulls the last five CPU records from SQL, calculates their average and makes a decision based on that average. The tolerance may be set accordingly and the print may be replaced with instant message notifications (available here):

def CheckCPU():
    
    # engine is the SQLAlchemy engine created earlier; define it at module level so it can be shared across methods
    df_CPU = pd.read_sql('SELECT TOP(5) CPU FROM dbo.ServerMonitoring ORDER BY LoadDate DESC', engine)
    
    avg_CPU = df_CPU['CPU'].mean()
    
    # if the average CPU usage is above 90%
    if avg_CPU > 90:
        
        print('High CPU Usage Detected: {}%'.format(round(avg_CPU,2)))
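
If an e-mail alert is preferred over the instant message approach linked above, the print could be swapped for a small helper along these lines (the SMTP host, port, addresses and password are placeholders):

import smtplib
from email.message import EmailMessage

def SendAlert(subject, body):
    # sends a plain-text alert e-mail; host, port and addresses are placeholders
    msg = EmailMessage()
    msg['Subject'] = subject
    msg['From'] = 'monitoring@example.com'
    msg['To'] = 'admin@example.com'
    msg.set_content(body)

    with smtplib.SMTP('smtp.example.com', 587) as server:
        server.starttls()
        server.login('monitoring@example.com', 'password')
        server.send_message(msg)

CheckCPU could then call SendAlert with the same message instead of printing it.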

Create Boot Time Alerting Method – Database Integration

The method below pulls the last recorded boot time from SQL and makes a decision accordingly. The tolerances may be adjusted and the print may be replaced with instant message notifications (available here):

def CheckBootTime():
    
    # pulls the most recent boot time from SQL (engine as created earlier)
    df_boot = pd.read_sql('SELECT TOP(1) LastBootedAt FROM dbo.ServerMonitoring ORDER BY LoadDate DESC', engine)
    
    boot_time = pd.to_datetime(df_boot['LastBootedAt'].iloc[0])
    current_timestamp = datetime.now()
    
    up_time = current_timestamp - boot_time
    
    # breaks the uptime down into days, hours and minutes
    days = up_time.days
    hours, remainder = divmod(up_time.seconds, 3600)
    mins, secs = divmod(remainder, 60)

    # if the PC was last rebooted within the last 10 hours
    if days == 0 and hours < 10:

        print('Server was rebooted {} hours and {} minutes ago'.format(hours, mins))
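
To tie the database-integration methods together, the bottom of the script could simply call them in sequence whenever it is run, for example:

if __name__ == '__main__':
    # record the current metrics, then run the alerting checks
    RecordMetrics()
    CheckCPU()
    CheckBootTime()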

That’s it folks! The GitHub repo is open for contribution, so please do feel free to contribute or leave a comment below on how your solution turns out!

If you have any questions, issues or feedback, drop a comment below or reach out to jacqui.jm77@gmail.com
