A Full Drive at 2 AM Is Never Just a Full Drive
Last quarter, one of our managed customers had a file server hit 99% disk utilization on a Saturday night. The application logs stopped writing, their ERP system threw cryptic errors, and by Monday morning the helpdesk was fielding forty tickets that all traced back to a single C: drive nobody was watching. The fix took ten minutes. The damage to Monday productivity took considerably longer to undo. That engagement is why I now treat the ability to monitor disk space with PowerShell as non-negotiable infrastructure hygiene for every client environment we manage at SSE.
The frustrating part is that disk space monitoring is a solved problem. The tools exist natively in PowerShell. The cost is nearly zero. Yet it remains one of the most common gaps we find when onboarding new accounts.
Querying Disk Space: The Foundation
Before you can alert on anything, you need a reliable way to pull disk statistics from one or many machines. PowerShell gives you two solid approaches, and which one you pick depends on whether you are querying locally or across the network.
Local and Single-Server Queries with Get-CimInstance
The Get-CimInstance cmdlet against the Win32_LogicalDisk class is the workhorse here. It returns size, free space, and drive type in a format you can reshape to suit any reporting need. Here is the pattern we deploy across most client environments:
```powershell
$cimParams = @{
    ClassName    = 'Win32_LogicalDisk'
    Filter       = "DriveType=3"
    ComputerName = $env:COMPUTERNAME
}
Get-CimInstance @cimParams |
    Select-Object @{Name='ComputerName'; Expression={$_.SystemName}},
        DeviceID,
        @{Name='SizeGB';  Expression={[math]::Round($_.Size / 1GB, 2)}},
        @{Name='FreeGB';  Expression={[math]::Round($_.FreeSpace / 1GB, 2)}},
        @{Name='PctFree'; Expression={[math]::Round(($_.FreeSpace / $_.Size) * 100, 1)}}
```
The DriveType=3 filter limits results to local fixed disks, which is almost always what you want. Including network or removable drives tends to pollute your monitoring data with irrelevant entries. The calculated properties convert raw bytes into gigabytes and a percentage, which makes the output immediately useful to both scripts and humans reading a report.
Multi-Server Queries with Invoke-Command
When you manage dozens or hundreds of servers, which is the reality for most of our client engagements, you need to fan out the query. Invoke-Command handles this naturally because it runs against multiple computer names in parallel:
```powershell
$servers = @('FileServer01', 'AppServer02', 'SQLProd03')
Invoke-Command -ComputerName $servers -ScriptBlock {
    Get-CimInstance -ClassName Win32_LogicalDisk -Filter "DriveType=3" |
        Select-Object @{Name='ComputerName'; Expression={$_.SystemName}},
            DeviceID,
            @{Name='SizeGB';  Expression={[math]::Round($_.Size / 1GB, 2)}},
            @{Name='FreeGB';  Expression={[math]::Round($_.FreeSpace / 1GB, 2)}},
            @{Name='PctFree'; Expression={[math]::Round(($_.FreeSpace / $_.Size) * 100, 1)}}
} -ErrorAction SilentlyContinue -ErrorVariable remoteErrors
```
Notice the -ErrorVariable parameter. In production environments, some servers will be unreachable, restarting, or firewalled. Silently swallowing those errors without logging them is a mistake we see repeatedly. Capture them, log them, and treat an unreachable server as its own alert condition.
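What "capture them, log them" can look like in practice is a short post-processing loop over the error variable. This is a sketch, assuming the $remoteErrors variable populated by -ErrorVariable above; the log path is illustrative, and TargetObject usually, though not always, carries the name of the computer that failed:

```powershell
# Turn each connection failure into a logged, timestamped record.
# $remoteErrors is populated by -ErrorVariable on the Invoke-Command call above.
# The log path is illustrative, not a convention.
foreach ($err in $remoteErrors) {
    $target = $err.TargetObject   # typically the unreachable computer name
    $line   = "{0}`t{1}`tUNREACHABLE`t{2}" -f (Get-Date -Format 's'),
              $target, $err.Exception.Message
    Add-Content -Path 'C:\Scripts\Logs\DiskMonitor-errors.log' -Value $line
}
```

Feeding the same records into your alerting pipeline is what turns "server offline" from a silent gap into an alert condition in its own right.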
If you are running this across a large server fleet and need to control concurrency, Invoke-Command's -ThrottleLimit parameter, which defaults to 32 simultaneous connections, is worth understanding before you accidentally saturate your network with 500 simultaneous remote queries.
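Capping the fan-out is a one-parameter change. A minimal sketch, reusing the $servers array from the example above:

```powershell
# Limit Invoke-Command to 16 concurrent remote sessions instead of the default 32.
Invoke-Command -ComputerName $servers -ThrottleLimit 16 -ScriptBlock {
    Get-CimInstance -ClassName Win32_LogicalDisk -Filter "DriveType=3"
}
```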
Building the Alert Logic
Raw disk data is useful for dashboards and reports. But the real value is in the alert: telling someone before the drive fills up, not after. I am opinionated about this: threshold-based alerting on disk space should be percentage-based for most drives and absolute-GB-based for large volumes. A 10TB data volume at 90% full still has a terabyte of headroom. A 60GB system drive at 90% full has six gigabytes and might be hours from failure under heavy logging.
A Practical Threshold Function
```powershell
function Get-DiskSpaceAlert {
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [string[]]$ComputerName,

        # Alert when free space falls below this percentage...
        [int]$PctThreshold = 15,

        # ...or below this many gigabytes, whichever trips first.
        [int]$GBThreshold = 10
    )

    foreach ($computer in $ComputerName) {
        try {
            $disks = Invoke-Command -ComputerName $computer -ScriptBlock {
                Get-CimInstance -ClassName Win32_LogicalDisk -Filter "DriveType=3"
            } -ErrorAction Stop

            foreach ($disk in $disks) {
                $freeGB  = [math]::Round($disk.FreeSpace / 1GB, 2)
                $pctFree = [math]::Round(($disk.FreeSpace / $disk.Size) * 100, 1)

                if ($pctFree -lt $PctThreshold -or $freeGB -lt $GBThreshold) {
                    [PSCustomObject]@{
                        ComputerName = $disk.SystemName
                        Drive        = $disk.DeviceID
                        SizeGB       = [math]::Round($disk.Size / 1GB, 2)
                        FreeGB       = $freeGB
                        PctFree      = $pctFree
                        Status       = 'WARNING'
                    }
                }
            }
        }
        catch {
            # An unreachable server surfaces as a warning instead of
            # terminating the whole monitoring run.
            Write-Warning "[$($computer.ToUpper())] $($_.Exception.Message)"
        }
    }
}
```
This function uses a dual-threshold approach. The default fires when free space drops below 15% or below 10 GB, whichever trips first. The try/catch block ensures that an unreachable machine writes a warning rather than killing the entire monitoring run. During a compliance audit for a financial services client, we found that their previous monitoring script had no error handling at all: one offline server had gone unchecked, silently, for three months.
Sending Email Alerts
Once you have identified drives in a warning state, the next step is notification. PowerShell’s Send-MailMessage cmdlet is the fastest path to email alerts, though I should note Microsoft has marked it as obsolete in favor of newer MailKit-based approaches. For internal SMTP relays, which is what most of our Windows-centric clients run, it still works fine.
```powershell
$alerts = Get-DiskSpaceAlert -ComputerName $servers
if ($alerts) {
    $body = $alerts |
        ConvertTo-Html -Property ComputerName, Drive, SizeGB, FreeGB, PctFree -Title 'Disk Space Alerts' |
        Out-String

    $mailParams = @{
        From       = '[email protected]'
        To         = '[email protected]'
        Subject    = "Disk Space Alert - $(@($alerts).Count) drive(s) below threshold"
        Body       = $body
        BodyAsHtml = $true
        SmtpServer = 'smtp.company.local'
        Priority   = 'High'
    }
    Send-MailMessage @mailParams
}
```
The ConvertTo-Html pipeline turns the alert objects into a readable HTML table, which is far easier to scan in an inbox than raw text. We include the count of affected drives in the subject line so that on-call staff can triage severity from their phone notification without opening the email.
Beyond Email: Teams and Webhook Alternatives
For clients who have moved their alerting to Microsoft Teams or Slack, swapping out the email block for a webhook call is straightforward. The core monitoring logic stays identical. Only the delivery mechanism changes, which is exactly why separating the detection function from the notification function matters architecturally.
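As a sketch of that swap, here is a webhook delivery block built on the $alerts objects produced by Get-DiskSpaceAlert. The webhook URL is a placeholder you would replace with the incoming-webhook URL generated for your own channel, and the message format is a deliberately simple plain-text payload:

```powershell
# Placeholder - replace with the incoming-webhook URL for your channel.
$webhookUrl = 'https://example.webhook.office.com/webhookb2/your-webhook-id'

if ($alerts) {
    # One line per at-risk drive, joined into a single message body.
    $lines = $alerts | ForEach-Object {
        '{0} {1}: {2} GB free ({3}%)' -f $_.ComputerName, $_.Drive, $_.FreeGB, $_.PctFree
    }
    $payload = @{ text = "Disk Space Alerts`n`n$($lines -join "`n`n")" } | ConvertTo-Json

    Invoke-RestMethod -Uri $webhookUrl -Method Post -Body $payload `
        -ContentType 'application/json'
}
```

Because detection and notification are separate, this block and the email block above can coexist; you route by client preference, not by rewriting the monitor.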
Scheduling and Scaling the Script
A monitoring script that only runs when someone remembers to execute it is not monitoring. It is a diagnostic tool. The distinction matters.
Use Task Scheduler or, better yet, a Group Policy-deployed scheduled task to run the script on a regular interval. For most environments, every 30 minutes strikes the right balance between visibility and noise. Critical database servers might warrant a five-minute interval. Archive storage that barely changes can get away with twice daily.
Here is a minimal Task Scheduler registration:
```powershell
$action = New-ScheduledTaskAction -Execute 'PowerShell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\DiskMonitor.ps1'

$trigger = New-ScheduledTaskTrigger -Once -At '00:00' `
    -RepetitionInterval (New-TimeSpan -Minutes 30)

$principal = New-ScheduledTaskPrincipal -UserId 'SYSTEM' -RunLevel Highest

Register-ScheduledTask -TaskName 'DiskSpaceMonitor' -Action $action `
    -Trigger $trigger -Principal $principal `
    -Description 'Monitors disk space and sends alerts'
```
Run it as SYSTEM if you are monitoring local drives, or as a domain service account with appropriate permissions if querying remote machines. Right-sizing the service account permissions is worth the effort. A monitoring account that has local admin on every server is a liability your security team will rightfully flag.
The Caveat: When PowerShell Is Not Enough
I want to be direct about where this approach hits its ceiling. A PowerShell-based disk monitor works well for small to mid-size environments, roughly up to 200 servers. Beyond that, the lack of centralized dashboarding, historical trending, and correlation with other metrics starts to hurt. At that scale, you are better served by a dedicated monitoring platform. Products like Veeam ONE or PRTG give you disk monitoring alongside backup health, replication status, and capacity forecasting in a single pane.
But for the 10-to-100 server environments that make up the majority of our backup and managed infrastructure client base, a well-written PowerShell script running on a schedule is the right tool at the right cost. It costs nothing to license, takes an afternoon to deploy, and prevents the kind of silent disk exhaustion that turns a quiet weekend into a Monday crisis.
Putting It All Together
The complete workflow looks like this:
| Component | Purpose | Frequency |
|---|---|---|
| Get-CimInstance query | Collect disk metrics | Every run |
| Threshold evaluation | Identify at-risk drives | Every run |
| Email / webhook alert | Notify operations team | Only on threshold breach |
| Task Scheduler | Automate execution | Every 30 minutes |
| CSV / log export | Historical capacity data | Daily roll-up |
Add a daily CSV export of all disk metrics and you have rudimentary capacity trending that can inform storage procurement decisions months before a crisis hits.
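A daily roll-up along those lines can be a short script of its own. The sketch below reuses the multi-server query pattern from earlier; the $servers list and output path are illustrative, and the date-stamped file name keeps each day's snapshot separate for later trending:

```powershell
# Daily capacity snapshot: one date-stamped CSV per day (path is illustrative).
$servers = @('FileServer01', 'AppServer02', 'SQLProd03')
$stamp   = Get-Date -Format 'yyyy-MM-dd'

Invoke-Command -ComputerName $servers -ScriptBlock {
    Get-CimInstance -ClassName Win32_LogicalDisk -Filter "DriveType=3"
} -ErrorAction SilentlyContinue |
    Select-Object @{Name='Date';         Expression={$stamp}},
        @{Name='ComputerName'; Expression={$_.SystemName}},
        DeviceID,
        @{Name='SizeGB'; Expression={[math]::Round($_.Size / 1GB, 2)}},
        @{Name='FreeGB'; Expression={[math]::Round($_.FreeSpace / 1GB, 2)}} |
    Export-Csv -Path "C:\Scripts\Logs\DiskCapacity-$stamp.csv" -NoTypeInformation
```

A few months of these files, concatenated, is enough to plot growth per volume and answer "when does this drive fill up at the current rate" before procurement becomes an emergency.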
Disk space monitoring is not glamorous work. But the cost of not doing it is always higher than the cost of a few hours building the script. If your environment does not have this in place today, start with the single-server Get-CimInstance query, prove it works, then expand to multi-server with alerting. If you need help deploying this across a managed environment, reach out to our team and we will scope it properly.