This guide covers a PowerShell automation audit that every IT professional should know how to run.
I used to spend two hours every Friday manually reviewing scripts before they hit production. Scan for error handling. Check the logging. Run through a mental list of twelve items and hope I remembered everything. Three times in six months, I still pushed something that broke over the weekend. The fourth time, I built this audit instead.
This is a five-checkpoint readiness audit for PowerShell automation. Run through each checkpoint before any script goes to production. Pass all five and you can deploy with confidence. Fail any one and you know exactly what to fix.
The Scope of This Audit
This audit applies to PowerShell scripts and modules built for production automation – user provisioning, scheduled tasks, configuration management, anything that runs unattended. Scripts you run interactively and throw away don’t need this level of scrutiny. Production automation absolutely does.
These patterns are written for PowerShell 7.2+. A few behave differently on Windows PowerShell 5.1 – the try/catch behavior around non-terminating errors being the main one – and I’ll flag those where it matters.
Checkpoint 1 – Error Handling Is Explicit
Here’s the version I see most often in the wild:
# Version 1 - the "fingers crossed" approach
Get-ADUser -Identity $username | Set-ADUser -Department "IT"
That script silently fails if the user doesn’t exist. No error surfaced. No log entry written. Nobody knows anything went wrong until a manager files a ticket asking why someone’s department is still wrong two weeks later.
Silent failures are worse than crashes. A crash is obvious. A silent failure lets bad state accumulate for days before anyone notices – and by then the cleanup is ten times harder than the original fix would have been.
Pass criteria: every external call wrapped in try/catch, $ErrorActionPreference = 'Stop' at the top of the script, and errors written to a persistent log – not just the console.
# Version 2 - production-ready
$ErrorActionPreference = 'Stop'

try {
    $user = Get-ADUser -Identity $username
    $user | Set-ADUser -Department "IT"
    Write-Log "Updated department for $username" -Level Info
}
catch {
    Write-Log "FAILED updating $username - $_" -Level Error
    throw
}
Fail indicator: bare cmdlet calls with no error handling.
Remediation: Add $ErrorActionPreference = 'Stop' at the top of every script. Wrap external calls in try/catch. This is not optional.
Checkpoint 2 – Parameters Are Validated
Scripts that accept user input need parameter validation. Full stop. I’ve watched admins pass blank strings, wrong data types, and values that made no sense to the cmdlet – and the script just tried to run with them anyway.
Pass criteria: [Parameter(Mandatory)] on required parameters, type constraints like [string] or [int], and [ValidateNotNullOrEmpty()] where empty values would break logic. Use [ValidateSet()] to restrict inputs when values are predictable.
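Here’s a minimal sketch of those criteria applied together. The function name and the department values in ValidateSet are placeholders – swap in whatever your environment actually uses:

```powershell
function Update-UserDepartment {
    [CmdletBinding()]
    param(
        # Required and must be non-empty - a blank string fails before any AD call runs
        [Parameter(Mandatory)]
        [ValidateNotNullOrEmpty()]
        [string]$Username,

        # Restrict to a known list (example values - adjust to your org)
        [Parameter(Mandatory)]
        [ValidateSet('IT', 'HR', 'Finance')]
        [string]$Department
    )

    Set-ADUser -Identity $Username -Department $Department
}
```

With these attributes in place, PowerShell rejects bad input before the function body executes – the caller gets an immediate, descriptive error instead of a half-run script.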
For deeper coverage of parameter patterns – including dynamic parameter sets and advanced validation techniques – the article on PowerShell Parameters, Validation & Dynamic Parameters covers exactly this ground.
Fail indicator: any production script with a bare param($username) and no type constraints or validation attributes.
Checkpoint 3 – Logging Is Structured and Persistent
Write-Host is not logging. I will die on this hill. Write-Host outputs to the console and disappears the moment the session ends. If your scheduled task runs at 3am and fails, Write-Host tells you nothing the next morning. You need a log file.
Pass criteria: timestamps on every entry, severity levels, output to a file or log service that persists after the session ends. If the script runs as a scheduled task at 3am, the only evidence of what happened is that log file.
function Write-Log {
    param(
        [string]$Message,
        [string]$Level = 'Info'
    )

    # $script:LogPath must be set near the top of the script before the first call
    $entry = "[$(Get-Date -Format 'yyyy-MM-dd HH:mm:ss')] [$Level] $Message"
    Add-Content -Path $script:LogPath -Value $entry
}
This works well for single-server scripts. For distributed automation running across many machines, you need centralized log collection – a single place to search when something fails at 3am and you don’t know which server to look at first.
Fail indicator: any use of Write-Host as the only output mechanism in a production script.
Remediation: Build a Write-Log function once, set a log file path, replace every Write-Host with Write-Log. Takes about 20 minutes per script and saves hours later.
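The swap itself is mechanical. A quick sketch, assuming the Write-Log function above and a log path set at script start (the path here is just an example):

```powershell
# Before: console-only, gone the moment the session ends
Write-Host "Updated department for $username"

# After: timestamped, leveled, and persisted to disk
$script:LogPath = 'C:\Logs\Update-UserDepartment.log'  # example path - set once per script
Write-Log "Updated department for $username" -Level Info
```

The one-time cost is setting the log path; after that, every call site is a one-line substitution.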
Checkpoint 4 – Scripts Have Pester Tests
Untested automation scripts are technical debt with a countdown timer. Every one will fail in a way you didn’t predict. That’s not pessimism – it’s the observable pattern across every team I’ve worked with. Tests are how you catch that before production does, and skipping them is a choice to pay the price later with interest.
Pass criteria: at minimum, one Pester 5.x test file per module. Tests should cover the happy path, at least one error condition, and any branching logic. Use the Arrange-Act-Assert pattern – it keeps tests readable and makes failures obvious to diagnose.
Describe "Update-UserDepartment" {
    Context "Valid user exists" {
        It "Updates department successfully" {
            # Arrange
            Mock Get-ADUser { return [PSCustomObject]@{ SamAccountName = 'jsmith' } }
            Mock Set-ADUser {}

            # Act
            Update-UserDepartment -Username 'jsmith' -Department 'IT'

            # Assert
            Should -Invoke Set-ADUser -Times 1
        }
    }
}
But Pester tests require cleanly structured functions first. If your scripts mix everything into one long file, the PowerShell Scripts, Functions, and Script Blocks guide covers the foundations you need before tests will actually be useful.
Fail indicator: no .Tests.ps1 files anywhere near your automation scripts.
Checkpoint 5 – Code Has Been Reviewed
I used to skip code review on “quick” scripts. Just a small change, I know what it does. I regretted that every single time. There is no such thing as a script too small to deserve a second pair of eyes before it runs against production Active Directory.
Pass criteria: a second person has reviewed the script, or you’ve run PSScriptAnalyzer against it. PSScriptAnalyzer catches style violations, deprecated cmdlets, and common mistakes automatically. It’s not a replacement for a human reviewer – but it catches the obvious stuff before someone else has to.
Install-Module -Name PSScriptAnalyzer -Force
Invoke-ScriptAnalyzer -Path .\Update-UserDepartment.ps1
Fail indicator: scripts written solo and deployed without any review step.
Audit Results: What to Fix First
Not all failures are equal. Here’s how to prioritize remediation once you’ve run through the checkpoints:
Fix immediately: Checkpoints 1 and 2. Unhandled errors and unvalidated input cause real damage. A script that silently fails is worse than one that doesn’t run at all – at least the latter makes the problem obvious.
Fix this sprint: Checkpoint 3. No logging means no investigation path when things break. And they will break at 2am.
Build into your workflow: Checkpoints 4 and 5. Testing and review take time upfront – but they catch problems that production will find otherwise, at the worst possible time.
Most teams fail Checkpoint 1 first. Error handling gets skipped when scripts are written under deadline pressure and never revisited. Go through your scheduled tasks. Find the ones missing try/catch. That’s where to start.
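One way to triage quickly – a rough sketch, assuming your scheduled-task scripts live under a single folder (C:\Scripts here is a placeholder):

```powershell
# Flag production scripts that contain no try/catch block at all.
# 'C:\Scripts' is a placeholder - point this at wherever your automation actually lives.
Get-ChildItem -Path 'C:\Scripts' -Filter *.ps1 -Recurse |
    Where-Object { (Get-Content -Path $_.FullName -Raw) -notmatch 'try\s*\{' } |
    Select-Object -ExpandProperty FullName
```

A simple text match like this produces false negatives (a single try/catch doesn’t mean every external call is wrapped), but it surfaces the worst offenders – scripts with no error handling whatsoever – in seconds.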
If you want help auditing your existing automation or building production-grade PowerShell practices for your team, working with an experienced consulting team can cut months off the setup time – or get in touch with SSE directly to talk through your environment.