The Deployment That Shouldn’t Have Gone Live
During an incident response engagement last year, we pulled the deployment logs for a mid-sized financial services company and found something that should have stopped everyone cold: a PowerShell script with hardcoded service account credentials had been pushed directly to production. Twice. The account held domain admin rights. The script had been sitting in a semi-public internal repository for eleven days before anyone noticed. That’s not a DevOps failure — that’s a DevSecOps failure, and it’s exactly the kind of gap that DevSecOps best practices exist to close before the call comes in at 2AM.
DevOps Velocity Is Real. So Is the Attack Surface It Creates.
The Puppet Labs State of DevOps report, based on surveys of more than 4,000 IT operations professionals, puts numbers to what security teams already suspect: organizations with mature DevOps practices deploy code 30 times more frequently, move changes from commit to production 8,000 times faster, experience 50% fewer failures, and restore service 12 times faster after an incident. Those gains are real. We’ve seen similar trajectories at client environments running PowerShell DSC-based pipelines.
But velocity without security gates is how you ship a backdoor. The faster you deploy, the faster a compromised build artifact reaches production. The attack surface doesn’t shrink because your pipeline is elegant — it expands proportionally with deployment frequency unless you deliberately build security into each stage.
Rebuilding a Pipeline from Scratch: What We Actually Did
A client in the professional services sector came to us with a growing development team and zero security integration in their CI/CD pipeline. They were pushing to Azure DevOps, deploying PowerShell scripts to manage Active Directory and server configurations, and their entire security review process was a Slack message to the senior sysadmin. We rebuilt their pipeline over six weeks. Here’s the exact sequence, including what broke along the way.
Step 1: Static Analysis as a Hard Gate (PSScriptAnalyzer + InjectionHunter)
The first control we added was PSScriptAnalyzer — Microsoft’s static analysis tool for PowerShell scripts and modules. Install it once, run it on every commit, fail the build on errors:
# Install PSScriptAnalyzer in your pipeline agent environment
Install-Module -Name PSScriptAnalyzer -RequiredVersion 1.21.0 -Force
# Run against your script directory — catches style violations AND security anti-patterns
Invoke-ScriptAnalyzer -Path .\scripts\ -Severity Error,Warning -Recurse
PSScriptAnalyzer catches credential exposure patterns, unsafe string construction, and execution policy bypasses. What it won’t catch is injection vulnerabilities in scripts that accept user-controlled input. That’s where InjectionHunter fills the gap. It’s specifically built to identify patterns where attacker-controlled data could be passed into dangerous PowerShell constructs — the kind of code path that maps directly to MITRE ATT&CK T1059.001:
# Install InjectionHunter
Install-Module -Name InjectionHunter -Force
# InjectionHunter ships its detections as custom rules that PSScriptAnalyzer
# loads via -CustomRulePath — run them against a specific deployment script
$rules = (Get-Module -ListAvailable InjectionHunter).Path
Invoke-ScriptAnalyzer -Path .\scripts\deploy.ps1 -CustomRulePath $rules
We caught three injection-prone patterns in the client’s existing scripts on the first run. None had ever surfaced in a code review. The developers weren’t incompetent — they simply weren’t looking for that class of problem.
One early mistake: we initially set both tools to warn rather than block. The team ignored the warnings within two weeks. Change it to a hard pipeline failure. If PSScriptAnalyzer or InjectionHunter throws an error, the build stops. Non-negotiable.
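A minimal version of that hard gate can be sketched as a single pipeline step. Paths, severity thresholds, and the error message are illustrative, not the client's exact configuration:

```powershell
# Collect findings from both analyzers, then fail the build if anything surfaced
$findings = @(Invoke-ScriptAnalyzer -Path .\scripts\ -Severity Error,Warning -Recurse)

# InjectionHunter runs as custom rules through PSScriptAnalyzer
$injectionRules = (Get-Module -ListAvailable InjectionHunter).Path
$findings += @(Invoke-ScriptAnalyzer -Path .\scripts\ -CustomRulePath $injectionRules -Recurse)

if ($findings.Count -gt 0) {
    $findings | Format-Table RuleName, Severity, ScriptName, Line -AutoSize
    throw "Static analysis gate failed with $($findings.Count) finding(s)."
}
```

Throwing (rather than writing a warning) is what makes the gate non-negotiable: the agent reports a failed step and the deployment never starts.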
Step 2: Eliminating Hardcoded Credentials — All of Them
Every client environment we’ve audited over the past three years has contained at least one script with credentials embedded in plain text. Service account passwords, API keys, database connection strings — sometimes committed to version control and sitting in git history long after the developer thought they’d removed them.
For this client’s Azure-based workloads, we migrated all secrets to Azure Key Vault and modified scripts to retrieve credentials at runtime only, scoped to the executing service principal identity:
# Retrieve secret from Azure Key Vault — no credential lives in the script file
# The service principal's managed identity handles authentication
$secret = Get-AzKeyVaultSecret -VaultName "prod-keyvault" -Name "SqlServiceAccount" -AsPlainText
# Credential exists only in memory for the duration of this execution
$credential = [PSCredential]::new(
    "svc-sql",
    (ConvertTo-SecureString $secret -AsPlainText -Force)
)
For on-prem environments without Azure, Windows Credential Manager or HashiCorp Vault provides equivalent separation. The rule is simple: if the credential appears in the script file, it’s compromised. Not potentially compromised. Compromised. Treat it that way.
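For the on-prem path, Microsoft's SecretManagement module provides a uniform retrieval pattern in front of backends like Credential Manager or HashiCorp Vault. A sketch, assuming a vault extension has already been registered under the illustrative name "OnPremStore" (the extension module itself is environment-specific):

```powershell
# One-time setup: install the abstraction layer and register a backend
Install-Module Microsoft.PowerShell.SecretManagement -Force
# Register-SecretVault -Name OnPremStore -ModuleName <backend extension module>

# Runtime retrieval — Get-Secret returns a SecureString by default,
# so the plaintext never touches the script file or the transcript
$secure = Get-Secret -Name SqlServiceAccount -Vault OnPremStore
$credential = [PSCredential]::new('svc-sql', $secure)
```

The retrieval code stays identical if you later swap the backend, which keeps the secrets migration from rippling through every deployment script.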
Step 3: Just Enough Administration for Every Deployment Account
JEA (Just Enough Administration) is the most underused privilege control in Windows environments. It constrains what a remote PowerShell session can do — restricting available cmdlets, parameters, and command construction — for a given role. We created JEA role capability files for every service account running automated deployment tasks:
# Create a JEA Role Capability file for the deployment service account
New-PSRoleCapabilityFile -Path .\JEA\DeploymentRole.psrc `
    -VisibleCmdlets @(
        @{ Name = 'Restart-Service'; Parameters = @{ Name = 'Name'; ValidateSet = 'AppPool','W3SVC' } },
        @{ Name = 'Get-Service'; Parameters = @{ Name = 'Name'; ValidateSet = 'AppPool','W3SVC' } }
    ) `
    -VisibleFunctions 'Get-DeploymentStatus' `
    -ScriptsToProcess '.\JEA\Startup.ps1'
The deployment account could restart exactly two services. Nothing else. When a threat actor compromised that account three months later through a targeted phishing attack, lateral movement was structurally blocked. They held valid credentials with nowhere to go — a textbook T1078 (Valid Accounts) attempt that JEA neutralized before we even received the alert.
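The role capability file only takes effect once it sits in a module's RoleCapabilities folder and is wired to a registered session configuration. A sketch of that registration step — the module path, endpoint name, and CONTOSO group are illustrative:

```powershell
# Role capability files must live under a module's RoleCapabilities folder
$modulePath = 'C:\Program Files\WindowsPowerShell\Modules\DeploymentJEA\RoleCapabilities'
New-Item -Path $modulePath -ItemType Directory -Force | Out-Null
Copy-Item .\JEA\DeploymentRole.psrc -Destination $modulePath

# Map the deployment account's group to the role and lock the session down
New-PSSessionConfigurationFile -Path .\JEA\Deployment.pssc `
    -SessionType RestrictedRemoteServer `
    -RunAsVirtualAccount `
    -RoleDefinitions @{ 'CONTOSO\DeploymentAccounts' = @{ RoleCapabilities = 'DeploymentRole' } }

Register-PSSessionConfiguration -Name 'Deployment' -Path .\JEA\Deployment.pssc -Force
```

The deployment pipeline then connects with `-ConfigurationName 'Deployment'`, and everything outside the role capability file simply doesn't exist in that session.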
The Audit Trail That Saved a Compliance Review
Six months into the engagement, the client went through an ISO 27001 certification audit. The auditors asked for evidence of change control on production scripts. Because we’d configured PowerShell script block logging and module logging from day one, we produced a complete record of every script executed, by which account, from which host, and what it did. No scrambling. No gaps. The auditors moved on within the hour.
Enable this through Group Policy under Computer Configuration → Administrative Templates → Windows Components → Windows PowerShell. Enable both “Turn on PowerShell Script Block Logging” and “Turn on Module Logging,” then forward events (Event IDs 4103 and 4104) to your SIEM. This takes about fifteen minutes to configure and deploy at scale. It is not optional.
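Where Group Policy isn't practical — standalone hosts, golden image builds — the same two toggles can be applied directly through the policy registry keys the GPO writes. A sketch (run elevated):

```powershell
# Registry equivalents of the two GPO settings above
$base = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell'

New-Item -Path "$base\ScriptBlockLogging" -Force | Out-Null
Set-ItemProperty -Path "$base\ScriptBlockLogging" -Name EnableScriptBlockLogging -Value 1

New-Item -Path "$base\ModuleLogging" -Force | Out-Null
Set-ItemProperty -Path "$base\ModuleLogging" -Name EnableModuleLogging -Value 1

# Module logging also needs a list of modules to record — '*' logs all of them
New-Item -Path "$base\ModuleLogging\ModuleNames" -Force | Out-Null
Set-ItemProperty -Path "$base\ModuleLogging\ModuleNames" -Name '*' -Value '*'
```

Events then land in the Microsoft-Windows-PowerShell/Operational log, which is the channel your SIEM forwarder should subscribe to for 4103 and 4104.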
For a detailed breakdown of what post-incident forensics looks like when logging is properly in place — including which log sources matter and how to collect them under pressure — the Digital Forensics for Incident Response field guide covers the exact workflow we use during IR engagements.
Where Most Teams Actually Stall
Here’s a position I’ll defend directly: most DevSecOps programs fail at code review, not tooling. Adding PSScriptAnalyzer and InjectionHunter to a pipeline takes an afternoon. Getting developers to meaningfully review each other’s scripts for security issues requires a process change and a checklist anchored to something like the OWASP Secure Coding Practices framework — because a developer reviewing deployment automation won’t naturally look for T1078 abuse patterns or unsanitized variable injection.
The fix is structural. Require that any script touching privileged accounts or accepting external data goes through a second reviewer working from a security-specific checklist, not a general code quality review. Automate what tooling can catch. Manual review handles business logic flaws, abuse-case scenarios, and anything requiring threat modeling intuition.
A caveat worth naming clearly: no toolchain is complete. PSScriptAnalyzer misses logic-layer vulnerabilities. InjectionHunter won’t flag a script that exfiltrates data through a legitimate API call that happens to be attacker-controlled. Static analysis is one layer in a kill chain-aware defense, not a substitute for threat modeling the pipeline itself.
Build Artifacts and Backup State Are Attack Targets
One gap that rarely makes it into DevSecOps conversations: your deployment packages and configuration state backups are high-value targets. An attacker who can tamper with a build artifact before it reaches production bypasses every gate you’ve built upstream. We’ve seen a supply chain attempt against a client’s internal package repository — a textbook T1195 (Supply Chain Compromise) that nearly succeeded because unsigned packages were being pulled and executed without verification.
Sign your scripts with a code signing certificate. Enforce execution policy at the endpoint to reject unsigned code. And store your configuration state — DSC configurations, JEA role files, Group Policy backups — in infrastructure that’s isolated from your production attack surface. A properly segmented backup solution means that even if your deployment pipeline is compromised end to end, you have a verified clean state to recover from. This isn’t a DevOps afterthought. It’s a recovery time objective problem.
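Signing and enforcement are a few lines each. A sketch, assuming a code signing certificate is already installed in the machine certificate store (the timestamp server URL is one common public option, not a requirement):

```powershell
# Sign the deployment script with the first code signing cert in the store;
# timestamping keeps the signature valid after the cert itself expires
$cert = Get-ChildItem Cert:\LocalMachine\My -CodeSigningCert | Select-Object -First 1
Set-AuthenticodeSignature -FilePath .\scripts\deploy.ps1 -Certificate $cert `
    -TimestampServer 'http://timestamp.digicert.com'

# Reject anything unsigned at the endpoint
Set-ExecutionPolicy -ExecutionPolicy AllSigned -Scope LocalMachine
```

Signing belongs in the pipeline itself, after the static analysis gates pass — an artifact that reaches production unsigned should be treated as evidence of tampering, not an oversight.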
Forrester research on DevSecOps maturity consistently shows that organizations treating configuration backup and recovery as first-class pipeline concerns recover from pipeline compromises in hours rather than days. The ones that don’t treat it that way are the ones calling us at 2AM.
What the Velocity Numbers Actually Mean for Security
Returning to those Puppet Labs figures: 50% fewer failures and 12x faster service restoration aren’t just operational metrics. From a security posture perspective, they’re resilience metrics. A team that restores service 12 times faster also recovers from a ransomware hit faster. Smaller, more frequent deployments mean smaller change sets, which means faster root cause identification after an incident — because the blast radius of any single change is contained.
We tracked this at the same client over a 12-month period after the pipeline rebuild. Incidents that previously required 4-6 hours of investigation dropped to under 45 minutes to diagnose and remediate, largely because the audit trail made root cause analysis deterministic rather than investigative. For the detection and hunting capabilities that complement a mature DevSecOps program — particularly around telemetry requirements — the SOC readiness audit walkthrough covers what your SIEM actually needs to make this work.
Start Here, in This Order
If you’re starting from zero, sequence matters more than comprehensiveness. Here’s the order we’d recommend:
- Add PSScriptAnalyzer and InjectionHunter to your pipeline as blocking gates. Errors fail the build.
- Audit every script in production for hardcoded credentials this week. Move them to a secrets manager before anything else.
- Enable PowerShell script block logging and module logging across all managed endpoints via Group Policy and forward to your SIEM.
- Define JEA role capability files for every service account executing automated scripts against privileged systems.
- Build a security-focused code review checklist and make it a required gate for scripts touching credentials or external data sources.
That’s a six-week roadmap for most teams, not a multi-quarter initiative. The tools are free, the configuration is documented, and the patterns are consistent across environments. The gap is almost always prioritization, not capability.
If you want a structured assessment of where your current pipeline stands — or help implementing these controls across a managed environment — reach out to our team. We’ve run this engagement across financial services, healthcare, and professional services sectors. The entry points differ. The gaps don’t.