When Your Automation Script Takes Down the Network
Three hundred endpoints. One script. A network switch gasping under the load. We inherited exactly this situation from a previous vendor during an engagement with a mid-market logistics company — their “automated” inventory script was hammering every device simultaneously, and every Monday morning at 7 AM, the operations team filed a P1 incident. The root cause was not the script logic. It was the absence of a single parameter: -ThrottleLimit.
This is a common failure mode in environments that have grown faster than their automation practices. Someone writes a script that works fine against ten machines in a test environment, and then operations deploys it against three hundred production endpoints without adjusting the concurrency model. The network does not forgive that kind of oversight.
What ThrottleLimit Actually Does
The -ThrottleLimit parameter controls how many concurrent operations a cmdlet can establish at one time. It takes an Int32 value and is optional. When you omit it, or explicitly pass 0, PowerShell calculates what it considers an optimum limit based on how many CIM cmdlets are currently running on the system. That auto-calculation is smarter than nothing, but it does not account for your network topology, your switch capacity, or what else is happening on the wire at 7 AM on Monday.
Critically, the throttle limit is scoped to the specific cmdlet invocation. It does not apply to the session, and it does not apply to the computer. If you run two separate Get-CimInstance calls in the same script without explicit throttling, each one operates independently. Setting it once does not protect you downstream.
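The same per-invocation scoping shows up elsewhere in PowerShell. As a quick local illustration (this uses ForEach-Object -Parallel from PowerShell 7, a different mechanism than the CIM cmdlets, but the scoping rule is identical): each pipeline must set its own limit.

```powershell
# PowerShell 7+: ForEach-Object -Parallel has its own -ThrottleLimit,
# and like the CIM cmdlets it is scoped to each pipeline invocation.
$first  = 1..8 | ForEach-Object -Parallel { $_ * 2 } -ThrottleLimit 4

# The limit set above does NOT carry over; this pipeline must set its own,
# otherwise it falls back to the default (5 for ForEach-Object -Parallel).
$second = 1..8 | ForEach-Object -Parallel { $_ + 1 } -ThrottleLimit 2
```

Treat every throttled call as its own contract: if the limit matters, state it at every call site.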
Where It Applies
ThrottleLimit is available across a wide range of PowerShell cmdlets in Windows Server 2025, anywhere CIM operations are involved. You will find it on networking cmdlets, DHCP management cmdlets, routing domain operations, and classic WMI wrappers like Get-WmiObject, where it takes effect when combined with -AsJob. The behavior is consistent across all of them: cap the concurrent operations, protect the infrastructure underneath.
The Logistics Client: A Cautionary Walkthrough
After inheriting the logistics company’s environment, we started by auditing what the previous vendor’s scripts were actually doing. The inventory collection script used Get-WmiObject Win32_OperatingSystem against a flat list of computer names pulled from a text file. No throttle. No job management. Just fire and forget against all 300 endpoints simultaneously.
The immediate fix was straightforward. We introduced -ThrottleLimit and moved the operation to a background job using -AsJob, which gives you non-blocking execution while the throttle governs the concurrency. Here is the revised pattern we deployed:
# Pull the target list from a managed file
$computers = Get-Content C:\Scripts\managed_endpoints.txt
# Run WMI query with controlled concurrency as a background job
$job = Get-WmiObject Win32_OperatingSystem `
-ComputerName $computers `
-ThrottleLimit 25 `
-AsJob
# Wait for completion and collect results
$job | Wait-Job | Receive-Job
Setting -ThrottleLimit 25 meant no more than 25 simultaneous WMI connections were opened at any time. The full inventory still ran against all 300 machines, but it did so in controlled batches. Monday morning P1 incidents dropped to zero. The network team stopped asking questions about the 7 AM spike.
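The arithmetic behind "controlled batches" is worth making explicit: 300 endpoints at a throttle of 25 means roughly 12 waves of connections. A small helper (ours, for illustration, not part of the deployed script) makes that easy to sanity-check before a run:

```powershell
# Illustrative helper (not from the deployed script): how many waves of
# connections a throttled run will take, ignoring per-endpoint variance.
function Get-WaveCount {
    param([int]$EndpointCount, [int]$ThrottleLimit)
    [math]::Ceiling($EndpointCount / [double]$ThrottleLimit)
}

Get-WaveCount -EndpointCount 300 -ThrottleLimit 25   # 12 waves
Get-WaveCount -EndpointCount 301 -ThrottleLimit 25   # 13 waves
```

Twelve waves of 25 connections is a load profile a switch can absorb; one wave of 300 is not.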
We cover the background job pattern in more depth in our article on PowerShell -AsJob for long-running network tasks — if you are running large-scale operations and not combining -AsJob with throttle control, you are leaving operational resilience on the table.
Choosing the Right ThrottleLimit Value
There is no universal magic number. The right value depends on three things: the number of target endpoints, the available network bandwidth between your management host and those endpoints, and what else is competing for that bandwidth during the execution window.
Our general starting points by environment size:
- Small (<50 endpoints): ThrottleLimit 10–20. You probably do not need to throttle at all, but building the habit matters for when the environment grows.
- Medium (50–200 endpoints): ThrottleLimit 20–50. Monitor switch utilization during the first few runs and adjust down if you see congestion.
- Large (200+ endpoints): ThrottleLimit 25–50. Test thoroughly. At this scale, a misconfigured value can saturate uplinks and impact production traffic.
Start lower than you think you need and tune upward, not downward. A script that takes 12 minutes instead of 8 because the throttle was conservative does not generate a P1 incident. A script that takes 6 minutes and degrades production traffic does. Boring and predictable is the goal.
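These starting points can be encoded so the choice is consistent across every script in the estate. A sketch of our rule of thumb (the function name is ours; it deliberately starts at the low end of each range):

```powershell
# Hypothetical helper encoding the starting points above (low end of each range).
function Get-StartingThrottle {
    param([int]$EndpointCount)
    if     ($EndpointCount -lt 50)  { 10 }   # small environment
    elseif ($EndpointCount -le 200) { 20 }   # medium environment
    else                            { 25 }   # large: test before raising
}

Get-StartingThrottle -EndpointCount 300   # 25
```

Tune upward from there based on observed switch utilization, never downward after an incident.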
Compliance Scanning at Scale
During a compliance audit engagement for a healthcare client — they were working toward alignment with the CIS Benchmarks for their Windows Server estate — we needed to collect configuration data from roughly 180 servers across four subnets. The audit window was narrow and the servers were production systems under active load.
We built the collection script around Get-CimInstance rather than the older WMI cmdlets, which gave us more reliable connection handling across Windows Server 2016 through 2025 nodes. One catch: Get-CimInstance does not itself expose -ThrottleLimit, so we wrapped the query in Invoke-Command, whose -ThrottleLimit caps the concurrent remoting connections across all segments:
# Collect CIM data with explicit throttle control via PowerShell remoting
$servers = Get-Content C:\Audit\server_list.txt
Invoke-Command -ComputerName $servers `
-ThrottleLimit 30 `
-ScriptBlock { Get-CimInstance -ClassName Win32_ComputerSystem } |
Select-Object Name, Manufacturer, Model, TotalPhysicalMemory |
Export-Csv C:\Audit\hardware_inventory.csv -NoTypeInformation
We ran this during the agreed maintenance window — a proper change record, approved through CAB, executed by the on-call engineer. The throttle ensured the network impact was predictable and contained. When the audit team asked for a second pass after adjusting the filter criteria, we already had the runbook documented. Re-execution was a 10-minute process, not a 2-hour investigation.
If you are doing network-layer data collection in parallel with these audits, the New-NetEventSession cmdlet is worth understanding — it lets you capture network events at the session level without the overhead of full packet capture tools.
A Caveat on Auto-Calculation
When you omit -ThrottleLimit entirely, or pass 0, PowerShell’s auto-calculation kicks in. It factors in how many CIM cmdlets are already running on the local machine and tries to pick a reasonable ceiling. This is not a bad default for ad-hoc interactive work at a management console. It is an unreliable default for production automation.
The auto-calculation does not know about your network. It does not know you are running this against a segment with a shared 100Mbps uplink. It does not know your change window ends in 45 minutes. For anything that runs on a schedule or gets included in a runbook, set the value explicitly. Document why you chose that value. Revisit it when the environment changes.
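One way to make the explicit value non-negotiable is to push it into a mandatory, range-validated parameter, so a runbook step cannot run with an unset or absurd throttle. A sketch of the pattern (the function name and the 1-50 range are ours, not a standard):

```powershell
# Hypothetical wrapper: refuses to run unless a sane throttle is supplied.
function Invoke-ThrottledInventory {
    [CmdletBinding()]
    param(
        # Mandatory and range-validated: no silent fallback to auto-calculation.
        [Parameter(Mandatory)][ValidateRange(1, 50)][int]$ThrottleLimit,
        [Parameter(Mandatory)][string[]]$ComputerName
    )
    # The real Get-WmiObject ... -ThrottleLimit $ThrottleLimit -AsJob call goes here.
    "Would query $($ComputerName.Count) endpoints with throttle $ThrottleLimit"
}
```

Calling it with -ThrottleLimit 0 fails at parameter binding, before a single connection is opened, which is exactly where you want that failure to happen.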
This is the same principle we apply to SLA targets and escalation paths: implicit assumptions fail under pressure. Explicit, documented parameters do not.
ThrottleLimit in Intune-Managed Environments
When endpoints under Microsoft Intune management are involved, the interaction between Intune policies and direct PowerShell operations deserves careful scoping. Intune-managed devices may have policy restrictions or connectivity constraints that affect how concurrent CIM connections behave, particularly around WMI namespace access.
We have seen engagements where a client’s Intune configuration was blocking certain WMI namespaces on compliant devices. The script would fail silently for a subset of endpoints — no error, just missing data — because throttled connections were being dropped rather than rejected with a useful error code. Explicit error handling is not optional at scale. Always test against a representative sample of managed devices before running against the full population.
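The only reliable way to catch those silent gaps is to diff the returned data against the target list after every run. A minimal sketch with hypothetical server names and a hand-built partial result set standing in for real query output:

```powershell
# Hypothetical target list and partial result set simulating silent drops.
$targets = 'SRV01', 'SRV02', 'SRV03', 'SRV04'
$results = @(
    [pscustomobject]@{ PSComputerName = 'SRV01'; Caption = 'Windows Server 2022' }
    [pscustomobject]@{ PSComputerName = 'SRV03'; Caption = 'Windows Server 2025' }
)

# Endpoints that returned nothing: these need investigation, not a shrug.
$missing = $targets | Where-Object { $_ -notin $results.PSComputerName }
"Missing: $($missing -join ', ')"   # Missing: SRV02, SRV04
```

Bake this diff into the runbook and log the missing list alongside the results; "no error" is not the same as "no problem."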
What a Production-Ready Script Looks Like
A well-structured script using ThrottleLimit for production operations includes the rationale for the chosen value and full logging. Here is the pattern we use in managed environments:
# Production endpoint query with controlled concurrency
# ThrottleLimit: 25 — set for 200-node environment on 1Gbps fabric
# Approved: CHG-20240318 | Change Window: 02:00-04:00
$ErrorActionPreference = 'Continue'
$computers = Get-Content C:\Scripts\prod_endpoints.txt
$logPath = "C:\Logs\wmi_collect_$(Get-Date -Format 'yyyyMMdd_HHmm').log"
$results = Get-WmiObject Win32_OperatingSystem `
-ComputerName $computers `
-ThrottleLimit 25 `
-AsJob |
Wait-Job |
Receive-Job
$results | Export-Csv C:\Reports\os_inventory.csv -NoTypeInformation
"Completed: $($results.Count) results collected" | Out-File $logPath
Change number in the comment. ThrottleLimit value documented with the rationale. Logging to a named file with a timestamp. Error action set explicitly. This is the version you hand off to the on-call team without a 30-minute briefing.
The Practical Takeaway
ThrottleLimit is a small parameter with outsized impact in large Windows environments. It appears on nearly every CIM and WMI cmdlet in the Windows Server 2025 PowerShell module library, which means there is no excuse for leaving it unconfigured in production scripts. The default auto-calculation is adequate for interactive, ad-hoc work. It is not a substitute for deliberate capacity planning in automated workflows.
Set it explicitly. Document your reasoning in the script header. Build it into your change record. Revisit it when the environment scales or the network architecture changes. That is the operational discipline that separates a script that works in production from one that works in testing and generates incidents on Monday morning.
If you are building out automated management workflows for your Windows environment and want a second set of eyes on your concurrency model or runbook structure, reach out to the SSE team. We have done this work across environments of all sizes and we know where the failure modes hide.


