The Script That Hammered a SQL Box at 3AM
We had a client whose nightly reconciliation script was melting a SQL server every night around 3AM. The script pulled roughly 40,000 objects from an API and shoved each one down the pipeline into a database writer. One row at a time. One connection round-trip at a time.
Their ops lead blamed the database. I blamed the pipeline. PowerShell OutBuffer fixed it in four lines. This audit walks through how pipeline buffering actually works, when it helps, and when it quietly does nothing. If you’ve ever stared at a slow pipeline wondering why batching isn’t kicking in, this is for you.
We’ll use Jeff Hicks’s Get-Foo / Trace-Foo / Export-Foo demo pattern because it’s the clearest way to see what the engine is doing. Each checkpoint has a pass/fail criterion. Run them on a test box before you trust any of this in production.
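Jeff Hicks's originals aren't reproduced here, so here's a minimal sketch of the three demo functions that's enough to follow along. The names match the pattern; the bodies, messages, and colors are my own stand-ins, not his exact code.

```powershell
function Get-Foo {
    # Emits the integers 1..$Max, announcing each one the moment it leaves.
    [CmdletBinding()]
    param([int]$Max = 10)
    for ($i = 1; $i -le $Max; $i++) {
        Write-Host "Get-Foo emitting $i" -ForegroundColor Green
        $i
    }
}

function Trace-Foo {
    # Announces each pipeline object as it arrives, then passes it downstream.
    [CmdletBinding()]
    param([Parameter(ValueFromPipeline)]$InputObject)
    process {
        Write-Host "Trace-Foo processing $InputObject" -ForegroundColor Yellow
        $InputObject
    }
}

function Export-Foo {
    # Terminal stage: announces each object it receives.
    [CmdletBinding()]
    param([Parameter(ValueFromPipeline)]$InputObject)
    process {
        Write-Host "Export-Foo received $InputObject" -ForegroundColor Cyan
    }
}
```

The [CmdletBinding()] attribute matters: it's what gives each function the common parameters, including -OutBuffer.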
Checkpoint 1: Default Pipeline Behavior
By default, PowerShell streams. Each object produced in a Process block is handed to the next cmdlet the moment it’s created. Nothing waits. Nothing batches.
Run this and watch the order of operations:
Get-Foo -Max 10 | Trace-Foo
Pass: Trace-Foo reports processing one item at a time, interleaved with Get-Foo output.
Fail: You see batches. If that happens, something upstream has already set a default -OutBuffer through $PSDefaultParameterValues. Nuke that first.
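One quick way to check for (and clear) a lingering default. This assumes the default was set with the usual wildcard key, '*:OutBuffer'; adjust the key if someone scoped it to a specific cmdlet.

```powershell
# List any defaults that mention OutBuffer
$PSDefaultParameterValues.GetEnumerator() | Where-Object { $_.Key -like '*OutBuffer*' }

# Remove the common wildcard form if present
$PSDefaultParameterValues.Remove('*:OutBuffer')
```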
Checkpoint 2: Add OutBuffer and Count the Batch Size
Here’s the gotcha that trips people up. When you set -OutBuffer 4, PowerShell doesn’t send groups of 4. It sends groups of OutBuffer + 1. So 4 means batches of 5. Don’t ask me why — it’s just how the engine was built.
Get-Foo -Max 10 -OutBuffer 4 | Trace-Foo
Pass: Get-Foo emits 5 objects before Trace-Foo starts processing them. Then the next 5.
Fail: You see groups of 4 or singletons. Check you’re actually piping, not passing -InputObject as a parameter (see Checkpoint 3).
Microsoft’s about_CommonParameters documentation covers this, but it’s buried.
Checkpoint 3: The Parameter vs Pipeline Trap
This one bit me on a client engagement two years back. The junior admin had written Trace-Foo -InputObject (Get-Foo -OutBuffer 2 -Max 10) thinking it would batch. It doesn’t. Parentheses execute Get-Foo completely before Trace-Foo ever starts. There is no pipeline. OutBuffer does nothing.
Pass: Batching happens only in real pipeline chains joined with |.
Fail: Subexpressions, variable assignments, or -InputObject parameters — the buffering is silently ignored.
Rule of thumb: if there’s no pipe character between the two commands, OutBuffer isn’t in play. Worth drilling into your team the same way you drill Select-Object fundamentals.
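Side by side, assuming the Get-Foo/Trace-Foo demo pattern from earlier:

```powershell
# No pipe: the parenthesized call runs to completion first, so there is no
# pipeline and -OutBuffer is silently ignored. Trace-Foo's Process block
# runs once, with the whole collection bound as a single parameter value.
Trace-Foo -InputObject (Get-Foo -OutBuffer 2 -Max 10)

# Real pipe: objects are released in batches of OutBuffer + 1 (here, 3),
# and Trace-Foo's Process block runs once per object.
Get-Foo -OutBuffer 2 -Max 10 | Trace-Foo
```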
Checkpoint 4: Chaining Buffers Across Multiple Cmdlets
You can set OutBuffer on every stage of a pipeline. Each stage buffers independently.
Get-Foo -OutBuffer 4 -Max 20 | Trace-Foo -OutBuffer 9 | Export-Foo
What you’ll see in the output: Get-Foo emits 5 objects, Trace-Foo processes them one at a time internally but holds its output. Only when Trace-Foo has 10 objects queued does Export-Foo suddenly fire ten times in quick succession.
Pass: Export-Foo output comes in visible bursts of 10.
Fail: Export-Foo fires immediately per object — your OutBuffer on Trace-Foo isn’t being applied.
Checkpoint 5: When Buffering Actually Helps
Here’s my opinionated take: OutBuffer is useless 95% of the time. For in-memory object filtering, it adds overhead for zero benefit. The other 5% is where it earns its keep.
Situations where I’ve seen it pay off on client systems:
– Database writes where each object triggers a connection or transaction
– REST API calls where you’re rate-limited and want to batch before a ForEach-Object that builds a bulk payload
– Network transfers where packet coalescing matters
– Archive jobs touching managed email retention systems that prefer batched commits
Caveat: OutBuffer doesn’t force the consuming cmdlet to process in batches. It just controls when objects are released. The next cmdlet still processes them one at a time in its Process block. If you want true batched processing, you need internal buffering with a generic list in your function.
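You can see the caveat with a plain ForEach-Object, again assuming the Get-Foo demo function: the downstream Process block still fires once per object; only the release timing changes.

```powershell
# Ten separate Process-block invocations either way; the buffer only
# controls when the upstream cmdlet releases objects (bursts of five here).
Get-Foo -Max 10 -OutBuffer 4 | ForEach-Object { "process block ran for $_" }
```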
Checkpoint 6: Internal Buffering With a Generic List
This is what I reach for when OutBuffer isn’t enough. Build a [System.Collections.Generic.List[object]] in Begin, append in Process, flush in chunks, and flush any remainder in End.

function Invoke-BatchedExport {
    # Function name and Flush-Batch are placeholders for your own bulk writer
    param([Parameter(ValueFromPipeline)]$InputObject)
    begin { $buffer = [System.Collections.Generic.List[object]]::new() }
    process {
        $buffer.Add($InputObject)
        if ($buffer.Count -ge 100) {
            Flush-Batch $buffer
            $buffer.Clear()
        }
    }
    end { if ($buffer.Count) { Flush-Batch $buffer } }
}
Pass: Your End block processes any leftover objects. Forgetting this is the #1 mistake.
Fail: You lose the last partial batch. I’ve watched a client’s script drop 47 records this way before we caught it in QA.
Checkpoint 7: Measure, Don’t Guess
Before you sprinkle OutBuffer everywhere, measure. Use Measure-Command with and without buffering on a representative dataset. If the delta is under 10%, walk away — you’re not fixing anything, you’re just adding mystery to the script.
Pass: Measurable improvement greater than 10% on realistic load.
Fail: Margin of error. Remove the buffer. Keep the script readable.
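A minimal harness, again using the Get-Foo/Export-Foo demo pattern; the dataset size and buffer value are illustrative, not recommendations.

```powershell
$baseline = Measure-Command { Get-Foo -Max 5000 | Export-Foo }
$buffered = Measure-Command { Get-Foo -Max 5000 -OutBuffer 99 | Export-Foo }

# Positive delta means buffering helped; anything inside the margin of
# error means it didn't.
$delta = ($baseline.TotalMilliseconds - $buffered.TotalMilliseconds) / $baseline.TotalMilliseconds
'Improvement: {0:P1}' -f $delta
```

Run each variant several times and compare medians; a single Measure-Command sample is noisy.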
For deeper pipeline tracing, pair this with Trace-Command — for example, Trace-Command -Name ParameterBinding -PSHost -Expression { ... } shows exactly how each object binds as it moves down the pipeline.
Audit Summary
Seven checkpoints. Most scripts pass Checkpoints 1 through 3 by accident and never need 4 through 7. If you’re writing pipeline-heavy functions that talk to databases, archives, or rate-limited APIs, OutBuffer is a small tool worth knowing. For everything else, leave it alone.
The takeaway: before you touch OutBuffer, prove with Measure-Command that your pipeline is actually the bottleneck. Then decide whether engine-level OutBuffer or internal list buffering fits your case. If you’d rather have our team audit your automation scripts instead of guessing, reach out here.