The Packet Loss Nobody Could Find
The call came in on a Tuesday morning. A financial services firm we support had been logging intermittent packet loss across their Hyper-V cluster for three weeks. Their previous MSP had run Wireshark on a physical uplink port, declared the network clean, and closed the ticket. By the time they brought us in, their ops team had blamed the storage array, the NIC drivers, and the virtualization engineers — in rotation — without a single useful data point. Nobody had opened the NetEventPacketCapture PowerShell module. Nobody had run New-NetEventSession. That gap cost them twenty-two days of degraded performance and three missed SLAs.
This is the post-mortem: what we found, how we found it, and why New-NetEventSession belongs in every Windows infrastructure team’s standard runbook before the 2 AM call arrives.
What the NetEventPacketCapture Module Actually Does
Windows Server 2025 ships the NetEventPacketCapture module as a built-in tool for structured network event capture using ETW — Event Tracing for Windows. Think of it as Wireshark for the inside of the Windows networking stack, with visibility into VM-to-VM traffic, Virtual Filtering Platform (VFP) decisions, and WFP kernel-layer events that no external capture tool can reach.
A session created with New-NetEventSession defines the container: where events go, how large the log files can grow, and how trace buffers are managed. The session itself captures nothing until you attach providers. Providers are the components that actually watch specific adapters, protocol stacks, or virtual switch layers. This two-step architecture — session first, providers second — is what gives the framework its flexibility. It also means a half-configured session sitting without providers is useless, which is a failure mode I have seen more than once in client environments.
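That failure mode is easy to guard against with a quick pre-flight check. The sketch below assumes the session name from the examples later in this post; the -AssociatedEventSession parameter name is taken from the module's documented signature for Get-NetEventProvider — verify it on your build before scripting against it.

```powershell
# Sketch: confirm a session actually has providers attached before starting it.
# Session name is illustrative; -AssociatedEventSession per the module docs.
$session = Get-NetEventSession -Name "CapSession01"
$providers = Get-NetEventProvider -AssociatedEventSession $session

if (-not $providers) {
    Write-Warning "Session '$($session.Name)' has no providers attached - it will capture nothing."
}
```

A check like this belongs at the top of any capture runbook: it turns the silent half-configured-session failure into a loud one.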
Session Lifecycle: Create, Configure, Run, Remove
The workflow is linear. Build the session, attach providers, start capture, stop when done, clean up. Skip a setup step and the capture records nothing; skip cleanup and stale sessions linger on the host.
Step 1 — Create the Session
New-NetEventSession -Name "CapSession01" `
-LocalFilePath "C:\Captures\session01.etl" `
-MaxFileSize 256 `
-CaptureMode SaveToFile
The -MaxFileSize value is in megabytes. On a busy Hyper-V host, 256 MB fills faster than you expect during a load test. Set it too low and the capture rolls over before you catch the anomaly. Set it without a limit and you end up with a file that takes an hour to open. I have seen both outcomes in production engagements. The -CaptureMode parameter accepts SaveToFile for post-incident analysis, RealtimeRPC for live event streaming to a connected remote console, or RealtimeLocal for live viewing on the host itself. Use SaveToFile by default unless you are actively watching a live issue unfold.
Step 2 — Attach Providers
# Add the Windows TCP/IP stack provider
Add-NetEventProvider -Name "Microsoft-Windows-TCPIP" -SessionName "CapSession01"
# Add raw packet capture
Add-NetEventPacketCaptureProvider -SessionName "CapSession01"
The Microsoft-Windows-TCPIP provider surfaces stack-level telemetry: connection resets, retransmit counts, port conflicts. The packet capture provider gives you the raw frames. In the financial services engagement, the team had historically only used raw packet capture — they were getting frames but none of the TCP stack events that would have shown them the retransmit storm underneath. Getting both providers attached together is the difference between seeing symptoms and seeing cause.
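Once the TCPIP provider is attached, you can dial its verbosity down rather than drinking from the firehose. Add-NetEventProvider and Set-NetEventProvider both expose -Level and -MatchAnyKeyword; the specific level chosen below is an illustration, not a recommendation for every capture.

```powershell
# Sketch: restrict the already-attached TCPIP provider to warning-and-above events.
# ETW levels: 1=Critical, 2=Error, 3=Warning, 4=Informational, 5=Verbose.
# MatchAnyKeyword 0x0 here means "no keyword filter" - adjust per provider docs.
Set-NetEventProvider -Name "Microsoft-Windows-TCPIP" `
    -Level 3 `
    -MatchAnyKeyword 0x0
```

On a host pushing serious traffic, dropping from Verbose to Warning is often the difference between a capture file you can open and one you cannot.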
Step 3 — Filter for Hyper-V and VFP Traffic
Capturing everything on a busy Hyper-V host is like recording every conversation in an office building because one person is making suspicious calls. Scope it down.
# Filter to a specific physical adapter
Add-NetEventNetworkAdapter -Name "Ethernet 2" -SessionName "CapSession01"
# Attach a VFP provider for Hyper-V virtual switch traffic
Add-NetEventVFPProvider -SessionName "CapSession01"
# Scope VFP capture to a specific source IP and TCP only (protocol 6)
Set-NetEventVFPProvider -SessionName "CapSession01" `
-SourceIPAddresses "10.10.0.15" `
-IPProtocols 6
That last block is exactly what we ran in the financial services environment. Protocol 6 is TCP. When you are chasing a TCP retransmit problem on a specific VM, you do not need UDP broadcast traffic from every adapter on the host polluting your capture file. The VFP layer sits inside the Hyper-V virtual switch — no physical switch span port will ever show you what happens there.
Timeline: What We Found When We Actually Looked
Days 1–21 (before our engagement): Previous MSP captured on physical uplink ports. Saw nothing unusual. Closed the ticket twice.
Day 1 of our engagement, morning: First thing we did was run Get-NetEventSession to check for any existing capture sessions.
Get-NetEventSession
No output. The environment had never used structured NetEvent capture. Their entire network troubleshooting history in this Hyper-V environment was external tools and guesswork.
Day 1, afternoon: Built a VFP-scoped session targeting the two VMs involved in the reported packet loss. Ran a 40-minute synthetic load test.
Start-NetEventSession -Name "CapSession01"
# ... load test running ...
Stop-NetEventSession -Name "CapSession01"
Day 2, morning: Analyzed the .etl file. Found a TCP retransmit rate of 14% on connections between two specific VMs on the same Hyper-V host. The VMs shared a virtual switch — traffic never left the physical host. No external port monitor would have caught it in a decade of monitoring.
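Our first pass over the .etl file did not require a GUI tool at all. Get-WinEvent reads ETW log files directly when given -Path, and -Oldest is mandatory for .etl input. The event-ID grouping below is a generic triage pattern, not the exact query we ran in this engagement.

```powershell
# Sketch: pull TCPIP stack events out of the capture file.
# -Oldest is required when reading .etl files directly with Get-WinEvent.
$events = Get-WinEvent -Path "C:\Captures\session01.etl" -Oldest |
    Where-Object { $_.ProviderName -eq "Microsoft-Windows-TCPIP" }

# Rough triage: group by event ID to spot retransmit-related spikes.
$events | Group-Object Id | Sort-Object Count -Descending | Select-Object -First 10
```

For the raw frames, Microsoft's open-source etl2pcapng utility converts the packet capture portion of an .etl file into pcapng, so analysts who live in Wireshark can stay there.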
Root Cause: A VFP Rule Left Over from Migration
Six weeks before the incident reports started, the client’s team migrated several VMs between Hyper-V hosts. During that migration, a Virtual Filtering Platform rule was attached to the wrong VLAN segment. VM-to-VM traffic on that segment was being processed through an unnecessary NAT rule, adding latency and causing intermittent drops on retransmit. The physical network was completely innocent the entire time.
Here is my position, and I will stand behind it: infrastructure teams that do not know the NetEventPacketCapture module are flying blind on Hyper-V networks. Wireshark is a valuable tool. It does not see inside a virtual switch. The gap between what external capture tools show you and what is actually happening inside the Windows networking stack is exactly where incidents like this survive for weeks undetected. For teams running complex Windows VPS environments, this is not a theoretical risk — it is a regular source of misdiagnosed tickets.
Modifying and Cleaning Up Sessions
Sessions do not auto-delete. Stale sessions with attached providers consume ETW buffer resources without your knowledge. Make cleanup part of the capture workflow, not an afterthought.
# Verify configuration before removing
Get-NetEventSession -Name "CapSession01"
# Adjust parameters on an existing session without rebuilding it
Set-NetEventSession -Name "CapSession01" -MaxFileSize 512
# Remove session and all associated providers
Remove-NetEventSession -Name "CapSession01"
Always run Get-NetEventSession to confirm which session you are about to touch. In multi-session capture environments — where you might have one session watching VFP and another watching the TCPIP stack simultaneously — removing the wrong one loses data you cannot recover. If you want to detach individual components without tearing down the whole session, use Remove-NetEventProvider, Remove-NetEventPacketCaptureProvider, or Remove-NetEventVFPProvider selectively.
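Making cleanup routine is easier with a sweep you can paste into a runbook. This sketch assumes the session object exposes a SessionStatus property indicating whether it is running — check the property name on your build with Get-NetEventSession | Format-List * before relying on it.

```powershell
# Sketch: stop and remove every NetEvent session on the host.
# SessionStatus property name is an assumption - verify before scripting against it.
foreach ($s in Get-NetEventSession) {
    if ($s.SessionStatus -eq "Running") {
        Stop-NetEventSession -Name $s.Name
    }
    Remove-NetEventSession -Name $s.Name
}
```

Removing a session tears down its attached providers with it, so this is a full reset, not a partial one — do not run it blind on a host where a colleague may have a capture in flight.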
Running Captures Remotely with CimSession
Most cmdlets in this module accept a -CimSession parameter, which lets you target a remote Windows Server 2025 host without opening an interactive RDP session. On a production Hyper-V host, minimizing interactive sessions during an incident matters.
$cim = New-CimSession -ComputerName "HV-PROD-01"
New-NetEventSession -Name "RemoteCapture" `
-CimSession $cim `
-LocalFilePath "D:\Captures\remote.etl" `
-MaxFileSize 512
Add-NetEventPacketCaptureProvider -SessionName "RemoteCapture" -CimSession $cim
Add-NetEventVFPProvider -SessionName "RemoteCapture" -CimSession $cim
Start-NetEventSession -Name "RemoteCapture" -CimSession $cim
For captures that need to run for extended periods, pair this with the -AsJob parameter to avoid blocking your console session — our guide on PowerShell -AsJob for long-running network tasks covers that workflow in detail. When storing capture output from remote hosts, avoid landing large .etl files on the C: drive of a production server. We route client captures to dedicated remote file storage to keep production disks clean and captures accessible for analysis.
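Pulling the finished capture off the remote host is its own step. Copy-Item supports -FromSession, but that takes a PSSession, not the CimSession used above, so a second remoting channel is needed. Host names and paths below are placeholders.

```powershell
# Sketch: retrieve the finished .etl over PowerShell remoting.
# Copy-Item -FromSession requires a PSSession (not a CimSession).
$ps = New-PSSession -ComputerName "HV-PROD-01"
Copy-Item -FromSession $ps `
    -Path "D:\Captures\remote.etl" `
    -Destination "\\fileserver\captures\HV-PROD-01-remote.etl"
Remove-PSSession $ps
```

Copying the file off and deleting the original keeps production disks clean, per the storage note above.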
For teams also hardening network security at the IPsec layer, the IPsec Main Mode Crypto Sets PowerShell guide pairs well with network event monitoring — the two tools together give you both visibility and enforcement at the Windows networking layer.
One Caveat Worth Stating Plainly
The NetEventPacketCapture module captures at the Windows networking stack. It does not capture traffic that bypasses the stack entirely. If your VMs use SR-IOV — Single Root I/O Virtualization — where a NIC passes traffic directly to a VM without touching the hypervisor network layer, this module will not see that traffic. Do not assume full coverage without verifying your NIC and virtual switch configuration. We have had one engagement where a client ran a VFP capture, saw no anomalies, and concluded the network was healthy — they were using SR-IOV on that specific adapter segment. That conclusion was wrong for a different reason than they assumed.
Also account for I/O load. Running an unscoped full packet capture on a heavily utilized Hyper-V host during peak hours can impact VM performance. Use adapter filters, VFP source IP filters, and protocol filters before you press start. The official PowerShell documentation covers all available filter parameters in depth.
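One filter worth calling out specifically is frame truncation. Add-NetEventPacketCaptureProvider exposes a -TruncationLength parameter that limits how many bytes of each frame are written to disk; for a TCP retransmit investigation the headers are usually all you need. The -IpProtocols value below mirrors the TCP-only scoping used earlier in this post.

```powershell
# Sketch: capture headers only to cut disk I/O on a busy host.
# 128 bytes covers Ethernet + IP + TCP headers for typical frames.
# Protocol 6 = TCP, matching the VFP filter used earlier.
Add-NetEventPacketCaptureProvider -SessionName "CapSession01" `
    -TruncationLength 128 `
    -IpProtocols 6
```

Truncation trades away payload inspection for a dramatically smaller file and lighter write load — the right trade when you are diagnosing stack behavior rather than reading application data.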
Five Things We Changed After This Engagement
Every post-mortem should produce specific process changes, not vague intentions.
- Document virtual switch topology before migrating VMs. The misconfigured VFP rule came from an undocumented migration where nobody wrote down which VLAN segments were in use.
- Verify with NetEventSession, not only external tools. Any Hyper-V network complaint that does not reproduce on a physical port monitor starts with a VFP-scoped session now.
- Set MaxFileSize before starting. No captures run in our client environments without an explicit file size cap.
- Scope VFP provider filters on busy hosts. Unfiltered captures on production hypervisors generate noise. Noise delays diagnosis.
- Run Get-NetEventSession quarterly. Stale sessions accumulate. We added this check to our standard quarterly review checklist.
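The quarterly check in that last item is scriptable across a fleet. This sketch uses CimSession fan-out; the host names are placeholders for your own inventory source.

```powershell
# Sketch: quarterly sweep for stale capture sessions across Hyper-V hosts.
# Host names are placeholders - feed in your real inventory.
$hvHosts = "HV-PROD-01", "HV-PROD-02"
foreach ($h in $hvHosts) {
    $cim = New-CimSession -ComputerName $h
    $stale = Get-NetEventSession -CimSession $cim
    if ($stale) {
        Write-Warning "$h has $(@($stale).Count) leftover NetEvent session(s)"
    }
    Remove-CimSession $cim
}
```

Flag rather than auto-delete: a "stale" session on one host may be a colleague's long-running capture on another.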
Run This Right Now
Open a PowerShell session on one of your Hyper-V hosts and run Get-NetEventSession. If you get no output, your environment has never used structured network event capture. That does not mean the environment is healthy — it means you have never verified it at the layer that matters. Build a capture runbook using New-NetEventSession, attach your TCPIP and VFP providers, scope it to the adapters you care about, and know exactly how to start and stop it before you need it during an incident.
If your team is chasing an intermittent network issue that external tools have not explained, or if you need help building a structured capture and analysis process for a Windows Server 2025 environment, reach out to the team at SSE. We have seen too many incidents that were already solved the moment someone ran the right PowerShell command.