Three months into a major storage infrastructure refresh for a financial services client, we hit an unexpected bottleneck. Their SQL Server cluster was saturating CPU during peak transaction windows — not because the processors lacked capacity, but because every high-throughput storage transfer was routing through the main processor. The servers had headroom; the network stack did not. That engagement gave me a firsthand view of why PowerShell NetAdapter RDMA management belongs in every Windows Server 2025 deployment playbook, and why treating RDMA as an optional or advanced feature is a decision that costs real performance in real production environments.
What RDMA Actually Does for Your Infrastructure
Remote Direct Memory Access — RDMA — enables network adapters to transfer data directly between memory regions on separate machines without involving the main processor on either end. The processor initiates and terminates the connection, but the actual data movement happens independently of CPU cycles. For any workload that moves large volumes of data across the network repeatedly — SQL Server, SMB Direct, Hyper-V live migration, Storage Spaces Direct — this distinction translates directly into lower latency and meaningfully reduced CPU overhead during peak load periods.
This is not exclusively a high-performance computing feature. In Windows Server 2025, RDMA is the underlying transport for SMB Direct, which is the high-performance path for file server and storage workloads. If you have Storage Spaces Direct deployed — or if you are using Scale-Out File Server as a backend for Hyper-V — then RDMA is already part of the intended architecture. The question is whether it is actually enabled and configured correctly on your adapters.
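One quick way to answer that question at the SMB layer, rather than the adapter layer, is the built-in SmbShare module. The sketch below uses two standard cmdlets to confirm whether interfaces and active connections are actually negotiating RDMA rather than silently falling back to TCP:
# Show whether the server's network interfaces are advertised to SMB as RDMA-capable
Get-SmbServerNetworkInterface
# On an SMB client, list active multichannel connections; RDMA-negotiated paths are flagged in the output
Get-SmbMultichannelConnection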
The CPU Bypass — Why It Matters at Scale
A useful way to think about the difference between traditional Ethernet and RDMA: conventional networking routes all data through the OS network stack, meaning every byte transferred passes through kernel buffers and processor cycles. RDMA creates a dedicated channel — the adapter handles the memory-to-memory transfer independently, and the OS stack is bypassed for the data payload. The processor overhead that remains covers only the signaling to set up and tear down connections, not the data itself.
At 25GbE and above, this distinction becomes measurable under load. A 100GbE adapter handling storage traffic for a hyperconverged cluster can generate enough throughput to meaningfully tax the CPU on a traditional networking stack. The same adapter with RDMA enabled routes that traffic directly, preserving processor cycles for the compute workloads the server is actually meant to run. This is one of the core architectural reasons modern hyperconverged infrastructure designs specify RDMA-capable adapters as a requirement rather than an optional upgrade.
Auditing Your Current RDMA State with Get-NetAdapterRdma
Before enabling or modifying anything, the correct first step is to understand what state you are currently in. The Get-NetAdapterRdma cmdlet retrieves the RDMA properties for network adapters on the local system. In environments with multiple physical NICs — a typical configuration in storage-heavy or hyperconverged deployments — this gives you an immediate view of which adapters are RDMA-capable and which currently have RDMA enabled.
# Retrieve RDMA properties for all network adapters on the local system
# Output is a CimInstance object containing RDMA capability and state information
Get-NetAdapterRdma
The output object is a Microsoft.Management.Infrastructure.CimInstance — the CIM object type PowerShell's management infrastructure uses to represent adapter state. The key fields to examine are the adapter name, interface description, whether RDMA is currently enabled, and whether the adapter is RDMA-capable at all. An adapter appearing in the output does not necessarily mean RDMA is active — it means the system detected an RDMA-capable device. To filter specifically for adapters where RDMA is enabled:
# List only adapters where RDMA is currently enabled
Get-NetAdapterRdma | Where-Object { $_.Enabled -eq $true }
Remote Fleet Auditing with CimSession
In multi-node environments — Hyper-V clusters, hyperconverged stacks, scale-out file servers — querying RDMA state across all nodes simultaneously is far more practical than logging into each server individually. The -CimSession parameter makes this possible without leaving a single console:
# Establish CIM sessions to multiple servers for parallel querying
# Replace hostnames with your actual server names or management IPs
$sessions = New-CimSession -ComputerName "WS2025-HV01", "WS2025-HV02", "WS2025-HV03", "WS2025-HV04"
# Query RDMA state across all nodes in a single command and display as a table
Get-NetAdapterRdma -CimSession $sessions | Select-Object PSComputerName, Name, Enabled | Format-Table -AutoSize
During a quarterly infrastructure review for a client running a four-node Hyper-V cluster on Windows Server 2025, we used exactly this pattern and found that two adapters on one node had RDMA disabled — a silent state reset triggered by a NIC driver update applied two weeks earlier. The impact was asymmetric live migration performance across the cluster: migrations from the affected node took three to four times longer than migrations from the healthy nodes. A Get-NetAdapterRdma sweep across all nodes surfaced the discrepancy in under a minute. Without that targeted check, the troubleshooting path would have been significantly longer and almost certainly would have consumed an entire maintenance window chasing symptoms rather than the root cause.
This kind of configuration drift is among the more insidious problems in managed infrastructure — individual components look healthy in standard monitoring, but a specific feature state has silently changed. Including RDMA state verification in your post-update validation checklist is inexpensive insurance against this class of issue. It is the same principle that informed the post-mortem findings in our Windows Group Policy Incident case study — a configuration state that looks correct in isolation can be quietly wrong in a way that only surfaces under specific load conditions or after a change that should have been unrelated.
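As a sketch of what that checklist item can look like in practice, the following assumes the same placeholder node names used above and flags any adapter whose RDMA state has dropped to disabled:
# Post-update drift check - node names are placeholders for your environment
$sessions = New-CimSession -ComputerName "WS2025-HV01", "WS2025-HV02", "WS2025-HV03", "WS2025-HV04"
$disabled = Get-NetAdapterRdma -CimSession $sessions | Where-Object { -not $_.Enabled }
if ($disabled) {
    $disabled | Select-Object PSComputerName, Name | Format-Table -AutoSize
    Write-Warning "RDMA disabled on $(@($disabled).Count) adapter(s) - investigate before closing the window."
} else {
    Write-Output "RDMA enabled on all audited adapters."
}
Remove-CimSession $sessions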
Step 1 — Enabling RDMA with Enable-NetAdapterRdma
The Enable-NetAdapterRdma cmdlet is the primary tool for activating RDMA on capable adapters. It supports three parameter sets: ByName, which takes an adapter name string; ByInstanceID, which takes an interface description string; and InputObject, which accepts adapter objects from the pipeline. The ByName approach covers the majority of production scenarios.
# Enable RDMA on a specific adapter by name
# Note: this will restart the adapter by default, causing a brief traffic interruption
Enable-NetAdapterRdma -Name "Storage-NIC-01"
# Enable RDMA on all RDMA-capable adapters at once using a wildcard
# Use this carefully in production — every matched adapter will restart unless -NoRestart is specified
Enable-NetAdapterRdma -Name "*"
The wildcard form is genuinely useful during initial cluster commissioning when you need to enable RDMA across multiple storage adapters consistently. In an established production environment, be more surgical — target specific adapter names to avoid unintended restarts on adapters you did not intend to modify.
Using -NoRestart and -PassThru in Production
By default, enabling RDMA triggers an adapter restart, which interrupts traffic on that interface for a brief period. In production environments, you have two options: schedule the change during a maintenance window, or use -NoRestart to stage the change and apply it during a planned restart or reboot.
# Enable RDMA without immediately restarting the adapter
# The change takes effect on next restart or when the adapter is manually restarted
# -PassThru returns the modified CimInstance for verification or change-log capture
Enable-NetAdapterRdma -Name "Storage-NIC-01" -NoRestart -PassThru
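When the maintenance window arrives, the staged change is applied by restarting the adapter, for example with Restart-NetAdapter:
# Apply the staged RDMA change during the planned window by restarting the adapter
Restart-NetAdapter -Name "Storage-NIC-01"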
Adding -PassThru to any NetAdapter cmdlet that modifies state is a habit worth building into your scripting practice. The returned CimInstance object lets you pipe the result into additional processing — writing state to a log file, sending it to a monitoring system, or displaying it immediately as part of a change management workflow. When you are making multiple adapter changes in sequence, capturing the output at each step gives you a clear audit trail without a separate verification pass.
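A minimal sketch of that habit, assuming an illustrative log path, captures the returned state straight into a change log:
# Capture the modified state and append it to a change log (the CSV path is illustrative)
Enable-NetAdapterRdma -Name "Storage-NIC-01" -NoRestart -PassThru |
    Select-Object Name, Enabled |
    Export-Csv -Path "C:\ChangeLogs\rdma-changes.csv" -Append -NoTypeInformation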
Validating Changes Before You Commit with -WhatIf
Every cmdlet in the NetAdapter module that modifies adapter state supports the -WhatIf common parameter. In production environments, this is mandatory before any command that uses a wildcard or a name pattern that might match more adapters than intended.
# Preview which adapters would be affected by the command — without making any changes
Enable-NetAdapterRdma -Name "*" -WhatIf
I have seen teams inadvertently include management NICs in a wildcard operation because the adapter naming convention was not what they expected. A -WhatIf run takes two seconds and prevents the kind of incident that requires an emergency console session to recover from. Think of it as reviewing the architecture diagram before touching a live system — the cost of the check is negligible; the cost of skipping it occasionally is not.
Step 2 — Programmatic State Control with Set-NetAdapterRdma
While Enable-NetAdapterRdma and Disable-NetAdapterRdma are the preferred cmdlets for interactive, targeted operations, Set-NetAdapterRdma offers a different value proposition: it expresses RDMA state through a boolean -Enabled parameter — $true or $false — which makes it the more suitable choice when building configuration scripts or automation pipelines that need to enforce a desired state without branching logic.
# Explicitly set RDMA state on a named adapter using the boolean parameter
# Useful in automation and desired-state scripts — single command handles both enable and disable
Set-NetAdapterRdma -Name "MyAdapter" -Enabled $true
# Disable RDMA on the same adapter using the same cmdlet pattern
Set-NetAdapterRdma -Name "MyAdapter" -Enabled $false
The Microsoft documentation is explicit that Enable-NetAdapterRdma is the preferred cmdlet for interactive use, and Set-NetAdapterRdma with -Enabled $true is the documented alternative. In practice, for automation use cases — particularly when enforcing configuration state through scripts that run during deployment, post-update validation, or periodic compliance checks — Set-NetAdapterRdma gives you a single parameterized statement that expresses intent clearly without requiring conditional logic to decide which of two separate cmdlets to call.
When integrating with configuration management platforms like Ansible on Windows nodes, the idempotent pattern of Set-NetAdapterRdma maps naturally to task definitions that can run repeatedly without side effects. The same applies to PowerShell DSC configurations where you want declarative expression of desired state. For teams building infrastructure-as-code practices into their Windows Server management, the Set pattern fits that model more cleanly than the paired enable/disable cmdlet approach.
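A minimal desired-state sketch along those lines, assuming a hypothetical list of storage adapter names, touches only adapters that are out of compliance and is safe to run repeatedly:
# Desired state: RDMA enabled on these adapters (names are placeholders)
$storageAdapters = "Storage-NIC-01", "Storage-NIC-02"
foreach ($name in $storageAdapters) {
    if (-not (Get-NetAdapterRdma -Name $name).Enabled) {
        # Only change state when it differs from the desired state
        Set-NetAdapterRdma -Name $name -Enabled $true
        Write-Output "Enabled RDMA on $name"
    }
}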
Step 3 — When and How to Disable RDMA
The Disable-NetAdapterRdma cmdlet follows the same parameter structure as the enable cmdlet — ByName, ByInstanceID, or InputObject — and supports the same -NoRestart, -PassThru, and -WhatIf parameters. Use cases for disabling RDMA are narrower but real: troubleshooting unexpected performance degradation, isolating an RDMA-related driver issue, or temporarily removing RDMA from a specific adapter during a maintenance procedure that requires a known baseline state.
# Disable RDMA on a specific adapter by name
Disable-NetAdapterRdma -Name "Storage-NIC-01"
# For troubleshooting — disable RDMA on all adapters to isolate it as a variable
# Always preview with -WhatIf before executing the actual command
Disable-NetAdapterRdma -Name "*" -WhatIf # Preview first
Disable-NetAdapterRdma -Name "*" # Then execute if scope looks correct
A scenario where this matters in practice: if you suspect RDMA is contributing to packet loss or unexpected behavior in a converged network environment, temporarily disabling it and measuring the impact is a clean way to isolate the variable. If performance characteristics change significantly after disabling RDMA, you have actionable diagnostic information pointing toward DCB settings, switch configuration, or driver version as the next investigation layer — rather than chasing symptoms without a clear hypothesis.
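To make that isolation test measurable rather than impressionistic, sample the adapter counters on either side of the change. The counters are cumulative, so the difference over a fixed window is the meaningful number; the adapter name and the five-minute window below are examples:
# Snapshot counters, toggle RDMA off, observe the workload, then compare the delta
$before = Get-NetAdapterStatistics -Name "Storage-NIC-01"
Disable-NetAdapterRdma -Name "Storage-NIC-01"
Start-Sleep -Seconds 300   # let the workload run for five minutes
$after = Get-NetAdapterStatistics -Name "Storage-NIC-01"
"Discards during window: $($after.ReceivedDiscardedPackets - $before.ReceivedDiscardedPackets)"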
The Supporting Configuration — QoS, RSS, and SR-IOV
RDMA does not operate in isolation. The NetAdapter module in Windows Server 2025 exposes a full set of complementary cmdlets that manage the configuration RDMA depends on to perform correctly. Understanding these relationships is what separates an RDMA deployment that delivers on its potential from one that technically has RDMA enabled but produces disappointing results in practice.
DCB and Priority Flow Control for RoCE Deployments
RDMA over Converged Ethernet (RoCE) requires lossless Ethernet to function correctly. Packet loss in a RoCE environment causes retransmissions at the RDMA layer — directly undermining the latency benefits RDMA was implemented to provide. The mechanism for achieving lossless Ethernet in a data center context is Data Center Bridging (DCB) with Priority Flow Control (PFC), configured on both the server adapters and the network switches.
# Check current DCB / QoS state on all network adapters
Get-NetAdapterQos
# Enable DCB on a specific adapter before enabling RoCE-based RDMA
Enable-NetAdapterQos -Name "Storage-NIC-01"
# Verify QoS state after enabling — output CimInstance shows capabilities and current configuration
Get-NetAdapterQos -Name "Storage-NIC-01"
A common mistake during initial deployment is enabling RDMA on the server side while leaving the top-of-rack switch in its default configuration. The result is an RDMA-enabled adapter operating over lossy Ethernet — which can actually perform worse than standard TCP under high-contention conditions because RoCE retransmissions introduce their own overhead. The server-side PowerShell configuration is only half of the equation; the switch fabric must be configured to match.
iWARP, the other widely deployed RDMA transport, is more tolerant of lossy Ethernet but typically carries higher CPU overhead than RoCE under sustained load. The choice between RoCE and iWARP is made at the hardware selection and architecture phase, before any PowerShell configuration applies. If your adapters support RoCE and you want the full benefit of RDMA, the QoS and switch configuration work is not negotiable.
RSS and SR-IOV in Hyper-V Environments
Receive Side Scaling (RSS) distributes incoming network traffic across multiple processor cores — it is complementary to RDMA rather than redundant with it. On adapters handling both RDMA storage traffic and general-purpose network traffic, RSS ensures the non-RDMA portion of the workload does not become a single-core processing bottleneck. The Get-NetAdapterRss and Set-NetAdapterRss cmdlets manage this configuration alongside your RDMA settings.
# Review RSS configuration on storage adapters
Get-NetAdapterRss -Name "Storage-NIC-01"
# Check SR-IOV state — relevant for Hyper-V environments with VMs requiring direct adapter access
Get-NetAdapterSriov
# Gather adapter statistics as a documented baseline before and after configuration changes
Get-NetAdapterStatistics -Name "Storage-NIC-01" | Select-Object ReceivedBytes, SentBytes, ReceivedDiscardedPackets
In Hyper-V environments, SR-IOV (Single Root I/O Virtualization) allows virtual machines to directly access physical adapter hardware — which pairs well with RDMA for guest workloads requiring high-throughput networking. The Set-NetAdapterSriov cmdlet controls the number of virtual functions and queue pair allocations for default and non-default VPorts. Getting the interaction between SR-IOV, RDMA, and the Hyper-V virtual switch configured correctly is a multi-layer exercise — but the NetAdapter module gives you the inspection and configuration primitives to work through it methodically.
For teams working through the interaction between network performance architecture and security controls, our Zero Trust Architecture deployment walkthrough covers how network segmentation decisions apply to high-performance network paths — a consideration directly relevant to RDMA storage fabrics in multi-tenant or compliance-sensitive environments. When we conduct IT infrastructure assessments for clients running hyperconverged environments, the RDMA and SR-IOV configuration review is consistently one of the areas where we find the most discrepancy between intended and actual state.
My Position — RDMA Is Being Systematically Underutilized
Here is an assessment I will stand behind: the majority of Windows Server 2025 environments with RDMA-capable hardware are not using RDMA, because it was not on the commissioning checklist during initial deployment. The adapters are physically capable; the PowerShell tooling is directly available through the NetAdapter module; but no one verified RDMA state at deployment, and no one added it to the post-update validation process. The feature sits dormant on hardware that was purchased specifically for its RDMA capabilities.
Modern 25GbE and 100GbE adapters from NVIDIA (Mellanox ConnectX series) and Intel are frequently specified for hyperconverged infrastructure precisely because they support SMB Direct and Storage Spaces Direct over RDMA. If those adapters are deployed in Windows Server 2025 and Get-NetAdapterRdma shows RDMA disabled — or the state was never verified — the organization is not getting the performance they paid for in their hardware budget.
The counterargument I hear is that RDMA adds configuration complexity and introduces additional failure modes. That argument has merit for teams without deep networking expertise. But the PowerShell NetAdapter module substantially reduces that complexity — the four core RDMA cmdlets (Get-NetAdapterRdma, Enable-NetAdapterRdma, Disable-NetAdapterRdma, Set-NetAdapterRdma) alongside the supporting QoS and RSS commands give you complete lifecycle management. Those scripts can be version-controlled, peer-reviewed, and integrated into your deployment automation without significant additional overhead. The real risk is not enabling RDMA — it is enabling it without the supporting QoS configuration, or enabling it and then having a driver update silently reset the state without anyone noticing. Both of those risks are addressable through deliberate process design, not by avoiding the feature.
Caveats That Cannot Be Glossed Over
RDMA capability is a function of physical hardware, driver version, and firmware. The PowerShell cmdlets cannot enable RDMA on hardware that does not support it — they manage RDMA state on adapters that are already RDMA-capable at the hardware level. Before building RDMA configuration into your deployment automation, verify your specific adapter model and firmware version against the vendor’s RDMA support matrix. Not all NIC models in a vendor’s portfolio support RDMA, and RDMA capability sometimes requires minimum firmware versions that are not current on freshly imaged servers.
Driver and firmware updates can reset adapter properties silently. This is not a NetAdapter module limitation — it is hardware vendor behavior that requires process-level compensation. Include Get-NetAdapterRdma verification in your post-update validation procedure alongside standard connectivity and throughput tests. This single addition to your maintenance checklist is the most effective long-term protection against the configuration drift that makes RDMA issues difficult to diagnose after the fact.
RDMA also has security architecture implications in shared and multi-tenant environments. Direct memory access between adapters operates below the OS isolation boundary in certain configurations — which warrants careful consideration where network segmentation is part of your security model. The NIST Zero Trust Architecture guidelines provide a framework for evaluating implicit trust between network endpoints, which applies directly to RDMA-enabled storage fabrics. The CIS Benchmarks for Windows Server provide a hardening baseline that your RDMA network segments should be measured against as part of any security posture review.
Finally, and this applies specifically to RoCE environments: if your network switches do not support DCB with Priority Flow Control, you cannot run RoCE correctly regardless of what the server-side PowerShell configuration looks like. This gap appears most often when RDMA is being retrofitted into an existing network rather than designed in from the start — the server adapters are upgraded, the NetAdapter configuration is applied, but the switch fabric predates the DCB requirements that RoCE depends on. Verify the full path, not just the adapter state, before declaring an RDMA deployment complete.
A Practical Commissioning and Review Sequence
Whether you are commissioning a new Windows Server 2025 environment or reviewing an existing one, this is the sequence that has consistently produced reliable results across the infrastructure projects we have managed at SSE. It is ordered to follow the dependency chain — you verify state before changing it, and you confirm supporting configuration before enabling the primary feature.
1. Run Get-NetAdapterRdma across all nodes using -CimSession to establish baseline state. Record which adapters are RDMA-capable and which currently have RDMA enabled. Cross-reference adapter names and interface descriptions against your hardware documentation to confirm which adapters are intended for RDMA use — in multi-NIC servers, it is common to have management, VM networking, and storage adapters on the same physical host, and you typically want RDMA enabled only on the storage-dedicated interfaces.
2. Configure DCB and QoS on the target adapters using Enable-NetAdapterQos before enabling RDMA. Coordinate with your network team to ensure matching PFC configuration on the connected switch ports. This is infrastructure-level coordination that PowerShell alone cannot complete — the switch side must be in place first for RoCE deployments.
3. Enable RDMA on the target adapters using Enable-NetAdapterRdma, preceded by a -WhatIf run to preview scope, then executed with -PassThru to capture the output. Stage with -NoRestart if the environment requires no-maintenance-window changes, and apply the adapter restart during the next scheduled downtime. Review RSS configuration on adapters handling mixed RDMA and non-RDMA traffic, and verify SR-IOV state if Hyper-V virtual machines need direct adapter access. Run Get-NetAdapterStatistics before and after to establish a documented performance baseline for future comparison.
4. Add Get-NetAdapterRdma verification across all nodes to your post-update validation checklist. This single process addition provides ongoing protection against the silent state resets that make RDMA configuration drift difficult to detect — the same class of problem that has caused performance incidents in several of the environments we have been brought in to assess after the fact.
If you are building or reviewing a Windows Server 2025 storage or virtualization environment and want an experienced team to assess your network adapter configuration — including RDMA, QoS, RSS, and SR-IOV state — contact us at SSE. We conduct structured infrastructure assessments that give organizations a clear, documented picture of how their production systems are actually configured, not just how they were intended to be configured.


