A Client Call That Could Have Gone Better
One of our managed healthcare accounts called in after a failed attempt to enable Fault Tolerance on a production SQL VM. The VM had a 2.5TB data disk. vSphere threw an error, and nobody on their internal team knew why. It took us about 30 seconds to identify the problem — the Fault Tolerance limitations in vSphere 6.7 are strict and poorly advertised. The 2TB VMDK ceiling is one of the first things that trips people up.
After that engagement, I put together an internal checklist we now run before enabling FT on any client VM. Figured it was worth sharing here since the official docs bury half of these items across multiple pages.
The Checklist: Will FT Actually Enable on This VM?
Run through each item before you attempt to turn on Fault Tolerance. A single fail kills the whole operation.
1. VMDK Size — Under 2TB
Pass/Fail: Every attached VMDK must be smaller than 2TB.
This applies regardless of your vSphere license tier — Standard, Enterprise Plus, or vSphere+. The 2TB disk size cap is universal across all of them. If your VM has even one virtual disk at or above 2TB, FT will not enable. Period.
The fix is usually splitting the large disk into multiple smaller VMDKs, but that means OS-level volume management changes. Not always trivial on a production box.
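Before planning any disk surgery, it helps to inventory sizes first. A minimal sketch of the check, assuming you have already exported disk sizes in GB from your inventory tooling (the `disk_sizes_gb` list here is a hypothetical example, not vSphere API output):

```python
# Sketch: flag any virtual disk at or above the 2TB FT ceiling.
# Sizes are hypothetical example values in GB; pull real ones from
# whatever inventory export you already run.

FT_MAX_DISK_GB = 2048  # 2TB ceiling, same for every vSphere 6.7 edition

def oversized_disks(disk_sizes_gb):
    """Return the disks that would block FT (at or above 2TB)."""
    return [size for size in disk_sizes_gb if size >= FT_MAX_DISK_GB]

# Example: the SQL VM from the opening story had a 2.5TB (2560GB) data disk.
blockers = oversized_disks([100, 2560, 500])
print(blockers)  # [2560] — one disk fails, so FT will not enable
```

One oversized disk in the result is enough to fail the whole VM, which matches how vSphere behaves: it refuses the operation outright rather than protecting the compliant disks.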
2. No Physical RDM Disks
Pass/Fail: VM must have zero physical Raw Device Mappings.
FT cannot protect VMs with physical RDM disks. Virtual RDMs in physical compatibility mode are also blocked. If you inherited an environment where someone mapped SAN LUNs directly to VMs using RDMs — and we see this constantly when we take over from another vendor — those VMs are not FT candidates.
You will need to migrate the data off the RDM onto a standard VMDK before FT becomes an option. Storage vMotion handles this in most cases, but test it in a maintenance window first.
3. Virtual Disk Count — 16 Maximum
Pass/Fail: No more than 16 virtual disks attached.
Enterprise Plus and vSphere+ allow up to 16 virtual disks per FT-protected VM. Standard edition caps you at 8. Check your license before assuming 16 is your ceiling.
4. vCPU Count — 8 Maximum
Pass/Fail: VM must have 8 or fewer vCPUs.
Standard edition limits you to 2 vCPUs per FT VM. Enterprise Plus and vSphere+ allow up to 8. We had a logistics company client running a 12-vCPU application server they wanted protected with FT. That VM needed to be right-sized down to 8 vCPUs first, which meant performance testing to make sure the app could handle it.
5. Memory — 128GB Maximum
Pass/Fail: VM RAM must not exceed 128GB.
This one rarely comes up for us, but if you are running large database or analytics workloads, you will hit it.
6. Per-Host Limits
Pass/Fail: No more than 4 FT VMs per ESXi host. No more than 8 total FT vCPUs per host.
Both primary and secondary VMs count toward the four-VM limit. So if you have two FT-protected VMs with their primaries on Host A, that host already has two of its four slots used — and the secondary VMs on other hosts consume slots there too. Plan your cluster layout accordingly.
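The slot accounting above is easy to get wrong by hand once a cluster has more than a couple of FT VMs. A sketch of the math, using hypothetical host names and VM dicts (real placement data would come from your cluster inventory):

```python
# Sketch: count FT slots consumed per host, where both the primary and
# the secondary copy of each FT-protected VM occupy a slot.
from collections import Counter

FT_MAX_VMS_PER_HOST = 4
FT_MAX_VCPUS_PER_HOST = 8

def ft_host_usage(ft_vms):
    """ft_vms: list of dicts with 'primary_host', 'secondary_host', 'vcpus'."""
    vm_slots = Counter()
    vcpu_slots = Counter()
    for vm in ft_vms:
        for host in (vm["primary_host"], vm["secondary_host"]):
            vm_slots[host] += 1
            vcpu_slots[host] += vm["vcpus"]
    return vm_slots, vcpu_slots

# Two FT VMs with primaries on esx-a: esx-a already holds 2 of its 4 slots,
# and the secondaries consume slots on esx-b and esx-c.
vms = [
    {"primary_host": "esx-a", "secondary_host": "esx-b", "vcpus": 4},
    {"primary_host": "esx-a", "secondary_host": "esx-c", "vcpus": 2},
]
vm_slots, vcpu_slots = ft_host_usage(vms)
print(vm_slots["esx-a"], vcpu_slots["esx-a"])  # 2 2
```

Wait — that last comment deserves precision: `vm_slots["esx-a"]` is 2 VM slots and `vcpu_slots["esx-a"]` is 6 FT vCPUs (4 + 2), already three-quarters of the 8-vCPU per-host ceiling from just two primaries.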
7. No Snapshots
Pass/Fail: VM must have zero existing snapshots, and you cannot create manual snapshots while FT is active.
The one exception: disk-only snapshots created by backup software are supported. So Veeam and similar tools can still do their thing. But if someone left a manual snapshot on the VM from three months ago — and this happens more often than anyone admits — FT will refuse to enable until you consolidate or delete it.
8. No Unsupported Devices
Pass/Fail: No USB devices, parallel ports, or SATA controllers attached.
USB passthrough and parallel ports are the usual offenders. SATA controllers are less common but will still block FT. Also, if the VM has a CD-ROM mapped to a physical or remote device, that needs to change to an ISO on a shared datastore.
9. No Virtual Volumes (VVols)
Pass/Fail: VM storage must not be on a VVol datastore.
FT-enabled VMs cannot use Virtual Volume datastores. SPBM policies are also incompatible with FT. If you are running VVols, you will need to Storage vMotion the VM to a traditional VMFS or NFS datastore before enabling FT.
10. No CPU Affinity
Pass/Fail: VM must not have CPU affinity rules configured.
Affinity pins a VM to specific physical cores. FT needs the flexibility to run the secondary VM on a different host entirely, so affinity rules are incompatible.
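The ten items above condense into one pre-flight function. This is a sketch against a hypothetical VM-description dict using the Enterprise Plus / vSphere+ limits quoted in this post — the dict shape is an assumption, not a vSphere API, so wire it to your own inventory export before trusting it:

```python
# Pre-flight sketch for the checklist above (Enterprise Plus / vSphere+
# limits). The `vm` dict layout is a hypothetical assumption.

LIMITS = {"max_disk_gb": 2048, "max_disks": 16, "max_vcpus": 8, "max_ram_gb": 128}

def ft_preflight(vm, limits=LIMITS):
    """Return a list of human-readable failures; an empty list means pass."""
    fails = []
    if any(d >= limits["max_disk_gb"] for d in vm["disk_sizes_gb"]):
        fails.append("disk at or above 2TB")
    if len(vm["disk_sizes_gb"]) > limits["max_disks"]:
        fails.append("too many virtual disks")
    if vm["vcpus"] > limits["max_vcpus"]:
        fails.append("too many vCPUs")
    if vm["ram_gb"] > limits["max_ram_gb"]:
        fails.append("RAM above 128GB")
    if vm["has_physical_rdm"]:
        fails.append("physical RDM attached")
    if vm["manual_snapshots"] > 0:
        fails.append("existing manual snapshot")
    if vm["unsupported_devices"]:  # USB, parallel port, SATA controller, ...
        fails.append("unsupported device attached")
    if vm["on_vvol"]:
        fails.append("VM on a VVol datastore")
    if vm["cpu_affinity"]:
        fails.append("CPU affinity configured")
    return fails

sql_vm = {  # roughly the VM from the opening story
    "disk_sizes_gb": [100, 2560], "vcpus": 4, "ram_gb": 64,
    "has_physical_rdm": False, "manual_snapshots": 0,
    "unsupported_devices": [], "on_vvol": False, "cpu_affinity": False,
}
print(ft_preflight(sql_vm))  # ['disk at or above 2TB']
```

Remember the per-host limits from item 6 are a cluster-level concern, not a per-VM property, so they live outside this function.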
What We Found Across Client Environments
We ran this checklist across four client environments last quarter as part of our IT assessment engagements. The results were consistent:
The 2TB VMDK limit was the number one blocker. Most environments had at least one VM with an oversized disk that someone assumed could be FT-protected. RDM issues were second, mostly in environments migrated from physical infrastructure where direct LUN mappings were carried over without cleanup.
Snapshot accumulation was third: old snapshots left behind by failed backup jobs or manual troubleshooting sessions that nobody cleaned up.
My take: FT in vSphere 6.7 is useful for a narrow set of workloads. Small, critical VMs — domain controllers, lightweight app servers, DNS. Once you start pushing past a few hundred gigs of disk or more than a handful of vCPUs, you are better off looking at application-level clustering or managed backup with aggressive RPOs instead.
Quick Reference Table
| Restriction | Standard | Enterprise Plus / vSphere+ |
|---|---|---|
| Max vCPUs per FT VM | 2 | 8 |
| Max virtual disks | 8 | 16 |
| Max disk size | 2TB | 2TB |
| Max RAM per FT VM | 128GB | 128GB |
| Max FT VMs per host | 4 | 4 |
| Max FT vCPUs per host | 8 | 8 |
Where to Go From Here
Note: some of these limits changed in later vSphere versions. If you are on 7.x or 8.x, verify against the current vSphere Availability Guide before assuming these numbers still apply.
DRS behavior is also limited with FT VMs, and vMotion cannot place both primary and secondary VMs on the same host. Factor that into your cluster capacity planning — FT effectively doubles the resource footprint of every protected VM.
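That doubling is worth quantifying during capacity planning. A rough sketch, with hypothetical per-VM numbers:

```python
# Sketch: FT runs a live secondary copy of each protected VM, so every
# FT VM consumes roughly twice its vCPU and RAM budget across the cluster.

def ft_cluster_footprint(vms):
    """vms: list of (vcpus, ram_gb) tuples for the FT-protected VMs."""
    vcpus = sum(v for v, _ in vms) * 2   # primary + secondary
    ram_gb = sum(r for _, r in vms) * 2
    return vcpus, ram_gb

# Three small FT candidates: a DC, a DNS box, a lightweight app server.
print(ft_cluster_footprint([(2, 8), (2, 4), (4, 16)]))  # (16, 56)
```

Eight vCPUs of protected workload becomes sixteen cluster-wide, spread across hosts that can each carry at most 8 FT vCPUs — which is why small VMs are the realistic FT candidates.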
If you are running into any of these restrictions and need help evaluating alternatives, reach out to us. We have worked through this exact scenario more times than I can count.
Related reading: Using Autoruns to Audit Every Windows Autostart Location | Windows Admin Center: Browser-Based Server Management Done Right | Forensic Triage on Windows: Rapid Evidence Collection

