Dear ReubenTishkoff,
Based on your setup and DiskSpd results, you've identified a significant performance drop (~80–90%) in 4K Q1/T1 workloads inside the VM compared to the host, while other workloads (e.g., Q32T1 and sequential reads/writes) show near parity or acceptable deltas. Your configuration—including NUMA pinning, fixed VHDX with 4K block size, NTFS 64K allocation, and Defender exclusions—is well-optimized for low-latency scenarios.
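For anyone reproducing this comparison, a DiskSpd invocation along the following lines isolates the 4K random-read Q1/T1 case. This is a sketch; the file path, duration, and read/write mix (`-w`) should match your own test matrix:

```powershell
# 4K block, queue depth 1, 1 thread, random, 100% read, caching disabled,
# with per-I/O latency statistics (-L); run identically on host and in guest.
diskspd.exe -b4K -o1 -t1 -r -w0 -Sh -L -d60 C:\test\testfile.dat
```

Comparing the latency percentiles (not just IOPS) from the `-L` output on host vs. guest makes the per-request virtualization overhead visible directly.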
1. Is the 4K Q1/T1 drop expected in current Hyper-V architecture? Yes, this behavior is consistent with known limitations of the vSCSI stack (StorVSP/StorVSC) in Hyper-V. At queue depth 1 with a single thread, each I/O pays the full fixed per-request cost of the synthetic storage path (VMBus transitions and the StorVSC-to-StorVSP hop) before the next I/O can be issued, so that overhead cannot be amortized across in-flight requests the way it is at Q32. Small I/O through a virtual disk layer (VHDX) is therefore the worst case for virtualization overhead.
2. Are there supported tuning options to reduce small-IO latency? While there are no direct knobs to eliminate the Q1/T1 gap entirely, the following adjustments may help mitigate latency:
Keep disks on the synthetic SCSI controller rather than the emulated IDE controller (relevant only to Generation 1 VMs; Generation 2 VMs use SCSI exclusively)
Increase the number of virtual storage queues via registry or PowerShell, where available (limited support)
Review interrupt moderation settings on the host NIC and storage controller
Ensure the latest integration services and VM configuration version are in use
Use a fixed-size VHDX rather than a dynamic one (already in place in your setup)
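The integration-services and configuration-version checks in the list above can be verified from the host. A minimal sketch, assuming a VM named "YourVM" (substitute your actual VM name):

```powershell
# Check the VM configuration version (should match the host's maximum).
Get-VM -Name "YourVM" | Select-Object Name, Version, Generation
Get-VMHost | Select-Object SupportedVmVersions

# Confirm integration services are enabled and communicating.
Get-VMIntegrationService -VMName "YourVM"

# If the VM was imported from an older host, upgrade its configuration
# version (one-way operation; requires the VM to be off).
# Update-VMVersion -Name "YourVM"
```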
3. Are there layout best practices beyond VHDX 4K/4K + NTFS 64K? For latency-sensitive workloads, consider:
Pass-through disks (though limited in flexibility)
Direct Device Assignment (DDA) for NVMe devices, which bypasses the virtual storage stack entirely
Avoid placing the VHDX on SMB shares or tiered storage unless SMB Direct (RDMA) is enabled
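It is also worth confirming that the layout you intend is the layout actually in effect. The following sketch checks the VHDX geometry from the host and the NTFS allocation unit from inside the guest (paths are placeholders):

```powershell
# On the host: confirm fixed type and 4K logical/physical sector sizes.
Get-VHD -Path "D:\VMs\data.vhdx" |
    Select-Object VhdType, LogicalSectorSize, PhysicalSectorSize, BlockSize

# Inside the guest: confirm the 64K NTFS allocation unit
# ("Bytes Per Cluster" in the output).
fsutil fsinfo ntfsinfo D:
```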
4. Is DDA the only practical way to narrow the Q1/T1 gap? Currently, DDA remains the most effective method to achieve near-host performance for small I/O workloads. It provides direct access to PCIe devices and eliminates virtualization overhead, but it requires exclusive device access and is best suited for dedicated workloads.
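For reference, assigning an NVMe device via DDA follows this general shape on the host. This is a sketch only; the friendly-name filter and VM name are placeholders, the device must support DDA (check with Microsoft's SurveyDDA script first), and the device becomes unavailable to the host once dismounted:

```powershell
# Locate the NVMe controller and its PCIe location path.
$dev = Get-PnpDevice -FriendlyName "*NVMe*" | Where-Object Status -eq "OK"
$locationPath = ($dev | Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths).Data[0]

# Disable the device on the host, dismount it, and assign it to the VM.
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force
Add-VMAssignableDevice -LocationPath $locationPath -VMName "YourVM"
```

To return the device to the host, reverse the steps with Remove-VMAssignableDevice and Mount-VMHostAssignableDevice.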
5. Is there an ETA or roadmap for vNVMe or multi-queue virtual storage in Hyper-V? Microsoft has acknowledged the need for multi-queue virtual storage and vNVMe support in Hyper-V. While no public ETA is available, these features are under active consideration for future releases. You can follow updates via the Windows Server Tech Community.
I hope this helps. If this answers your question, please click Accept Answer so that others in the community facing similar issues can easily find the solution. Your contribution is highly appreciated.
Best regards,
Domic Vo