6.1 FAQ
Getting Started
Q1. What are the software requirements for vVols?
A. You will need vSphere 6.0 together with the corresponding vVols software from your array vendor. More details on supported array models are available at the hardware compatibility guide page: http://www.vmware.com/resources/compatibility/
Q2. Where can I get the storage array vendor vVols software?
A. Storage vendors provide vVols integration in different ways. Please contact your storage vendor or visit your vendor’s website for more information on vVols integration.
Key Elements of vVols
Q3. What is a Protocol Endpoint (PE)?
A. Protocol Endpoints are the access points from hosts to the storage systems, and they are created by storage administrators. All paths and policies are administered through Protocol Endpoints. PEs apply to both iSCSI and NFS, and they are intended to replace the concept of LUNs and mount points.
Q4. What is a storage container and how does it relate to a Virtual Datastore?
A. A storage container is a logical abstraction onto which vVols are mapped and stored. Storage containers are set up at the array level and associated with array capabilities. vSphere maps storage containers to Virtual Datastores and provides the applicable datastore-level functionality. The Virtual Datastore is a key element: it allows the vSphere admin to provision VMs without depending on the storage admin, and it provides a logical abstraction for managing a very large number of vVols. This abstraction can be used to better manage multi-tenancy, separate departments within a single organization, and so on.
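For the vSphere admin, a quick way to see these Virtual Datastores is from PowerCLI. A minimal sketch, assuming an existing Connect-VIServer session (the “VVOL” type string is how current PowerCLI releases report vVols datastores):

```powershell
# Assumes PowerCLI is installed and a vCenter session is already open:
#   Connect-VIServer -Server vcenter.example.com
# List vVols (Virtual) datastores with their capacity and free space.
Get-Datastore |
    Where-Object { $_.Type -eq "VVOL" } |
    Select-Object Name, Type, CapacityGB, FreeSpaceGB
```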
Q5. How does a PE function?
A. A PE represents the IO access point for a vVol. When a vVol is created, it is not immediately accessible for IO. To access a vVol, vSphere issues a “Bind” operation to the VASA Provider (VP), which creates an IO access point for that vVol on a PE chosen by the VP. A single PE can be the IO access point for multiple vVols. An “Unbind” operation removes this IO access point for a given vVol.
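The bind sequence itself is internal to the host and the VP, so there is no user-facing cmdlet for it. The sketch below is purely illustrative pseudocode of the flow just described; Open-VvolForIO and Invoke-VasaBind are invented names, not real PowerCLI cmdlets:

```powershell
# Purely illustrative pseudocode: these functions are NOT real cmdlets.
# Binding is performed internally by the ESXi host over the VASA protocol.
function Open-VvolForIO {
    param($VvolId, $VasaProvider)
    # 1. The host asks the VASA Provider to bind the vVol.
    $binding = Invoke-VasaBind -Provider $VasaProvider -VvolId $VvolId  # hypothetical
    # 2. The VP chooses a PE and returns its identity plus a secondary ID
    #    (for SCSI, the secondary LUN ID used to address the vVol).
    # 3. All subsequent IO to the vVol flows through that PE until the
    #    host issues the matching "Unbind".
    return $binding
}
```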
Q6. What is the association of a PE to a storage array?
A. PEs are associated with arrays. One PE is associated with exactly one array, while an array can be associated with multiple PEs. For block arrays, a PE is a special LUN; ESX can identify these special LUNs and makes sure that the visible list of PEs is reported to the VP. For NFS arrays, PEs are regular mount points.
Q7. What is the association of a PE to storage containers?
A. PEs are managed per array. vSphere assumes that all PEs reported for an array are associated with all containers on that array. For example, if an array has 2 containers and 3 PEs, ESX assumes that vVols on both containers can be bound on all 3 PEs. Internally, however, VPs and storage arrays can have specific logic to map vVols and storage containers to PEs.
Q8. What is the association of a PE to hosts?
A. PEs are like LUNs or mount points: they can be mounted or discovered by multiple hosts.
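To see which PEs a given host has discovered, one option is the vvol esxcli namespace introduced with ESXi 6.0, driven from PowerCLI via Get-EsxCli. A sketch, assuming an open session and a host named esx01.example.com (a placeholder):

```powershell
# Host name is a placeholder; assumes an open Connect-VIServer session.
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.example.com") -V2
# Protocol Endpoints this host has discovered:
$esxcli.storage.vvol.protocolendpoint.list.Invoke()
# Storage containers visible to the host can be listed the same way:
$esxcli.storage.vvol.storagecontainer.list.Invoke()
```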
Architecture and Technical Details
Q9. Can I have one PE connect to multiple hosts across clusters?
A. Yes. The VP can return the same vVol binding information to each host if the PE is visible to multiple hosts.
Q10. I use multi-pathing policies today. How do I continue to use them with vVols?
A. All multi-pathing policies today are applied to the PE LUN. This means that if a path failover happens, it applies to all vVols bound on that PE. Multi-pathing plugins have been modified not to treat internal vVols error conditions as path failures, and vSphere will make sure that older MPPs do not claim PE LUNs.
Q11. How many Storage Containers can I have per storage array?
A. It depends on the array configuration, but the number of containers is normally small (a few tens). There is a limit of 256 storage containers per host.
Q12. Can a single Virtual Datastore span different physical arrays?
A. No. The initial (2015) release of vSphere will not support this.
Q13. We have the multi-writer VMDK feature today. How will that be represented with vVols?
A. A vVol can be bound by multiple hosts. vSphere provides multi-writer support for vVols.
Q14. Can I use VAAI-enabled storage arrays along with vVols-enabled arrays?
A. Yes. vSphere will use VAAI support whenever possible. In fact, VMware mandates ATS support for configuring vVols on SCSI.
Q15. Can I use legacy datastores along with vVols?
A. Yes.
Q16. Can I replace RDMs with vVols?
A. Yes. With the release of vSphere 6.7, vVols support SCSI-3 reservations, which means clusters like Microsoft WSFC are now supported. If an application requires direct access to the physical device, vVols are not a replacement for pass-thru RDMs (ptRDM), but vVols are superior to non-pass-thru RDMs (nptRDM) in the majority of virtual disk use cases.
Q17. Can I use SDRS/SIOC to provision to vVols-enabled arrays?
A. SDRS is not supported. SIOC will be supported, since we support IO scheduling policies for individual vVols.
Q18. Is VASA 2.0 a requirement for vVols support?
A. Yes, vVols requires VASA 2.0. Version 2.0 of the VASA protocol introduces a new set of APIs specifically for vVols, used to manage storage containers and vVols, and it provides the communication between vCenter, hosts, and the storage arrays.
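PowerCLI includes cmdlets for managing VASA Provider registrations. A minimal sketch, with the provider name, URL, and credentials as placeholders:

```powershell
# Assumes an open Connect-VIServer session; all values are placeholders.
# Register the array vendor's VASA Provider with vCenter.
New-VasaProvider -Name "ArrayVP" `
    -Url "https://vp.example.com:8443/vasa" `
    -Credential (Get-Credential)
# Verify the registration.
Get-VasaProvider | Select-Object Name, Status, Url
```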
Q19. How do vVols affect backup software vendors?
A. vVols are modeled in vSphere exactly like today’s virtual disks. The VADP APIs used by backup vendors are fully supported on vVols just as they are on vmdk files on a LUN, so backup software using VADP should be unaffected.
Q20. Is vSAN using some of the vVols features under the covers?
A. Although vSAN presents some of the same capabilities (representing virtual disks as objects in storage, for instance) and introduces the ability to manage storage SLAs on a per-object level with SPBM, it does not use the same mechanisms as vVols. vVols use VASA 2.0 to communicate with an array’s VASA Provider to manage vVols on that array, whereas vSAN uses its own APIs to manage virtual disks. SPBM is used by both: its ability to present and interpret storage-specific capabilities lets it span vSAN’s capabilities and those of vVols arrays, presenting a single, uniform way of managing storage profiles and virtual disk requirements.
Q21. Can a Storage Container be shrunk as well as grown on the fly?
A. Storage Containers are a logical entity only and are entirely managed by the storage array. In theory, nothing prevents them from growing and shrinking on the fly; that capability is up to the array vendor to implement.
Q22. Where are the Protocol Endpoints (PE) set up? In vCenter with the vSphere Web Client?
A. PEs are configured on the array side, and vCenter is informed about them automatically through the VASA Provider. Hosts discover SCSI PEs the way they discover today’s LUNs; NFS mount points are configured automatically.
Q23. Where are the array policies (snap, clone, replicate) applied?
A. Each array supports a certain set of capabilities (snapshot, clone, encryption, etc.), and these are defined at the storage container level. In vSphere, a policy is a combination of multiple capabilities, and when a VM is provisioned, the recommended datastores that match the policy are presented.
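In PowerCLI terms, a policy is assembled from capability-based rules. A sketch using the SPBM cmdlets, where the capability name com.example.vendor.snapshot is hypothetical and would in practice be one advertised by your array’s VP:

```powershell
# Assumes an open Connect-VIServer session; the capability name is hypothetical.
$cap  = Get-SpbmCapability -Name "com.example.vendor.snapshot"
$rule = New-SpbmRule -Capability $cap -Value $true
$set  = New-SpbmRuleSet -AllOfRules $rule
New-SpbmStoragePolicy -Name "Gold-Snapshots" `
    -Description "Requires array-side snapshot capability" `
    -AnyOfRuleSets $set
```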
Q24. From the vSphere admin’s perspective, is a Virtual Datastore accessed like a LUN (storage browser, VM logs, VM .vmx config file, etc.)?
A. You can browse a vVols datastore as you would browse any other datastore (you will see all the config vVols, one per VM). It is these config vVols that hold the information previously kept in the VM’s directory, like the .vmx file, VM logs, and so on.
Q25. Is there a maximum number of vVols or a maximum capacity for an SC/PE?
A. Those limits are entirely determined by the array vendor’s implementation. The vVols implementation does not impose any particular limits.
Q26. Can a PE be shrunk or grown on the fly?
A. PEs do not really have a size; they are just a conduit for data traffic. There is no file system created on the PE LUN, for instance. It is simply the destination for data IO on the vVols that are bound to it.
Q27. Does the virtual disk vVol contain both the .vmdk file and the -flat.vmdk file as a single object?
A. The .vmdk file (the virtual disk descriptor file) is stored in the config vVol with the other VM description information (.vmx file, log files, etc.). The vVol object takes the place of the -flat file, and the .vmdk descriptor file holds the ID of that vVol.
Q28. How many vVols will be created for a VM? Is there a separate vVol for the -flat.vmdk, another for the .vmx file, and so on?
A. For every VM, a single vVol is created to replace the VM directory in today’s system: the so-called “config” vVol. Then there is one vVol for every virtual disk, one vVol for swap (if needed), and one vVol per disk snapshot plus one per memory snapshot. The minimum is typically 3 vVols per VM (1 config, 1 data, 1 swap). Each VM snapshot adds one snapshot vVol per virtual disk and one memory-snapshot vVol (if requested), so a VM with 3 virtual disks would have 1+3+1=5 vVols, and snapshotting that VM would add 3+1=4 vVols.
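As a sanity check on that arithmetic, here is a small, purely illustrative PowerShell helper that computes the expected vVol count from the rules above:

```powershell
# Illustrative helper: expected vVol count for a powered-on VM.
function Get-ExpectedVvolCount {
    param(
        [int]$DiskCount,                  # number of virtual disks
        [int]$SnapshotCount = 0,          # number of VM snapshots
        [bool]$SnapshotsIncludeMemory = $false
    )
    $base    = 1 + $DiskCount + 1         # config vVol + data vVols + swap vVol
    $perSnap = $DiskCount + $(if ($SnapshotsIncludeMemory) { 1 } else { 0 })
    return $base + ($SnapshotCount * $perSnap)
}
# 3 disks: 1+3+1 = 5 vVols; one snapshot with memory adds 3+1 = 4, so 9 total.
Get-ExpectedVvolCount -DiskCount 3 -SnapshotCount 1 -SnapshotsIncludeMemory $true
```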
Q29. From a vSphere perspective, are snapshots allowed to be created at the individual VMDK (vVol) level?
A. The APIs provide for snapshots of individual vVols, but note that the UI only offers snapshots on a per-VM basis, which internally translates into requests for (simultaneous) snapshots of all of a VM’s virtual disks. It is not per LUN, though.
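From PowerCLI the operation is likewise per-VM. A sketch, with the VM name as a placeholder; on a vVols datastore this one request translates into array-side snapshots of each of the VM’s vVols:

```powershell
# Assumes an open Connect-VIServer session; the VM name is a placeholder.
New-Snapshot -VM (Get-VM "app01") -Name "pre-upgrade" -Memory:$true
```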
Q30. Will PowerCLI provide support for native vVols cmdlets?
A. Yes.
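One way to discover those cmdlets is to list the storage module’s commands. A sketch, assuming the module name used by recent PowerCLI releases:

```powershell
# List VASA- and SPBM-related cmdlets in the PowerCLI storage module.
Get-Command -Module VMware.VimAutomation.Storage -Name "*Vasa*", "*Spbm*"
```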
Q31. Is there documentation that shows exactly which files belong to each type of vVols object?
A. There are no “files”. The vVols are stored directly by the storage array and are referenced by the VM as it powers on. There is metadata linking a vVol to a particular VM, so a management UI could be created for the vVols of a VM.
Q32. Are there any NFS or SCSI conversions going on under the PE or on the array side?
A. Storage Containers are configured, and the PEs set up, to use NFS or SCSI for data in advance. The NFS traffic is unaltered NFS using a mount point and a path. SCSI IO is directed to a particular vVol using the secondary LUN ID field of the SCSI command.
Q33. Does SPBM work with datastore types other than vVols?
A. Yes. SPBM will separate all datastores (VMFS, vSAN, vVols, NFS) into “Compatible” and “Incompatible” sets based on the VM’s requirement policy.
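That compatibility split can be queried from PowerCLI. A minimal sketch, with the policy name as a placeholder:

```powershell
# Assumes an open Connect-VIServer session; the policy name is a placeholder.
$policy = Get-SpbmStoragePolicy -Name "Gold-Snapshots"
# Returns only the datastores that satisfy the policy's requirements.
Get-SpbmCompatibleStorage -StoragePolicy $policy
```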