Additional vMSC Qualifications for VPLEX Metro

During the initial VPLEX Metro qualification for the VMware Metro Storage Cluster (vMSC), VPLEX was qualified with the non-uniform host access mode together with VMware Native Multi-Pathing (NMP). With a recent update to the testing, VPLEX is now supported with the uniform host access mode.

Uniform Access Mode – What is it?

To understand uniform access mode, let us start with the more familiar non-uniform access mode and work our way from there.

In the more common VPLEX Metro deployment, a host connected to one of the two VPLEX clusters does not connect to the other cluster. This is referred to as the non-cross connected topology. Here is a representation of this topology:

In this configuration, the hosts can only do I/O with the VPLEX cluster that they are connected to. VPLEX, through AccessAnywhere™, presents the same storage on either side (a Distributed Virtual Volume). The advantage of such a configuration is that the two sides are isolated. Combined with the VPLEX Witness and VMware HA, this configuration provides for the automatic restart of VMs when there is a failure on one of the sides.

vMSC refers to this configuration as the ‘non-uniform’ host access configuration. This was the original configuration that was qualified with VPLEX Metro. (If you need more details on this, refer to Chad Sakac’s Virtual Geek blog here, Scott Lowe’s blog here, or Duncan Epping’s Yellow Bricks blog here.)

There are certain deployments in which customers would like to further enhance the availability offered by the non-uniform host access configuration; specifically, they would like to avoid a server restart upon a storage failure. Accomplishing this imposes two implementation requirements:

  1. Hosts need an alternate path to the storage (Translation: hosts connected to one side of a VPLEX Metro need to be able to access the same storage through the second side)
  2. Hosts need to be in a different failure domain than the storage (Translation: hosts need to be able to survive even when the storage might not. This is typically accomplished via fire cells within a data center or via separate floors)

An important side note – one of the rising trends we are seeing is VPLEX Metro deployed ‘within’ a data center. In this mode, customers want to protect:

  1. Across two different arrays within a data center
  2. Across equipment on two different floors or fire cells
  3. Across two different Vblocks (or other forms of converged infrastructure)

Here is a configuration that delivers on the requirements stated above:

In this configuration mode, using VPLEX Metro, the same storage volume is accessible from both sides. From a host perspective, it sees multiple paths to the same volume. This configuration is referred to as the cross connected configuration, and the red dashed paths are referred to as the cross connect paths. When the storage on one side fails (the entire layer from VPLEX down), the VPLEX Witness enables I/O on the second VPLEX cluster. The host continues to see the cross connect paths as available, so the loss of the storage connected to one side of the VPLEX Metro is converted into the loss of a set of redundant paths from the host’s perspective. As a result, there is no downtime when there is a storage failure in this configuration.
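To make the “storage failure becomes a path failure” behavior concrete, here is a minimal, purely illustrative Python sketch of a host’s multipath view of a distributed volume. The path names, site labels, and the select_path() helper are hypothetical stand-ins for what the multipathing layer does; the point is simply that when every local path dies, I/O continues over the surviving cross connect paths.

```python
# Illustrative model only: how a host's multipath view of a VPLEX distributed
# volume behaves in the cross connected (uniform) topology. Path names, sites,
# and the selection logic are hypothetical.

from dataclasses import dataclass

@dataclass
class Path:
    name: str
    site: str            # which VPLEX cluster the path lands on
    cross_connect: bool   # True for the longer-latency cross connect paths
    alive: bool = True

# Host sits in site A: two local paths plus two cross connect paths to site B.
paths = [
    Path("vmhba1:C0:T0:L0", site="A", cross_connect=False),
    Path("vmhba2:C0:T0:L0", site="A", cross_connect=False),
    Path("vmhba1:C0:T1:L0", site="B", cross_connect=True),
    Path("vmhba2:C0:T1:L0", site="B", cross_connect=True),
]

def select_path(paths):
    """Prefer a live local path; otherwise fall back to a live cross connect path."""
    live = [p for p in paths if p.alive]
    if not live:
        raise RuntimeError("all paths down - a VM restart would be required")
    local = [p for p in live if not p.cross_connect]
    return (local or live)[0]

print("Before failure:", select_path(paths).name)

# Site A storage fails end to end; the Witness keeps site B serving I/O,
# so only the local paths disappear from the host's point of view.
for p in paths:
    if p.site == "A":
        p.alive = False

print("After site A storage failure:", select_path(paths).name)
```

In the non-uniform topology there are no cross connect paths to fall back on, which is why the same failure there surfaces as a loss of all paths and is handled by a VM restart via VMware HA rather than by a path failover.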

vMSC refers to this configuration as the ‘uniform’ host access configuration. With the completion of a recent qualification, this configuration is now supported for VPLEX Metro.

So what’s the catch?

As with most good things in life, there are tradeoffs (And you thought there was no philosophy in storage!!).

For the configuration above, the cross connect paths have longer latencies than the non-cross connected paths. If the host has to use a cross connect path, the read latency will be longer than it would be in the non-uniform access mode configuration. Another consideration is that if all paths are simultaneously active, then along the cross connect paths, the I/Os may need to traverse the cross-site latency twice. To mitigate any increase in application latency, VPLEX supports a cross-site latency of up to 1 msec RTT in the uniform host access mode.
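As a rough back-of-the-envelope illustration of why the RTT limit matters, the cost of using a cross connect path can be modeled as one or two extra cross-site round trips on top of the local service time. The baseline service time below is an assumed placeholder, not a measured number; only the 1 msec figure comes from the supported limit mentioned above.

```python
# Back-of-the-envelope latency model for the uniform (cross connected) topology.
# local_service_ms is an assumed placeholder; 1.0 ms is the supported
# cross-site RTT limit for uniform host access.

local_service_ms = 0.5    # assumed local read service time
cross_site_rtt_ms = 1.0   # maximum supported cross-site RTT

# I/O issued over a cross connect path: host <-> remote VPLEX adds one RTT.
cross_path_ms = local_service_ms + cross_site_rtt_ms

# Worst case when the I/O must also cross the inter-cluster link,
# i.e. the cross-site latency is paid twice.
worst_case_ms = local_service_ms + 2 * cross_site_rtt_ms

print(f"I/O over a cross connect path : {cross_path_ms:.1f} ms")
print(f"worst case (RTT paid twice)   : {worst_case_ms:.1f} ms")
```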

A second aspect is that, since the latencies are short, customers have the option of using stretched fabrics. In such configurations, proper care needs to be taken not to extend a failure from one side of the fabric to the other. Among other things, fault isolation becomes a major design consideration in this configuration mode.

Some other questions / considerations

Here are some questions that we have been seeing from customers and the field.

(1) When is support for PP/VE going to be added under vMSC?

Engineers at VMware and EMC are working towards completing the vMSC qualification with PP/VE. Stay tuned! [Please note that EMC supports the use of PP/VE on VMware (and it is also supported within the base storage qual for VPLEX).]

(2) What if I need a higher latency together with the cross connect topology? Or: can I use the fixed path policy with NMP and use the cross connect topology over greater latencies?

While this is technically feasible, it is not currently a supported configuration. Please work with your account team to file an RPQ for this configuration.

(3) Can I deploy the cross connected topology without deploying the VPLEX Witness?

The benefit of the cross connected topology is the ability of the host to continue running when it loses access to storage on one side. The VPLEX Witness enables I/O on the surviving side, and this is what allows the hosts to continue running on the available alternate paths through the surviving VPLEX cluster. In other words, deploying without the Witness will not yield the core benefit of this topology.

(4) What about additional collateral?

Thought you’d never ask ;-). If you want to go deeper, here are documents that cover these and other topics in more detail:
