Amidst all the fun of EMC World, there is some really important news for the VPLEX Metro and VMware community that I wanted to ensure was not lost.
What was supported until now
Prior to this change, the official vSphere Metro Storage Cluster (vMSC) support stance was that VMware HA and vMotion were supported up to 5 msec RTT (Round Trip Time). There was an additional wrinkle: vMotion was supported up to 10 msec RTT with vSphere Enterprise Plus licensing, and VPLEX Metro supported it. However, VMware HA was not supported up to 10 msec RTT.
What has changed
The big change is that, as part of vMSC, both VMware HA and VMware vMotion are now supported with VPLEX Metro up to 10 msec RTT. This qualification has been completed with both PowerPath/VE and NMP (Native Multipathing) for the non-uniform access mode (i.e. non-cross-connected configuration). This support is available starting with vSphere 5.5 and GeoSynchrony 5.2.
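To make the new limits concrete, here is a minimal sketch that classifies a measured inter-site RTT against the support thresholds described above. The function name and wording of the return values are my own; only the 5 msec and 10 msec limits come from the support statement.

```python
# Sketch: classify a measured inter-site RTT against the vMSC support
# limits discussed above (5 msec previously; 10 msec now for both HA
# and vMotion with VPLEX Metro on vSphere 5.5 / GeoSynchrony 5.2).
# Function name and messages are illustrative, not an official tool.

def vmsc_support_status(rtt_msec):
    """Return a rough support classification for a measured RTT."""
    if rtt_msec <= 5.0:
        return "supported (within the original 5 msec vMSC limit)"
    elif rtt_msec <= 10.0:
        return "supported (new 10 msec limit, vSphere 5.5 + GeoSynchrony 5.2)"
    else:
        return "not supported (exceeds 10 msec RTT)"

print(vmsc_support_status(7.5))
```

A link that used to qualify only for vMotion (say, 7.5 msec RTT) now also qualifies for VMware HA under the updated statement.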
The VMware Knowledge Base article is updated here.
Many thanks to the VMware Ecosystem Engineering team as well as key technical leaders on both the VMware and EMC sides for helping drive this. This has been a long time coming.
On April 4th, 2014, as part of the Data Protection and Availability Division (DPAD) launch, there were three VPLEX and RecoverPoint items that were launched or GA'd:
VPLEX Virtual Edition – Availability late Q2
MetroPoint Topology – Joint capability of VPLEX and RecoverPoint – Availability Late Q2
VPLEX Integrated Array Services – Available now
This is the first in a series of posts to walk through what was launched / delivered.
The drivers towards a VPLEX Virtual Edition
Data center infrastructure is undergoing a massive shift. Virtualization in the data center has had a profound impact on customer expectations of flexibility and agility. Especially as customers get to 70+% virtualized, they have the potential to realize tremendous operational savings by consolidating management in their virtualization framework. In this state, customers typically do not want to deploy physical appliances and want everything handled from their virtualization context. Similar changes in networking and storage have meant that the basic infrastructure is now completely in software running on generic hardware. This is the software defined data center. VPLEX has been no stranger to this conversation. Especially given the very strong affinity of VPLEX to VMware use-cases, customers have been asking us for a software-only version of VPLEX. That is precisely what we have done. This past week, we launched VPLEX Virtual Edition – with a GA towards the end of Q2.
What is the VPLEX Virtual Edition and what does it do?
The VPLEX Virtual Edition (VPLEX/VE) is a vApp version of VPLEX designed to run on an ESX Server Environment to provide continuous availability and mobility within and across data centers. We expect this to be the first in a series of virtual offerings. In comparison to the appliance, all the VPLEX directors are converted into vDirectors. For the first release, the configuration we support is called the ‘4×4’ – this will support four vDirectors on each side of a VPLEX Metro. From a configuration standpoint, that is the equivalent of two VPLEX engines on each side of a VPLEX Metro cluster. Each side of VPLEX/VE can be deployed within or across data centers up to 5 msec apart.
VPLEX/VE supports iSCSI for front-end and back-end connectivity. For the initial release, we have decided to support only the VPLEX Metro equivalent use-cases. Most of the VPLEX Local related use-cases can be addressed by a combination of vMotion and storage vMotion. To list the use-cases:
The ability to stretch VMware HA / DRS clusters across data centers for automatic restart and protecting VMs across multiple data arrays
Load balancing of virtual machines across data centers
Instant movement of VMs across distance
From a performance perspective, VPLEX/VE is targeted at workloads up to 100K IOPS; of course, the true performance will depend on your workload. The deployment is designed to be customer-installable from the get-go, with an installation wizard that guides you all the way through. When GA'd, please refer to the release notes to determine what kind of ESX servers are supported for VPLEX/VE. The vDirectors need to be loaded onto separate ESX servers such that no two vDirectors are deployed on the same ESX server; this gives the system maximum availability. Running application VMs on the same ESX server as the one running a vDirector is supported. This means that you should be able to use your existing ESX servers (subject to the minimum requirements that will be established for the vDirectors).
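The anti-affinity rule above (no two vDirectors on the same ESX server) is easy to express as a quick check. This is a toy sketch; the vDirector-to-host mapping is made-up input, not something pulled from vCenter.

```python
# Sketch: verify the vDirector anti-affinity rule described above --
# no two vDirectors may run on the same ESX server. The mapping of
# vDirector name -> ESX host is illustrative input only.

from collections import Counter

def placement_is_valid(vdirector_to_host):
    """True if no ESX host runs more than one vDirector."""
    host_counts = Counter(vdirector_to_host.values())
    return all(count == 1 for count in host_counts.values())

# A valid 4-vDirector layout for one side of the 4x4 configuration:
layout = {
    "vdirector-1": "esx-01",
    "vdirector-2": "esx-02",
    "vdirector-3": "esx-03",
    "vdirector-4": "esx-04",
}
print(placement_is_valid(layout))  # True

# Invalid: two vDirectors stacked on esx-01
layout["vdirector-2"] = "esx-01"
print(placement_is_valid(layout))  # False
```

In a real deployment you would enforce this with DRS anti-affinity rules rather than an ad-hoc script; the point is simply that the 4x4 layout spreads the four vDirectors on each side across four distinct hosts.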
The way that an I/O will flow is from the application VM (via iSCSI) to the VPLEX/VE vDirector VM and from there to the iSCSI array connected to VPLEX/VE. Speaking of which, right out of the chute, we support VNXe arrays. We will add other iSCSI arrays over time.
One of the more interesting changes that we have made with VPLEX/VE is the way that it is managed. Since VPLEX/VE is tailored for ESX servers only, our management interface to VPLEX/VE is completely through the vSphere Web Client. Here are some screenshots of how VPLEX/VE management looks. The coolest part for me is that you can go from creating your VMs, setting up an HA cluster, all the way to creating a distributed volume all within the vSphere Web Client. _VERY_ nifty! In addition, we have now enabled VPLEX/VE events and alarms to show up in the vCenter Event Viewer. For all practical purposes, this is a seamless vApp designed for your vSphere environments.
When a distributed volume is provisioned for VPLEX/VE, it is configured as a vmfs 5 volume and made available as a resource to vCenter.
With VPLEX/VE, we have had the opportunity to do a lot of things differently. One of our guiding principles was to not think of it as a storage product but rather to think of it as a product designed for VMware environments and targeted at an ESX administrator. Naturally, I cannot wait to see this get into our customers' hands and to see whether we have hit our marks and what adjustments are needed.
Equally importantly, this is a strategic imperative within EMC. You can expect to see a lot more of our product portfolio embarking on the software defined journey. There are a lot of intersects within the portfolio that we have only begun to explore (HINT: Composing software is a lot easier than composing hardware!).
Frequently Asked Questions
Since launch, I have seen a ton of questions on Twitter, on internal mailing lists, and from people directly or indirectly reaching out to me. So, here are the collated answers:
Is VPLEX/VE available right now?
A: VPLEX/VE will GA towards the end of Q2.
Will VPLEX/VE support non-EMC arrays?
A: As with VPLEX, we expect to qualify additional EMC and non-EMC arrays over time based on customer demand. Expect new additions fairly quickly after GA.
Will I be able to connect VMs from ESX clusters that are not within the same cluster as the one hosting VPLEX/VE?
A: Yes.
Will I be able to connect non-VMware ESX hosts to VPLEX/VE?
A: At this point, we only support VMware iSCSI hosts connecting to VPLEX/VE. This is one of the reasons the management has been designed within the vSphere Web Client
Can I connect VPLEX/VE with VPLEX?
A: VPLEX/VE is deployed as a Metro equivalent platform (i.e. both sides). Connecting to VPLEX is not supported. If there are interesting use-cases of this ilk, we would love to hear from you. Please use the comments section below and we can get in touch with you.
Is RecoverPoint supported with VPLEX/VE?
A: Not today. So, to be explicit: the MetroPoint topology, which also launched last week, is not supported with VPLEX/VE either.
Is VPLEX/VE supported with ViPR?
A: At GA, ViPR will not support VPLEX/VE. Both the ViPR and VPLEX/VE teams are actively looking at this.
Does VPLEX/VE support deployment configurations other than a 4×4?
A: Currently, 4×4 is the only allowed deployment configuration. Over time, we expect to support additional configurations primarily driven by additional customer demand.
Will VPLEX/VE be qualified under vMSC (vSphere Metro Storage Cluster)?
If you are interested in a CliffsNotes version of this, here is a short video that Paul and I did to walk through the virtual edition:
One of the fun parts of my job is interacting regularly with customers at various forums. Often, these discussions result in insights about what the product needs to do and where we need to focus. Once in a while, they tell us how our messaging is landing – what our customers and field actually hear. Here was one such exchange at an EBC this past week.
ME: VPLEX is the best thing since sliced bread (paraphrasing my hyperbole here :-))
CUSTOMER: Does VPLEX do migrations?
ME: Yes, VPLEX does Mobility and Availability.
CUSTOMER: Understood, and we are really excited about that. However, can VPLEX allow me to refresh my storage?
ME: (confused) Yes, it can.
CUSTOMER: Meaning if I have VPLEX in place, I can bring a new storage array in and migrate my current array to that new array without any disruption to the host?
ME: (Getting the hang of what's going on) Yes.
CUSTOMER: And do the arrays need to be of the same type?
ME: No. They can be different.
CUSTOMER: Can VPLEX do this for non-EMC arrays as well?
ME: Yes. We have over 45 different array families supported and more families are being added every month.
CUSTOMER: Nothing in the product details (white papers / collateral) describes this use-case …
ME: (Sheepishly) Valid feedback – I will take that back to my team to figure out what we can do about this.
For the customer who helped bring this to my attention (and you know who you are) – many thanks. You were TOTALLY right. Here is what happened that very day: I got a note from our field team asking which products to use for a migration activity, and the same discussion we had that morning happened as bits on the wire later that day. So, independent of what we say or don't say about migrations, it is clear that the message is not being heard as much as it should be. This post is an attempt to begin to set the record straight on VPLEX and migrations.
So, VPLEX definitely does do migrations. There are two variants of the use-case.
Tech-refresh of an array
Here a new array is brought in (either because a lease on an existing array has run out or because a new array has been purchased). Volumes from an existing array or arrays are migrated onto the new arrays. Typically, the older arrays are then retired or repurposed for other usage.
Load balancing across arrays
Here there are multiple arrays behind VPLEX. Either because of capacity reasons or performance reasons or the need for some specific capability, volumes are moved from one array to another. Both arrays continue to be kept in service.
VPLEX Local can be used to accomplish both use-cases above. VPLEX Metro adds one more variant to the above use-case(s) – Migrating across arrays across data centers. In other words, VPLEX Metro extends the pool of arrays that you can manage beyond the confines of your data center.
Specifically, here are things to remember about VPLEX migrations:
VPLEX migrations are non-disruptive. In other words, the application does not need to be stopped in order to migrate storage.
VPLEX is fully heterogeneous. It supports both EMC and non-EMC arrays. My standard note to customers: always refer to the VPLEX Simple Support Matrix on Powerlink.
The source array and target array of the migration can be any of the supported set of arrays. In other words, you do _NOT_ need to migrate from like to like.
How do migrations in VPLEX work?
Here is the basic process of migrations within VPLEX:
The new array is connected to VPLEX, and its volumes are exposed to VPLEX
From here, you have two options (really dependent on the scale of the operations):
Migrate on a volume by volume basis
Migrate as a batch (especially useful for the tech refresh piece)
From then on, VPLEX does its thing and ensures that the volumes on the two arrays are in sync. During this time, I/Os from the host continue; as far as the host is concerned, it continues to see volumes from VPLEX. Host READ I/Os are directed to the source leg. Host WRITE I/Os are sent to both legs of the mirror (once the corresponding section has been copied to both legs). After the volumes are in complete sync, I/Os continue until you decide to disconnect the source volume. It is worth pointing out that even after the volumes are in sync, you have the option to remove the destination volume and go back to the source. You make the call on when to disconnect the source volume.
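The read-from-source / write-to-both-legs behavior described above can be sketched as a toy model. This is not VPLEX code, just an illustration of the mirroring semantics during a migration; all class and method names are invented.

```python
# Toy model of mirror-leg I/O behavior during a VPLEX migration:
# reads are served from the source leg; writes go to both legs once
# the corresponding section has been copied. Illustrative only.

class MirroredVolume:
    def __init__(self, num_blocks):
        self.source = [0] * num_blocks   # source leg
        self.target = [0] * num_blocks   # destination leg
        self.copied = [False] * num_blocks  # per-block sync state

    def background_copy(self, block):
        """The migration engine copying one block source -> target."""
        self.target[block] = self.source[block]
        self.copied[block] = True

    def host_read(self, block):
        return self.source[block]  # reads directed to the source leg

    def host_write(self, block, value):
        self.source[block] = value
        if self.copied[block]:
            self.target[block] = value  # both legs, once in sync

vol = MirroredVolume(4)
vol.host_write(0, 42)    # block 0 not yet copied: source leg only
vol.background_copy(0)   # background copy catches block 0 up
vol.host_write(0, 99)    # now mirrored to both legs
print(vol.source[0], vol.target[0])  # 99 99
```

Once every block's sync flag is set, the two legs are identical and the source can be detached whenever you choose.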
From the host standpoint, quite literally, it does not know that anything has changed.
More questions that I get about migrations:
Can I control the amount of impact on my host I/Os?

A: Before answering this question, it is important to understand why there may be impact (if any). FWIW, this explanation is true of all storage virtualization solutions doing migrations; anyone who tells you otherwise is factually incorrect.

The host connected to VPLEX has a fixed set of paths to the virtual target presented by VPLEX. The same is true for the target arrays connected to VPLEX on the back-end. Think of these as fixed-capacity pipes carrying your I/Os from the front-end to the back-end of VPLEX. Along these same pipes, VPLEX needs to perform copy I/Os (reads from the source leg and writes to the target leg). So, in a fixed pipe, a migration adds additional I/Os; some of the capacity in that I/O pipe gets consumed by the migration. How much of that impacts the host depends on how full the I/O pipe was in the first place.

To account for the case when the pipe is nearly full, VPLEX gives you a knob with four settings for the rate of migration (ASAP / High / Medium / Low). As you can imagine, the higher the copy rate, the higher the potential impact on host I/Os. So, if you are concerned about host I/O impact, start with the copy rate set to Low and increase it from there.
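The fixed-pipe argument above is just arithmetic, so here is a back-of-the-envelope sketch. All numbers are illustrative, not VPLEX specifications or real copy-rate values.

```python
# Back-of-the-envelope sketch of the "fixed pipe" argument: path
# capacity is shared between host I/Os and migration copy I/Os.
# All IOPS figures below are made up for illustration.

def host_headroom(pipe_capacity_iops, host_iops, copy_rate_iops):
    """Remaining pipe capacity during a migration (negative = host impact)."""
    return pipe_capacity_iops - (host_iops + copy_rate_iops)

# A lightly loaded pipe absorbs a high copy rate with room to spare...
print(host_headroom(pipe_capacity_iops=100_000,
                    host_iops=30_000, copy_rate_iops=20_000))   # 50000

# ...while a nearly full pipe goes negative: host I/Os will be
# impacted, which is why you would start with the copy rate at Low.
print(host_headroom(pipe_capacity_iops=100_000,
                    host_iops=95_000, copy_rate_iops=20_000))   # -15000
```

The knob settings (ASAP / High / Medium / Low) effectively choose the `copy_rate_iops` term in this trade-off.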
What should my licensing model be if I have to migrate from old storage to new storage?

A: This is more relevant to the tech-refresh variant of the use-case. We heard feedback from a _LOT_ of customers who wanted to use VPLEX for migrations but balked at having to license the storage that they were going to migrate from (i.e. the source array). To help with this, we have introduced a free 180-day migration license for VPLEX, available with the purchase of an EMC array. So long as VPLEX is licensed on this new array, you have a 180-day license for unlimited capacity to migrate onto the array behind VPLEX. This is especially compelling if you are going through a storage consolidation phase in your data center.
Along the way, we have had some tremendous customer success stories with respect to migrations: from customers who have reduced their migration times by 90+% to customers who no longer schedule maintenance windows for migrations or engage external professional services. We clearly have a lot of work to do educating everyone about VPLEX and its role in migrations, but this should be a good starting point for the conversation.