Category Archives: EMC Ecosystem

ViPR 2.0: New use-cases to support VPLEX and RecoverPoint

The GA of ViPR 2.0 was announced in time for EMC World. While there are significant announcements in ViPR 2.0, I will focus on the pieces of the new integration that benefit VPLEX and RecoverPoint.

A quick recap of what was supported prior to the 2.0 release is available here.

Support for Snaps and Clones on arrays behind VPLEX

In the 2.0 release, ViPR now supports full life-cycle management of snaps and clones on arrays behind VPLEX. This gives customers a single pane of glass for managing snaps and clones. This seamless experience makes it easy for customers to take advantage of the performance and scale of these capabilities on the underlying arrays without compromising on ease of use. Here is a demo of this capability.

Setting up a Local Mirror (RAID-1)

Another addition in the ViPR 2.0 release is the ability to add a local mirror leg to a given virtual volume to create a RAID-1. This allows the volume to be protected across arrays. Here is a demo of this capability:

VPLEX and RP Protection

One of the big additions in the ViPR 2.0 release is common management for RecoverPoint within the VPLEX context. This allows RecoverPoint protection for VPLEX volumes to be accomplished through the same user interface. Combined with end-to-end VPLEX provisioning through ViPR, you can now provision VPLEX volumes complete with RecoverPoint protection. Please note that ViPR 2.0 does not support the MetroPoint topology; that is targeted for a future release.

Updated Provisioning use-case

The VPLEX provisioning workflow has been updated since ViPR 1.0. Here is a demo of the updated provisioning workflow.

Talkin’ about VPLEX and RecoverPoint Part 4

The past three editions of this series have been very popular. Our marketing and CSE teams have created some new videos in support of the Q2 launches for VPLEX and RecoverPoint. So here are twelve videos for you to dig into.

  1. Why VPLEX for VMware Environments: Don Kirouac does an excellent job explaining how VPLEX integrates with VMware environments.
  2. Why VPLEX for Oracle RAC: Don Kirouac from the Corporate Systems Engineering team talks about the integration between Oracle RAC and VPLEX Metro to deliver continuous availability.
  3. VPLEX with XtremIO: Charlie Kraus from the Product Marketing team explains how VPLEX delivers value to XtremIO environments.
  4. ViPR with VPLEX and RecoverPoint: Devon Helms from the Product Marketing team explains how provisioning for VPLEX and RecoverPoint can be made simple with the ViPR Controller.
  5. Why VPLEX for SAP: Jim Whalen from the Solutions Marketing team explains how VPLEX can help deliver SAP application availability.
  6. Why VPLEX for Microsoft Hyper-V Environments: Charlie Kraus talks about how VPLEX integrates with Microsoft Hyper-V environments to deliver mobility and availability.
  7. VPLEX with Vblock: Charlie Kraus delves into how VPLEX integrates with and provides value to a Vblock environment.
  8. VSPEX Solutions for VPLEX and RecoverPoint: Karl Connolly from the VSPEX Marketing team presents VSPEX solutions for VPLEX and RecoverPoint.
  9. MetroPoint topology: Paul Danahy and I walk through the benefits and value propositions of the MetroPoint topology.
  10. VPLEX Virtual Edition: Paul Danahy and I introduce the VPLEX Virtual Edition solution and explain why we think it is such a game changer.
  11. Simplified Provisioning with VPLEX: Paul Danahy and I talk through how VPLEX Integrated Array Services simplifies provisioning with VPLEX.
  12. EMC AppSync for RecoverPoint: Parag Pathak from the AppSync Marketing team and Devon Helms talk about the integration between AppSync and RecoverPoint to deliver application-consistent protection.

ViPR with VPLEX

ViPR was launched with tremendous fanfare at EMC World last year (how time flies!). The product went GA in Sept 2013.

The key premise behind ViPR is that data center management has become too complex. As obvious as this problem is, it is a herculean task to address. I doff my hat to the ViPR team. They have taken a very complex challenge and built a product that they can be justifiably proud of.

Over the last few months, a number of customers have deployed VPLEX together with ViPR and used ViPR to simplify their management infrastructure. Our team has put together some demos to help explain how ViPR and VPLEX integrate.

We will be adding voice-overs at a later point in time but it seemed useful to make these available to customers to help them understand the value of VPLEX with ViPR.

Configuring VPLEX within ViPR

This demo shows you how to configure VPLEX within the ViPR context. ViPR takes over after the basic configuration of VPLEX (i.e., once VPLEX has been set up and is reachable on the network).

  1. A VPLEX cluster gets configured as a virtual array within ViPR. For a VPLEX Metro, this equates to creating two virtual arrays.
  2. From there, you need to expose the network elements from the SAN to the specific VPLEX cluster.
  3. You can now create virtual pools describing what type of storage to provision. Based on the SAN connectivity exposed, you get options for which storage can be presented to which VPLEX cluster, and based on the configuration of the pools, you can assign different properties to the VPLEX pools.

Note that this is a one-time configuration for a given virtual pool. This sets you up for end-to-end provisioning!
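To make that flow concrete, here is a minimal sketch in Python. The `ViPRClient` wrapper and all of its method names are hypothetical stand-ins for calls against the ViPR Controller (they are not the real SDK or REST endpoints); the point is only to show the order of the one-time setup.

```python
# Hypothetical ViPR client -- method names are illustrative stand-ins,
# not the real ViPR SDK/REST API. Shows the one-time setup order.
from vipr_client import ViPRClient  # hypothetical library

vipr = ViPRClient("https://vipr.example.com", user="admin", password="...")

# 1. Each VPLEX cluster becomes a ViPR virtual array; a VPLEX Metro
#    therefore yields two virtual arrays.
varray_a = vipr.create_virtual_array("Site-A-VPLEX")
varray_b = vipr.create_virtual_array("Site-B-VPLEX")

# 2. Expose the SAN networks to each virtual array so ViPR knows which
#    storage can reach which VPLEX cluster.
vipr.assign_network(varray_a, fabric="FABRIC_A")
vipr.assign_network(varray_b, fabric="FABRIC_B")

# 3. Create a virtual pool describing the class of storage to provision;
#    distributed high availability spans both virtual arrays.
gold_pool = vipr.create_virtual_pool(
    name="vplex-distributed-gold",
    varrays=[varray_a, varray_b],
    high_availability="vplex_distributed",
    drive_type="FC",
)
```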

Provisioning VPLEX within ViPR

This now operationalizes what was set up in the prior demo. The first step is selecting the virtual array and a virtual pool and creating a distributed volume. The next step is exporting this volume to the host. No zoning, no moving between multiple GUIs, all available with ease.
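Continuing with the hypothetical client from the setup sketch above, the provisioning demo boils down to two calls: create the distributed volume, then export it to the host.

```python
# Provisioning with the hypothetical ViPRClient from the setup sketch.
# Step 1: create a distributed volume from a virtual array + pool.
volume = vipr.create_volume(
    name="app-data-01",
    varray=varray_a,
    vpool=gold_pool,
    size_gb=500,
)

# Step 2: export the volume to the host. ViPR orchestrates the zoning
# and the VPLEX storage-view changes -- no separate SAN GUI required.
vipr.export_volume(volume, host="esx01.example.com")
```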

Deprovisioning VPLEX within ViPR

This is the flip side of the prior demo. Here, the volumes that were exposed to the host are deprovisioned. Again, it is the same paradigm as before: the orchestration happens through the ViPR Controller and it is all in one interface.

Migration of Pools through ViPR

This takes the migration use-case and converts it into the catalog view. The change-pool catalog request results in the migration of volumes from one array to another. The orchestration is at the pool level, so you can migrate from one array or one tier to another.
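With the same hypothetical client, a change-pool catalog request is a single call; ViPR drives the non-disruptive VPLEX migration underneath.

```python
# Hypothetical catalog call: moving the volume to a different virtual
# pool triggers a non-disruptive VPLEX migration under the covers.
silver_pool = vipr.get_virtual_pool("vplex-distributed-silver")
vipr.change_virtual_pool(volume, target_vpool=silver_pool)
```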

This is just the beginning – we are looking at more complex use-cases to deliver a seamless experience to our end customers. You will hear more about this in the near future. What do you think?

PowerPath: Auto standby for VPLEX

Autostandby as a capability has been available in PowerPath for over a year and a half. It must be something in the zeitgeist, but all of a sudden I have seen a couple of threads from customers and the field. And these threads have covered the entire range: from customers who are positively gushing about this capability, to questions about how it works, to operational questions like what tweaks are possible or not possible.

The background behind autostandby

We started down the autostandby road with some crucial observations:

    Most host I/O operations in a sequence are correlated to each other. In other words, random I/O workloads, while they do exist, are rare during customer operations.

(And yes, I realize that any generalization is dangerous territory. So, remember, we are following the 80:20 rule here).

    VPLEX has a read cache. To take advantage of this, you want to maximize the likelihood that read-type I/Os encounter cache hits, thereby reducing the latency for these I/Os.

Translation: If you combine the two observations above, then, for better performance, you want I/Os from a given host to a given volume to be directed to a given set of directors as much as possible.

Finally, let's bring the distance component into this. In particular, the focus here is on the cross connect (Additional vMSC Qualifications for VPLEX Metro). In a cross-connected configuration, there is a latency advantage to having I/Os directed to the local cluster. Otherwise, I/Os are subjected to the cross-site round-trip latency penalty. By the way, this is one of the reasons we have chosen to restrict the supported latency envelope for cross-connected configurations to 1 msec RTT.

The solution

Working with the PowerPath team, we set out to address the design goals outlined above. PowerPath already has a mechanism for designating paths that should not be used for normal multipathing: manual standby. Standby paths (if alive) become usable only once all the primary (non-standby) paths have failed.

For VPLEX Metro cross-connected environments, which paths should be on standby depends on where the host is located relative to the VPLEX clusters. The host paths connected to the local VPLEX cluster should be the *active* paths, whereas those connected to the remote VPLEX cluster should be the *standby* paths. As a result, the path setting needs to be automatic and applied at scale across all hosts.

How does the solution work?

A lot of the recent questions have been focused on how the path selection algorithm works. So, at a high level, here goes (a code sketch of the selection logic follows the list):

  • PowerPath measures the latency of SCSI INQUIRY commands issued to each path
  • PowerPath determines the minimum path latency associated with each VPLEX cluster / frame
  • The VPLEX cluster / frame with the lowest minimum latency is designated as the preferred cluster.
    1. Each host sets the preferred cluster independently, so each host affinitizes correctly to the appropriate VPLEX cluster
    2. If the minimum latencies of the two clusters are identical, the preferred designation is simply applied to one cluster or the other
  • The paths associated with the preferred VPLEX cluster are set to active mode. User-set active/standby always takes precedence over auto selection, so if those paths were previously set manually to standby, those settings will not be overruled.
  • The paths associated with the non-preferred VPLEX cluster are set to autostandby – the same caveat as in the previous bullet applies
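Here is a minimal Python sketch of that selection logic. The `Path` structure and the function are my own illustration of the steps above, not PowerPath code; assume each path carries its measured INQUIRY latency, the VPLEX cluster it lands on, and any manually set mode.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Path:
    name: str
    cluster: str                     # VPLEX cluster/frame the path lands on
    latency_us: float                # measured SCSI INQUIRY latency
    user_mode: Optional[str] = None  # "active"/"standby" if set manually

def proximity_autostandby(paths: List[Path]) -> Dict[str, str]:
    # Minimum observed INQUIRY latency per VPLEX cluster/frame.
    min_lat: Dict[str, float] = {}
    for p in paths:
        min_lat[p.cluster] = min(min_lat.get(p.cluster, float("inf")),
                                 p.latency_us)

    # The cluster with the lowest minimum latency is preferred; an exact
    # tie is broken arbitrarily (here, by dictionary iteration order).
    preferred = min(min_lat, key=min_lat.get)

    modes: Dict[str, str] = {}
    for p in paths:
        if p.user_mode is not None:
            modes[p.name] = p.user_mode  # manual settings always win
        elif p.cluster == preferred:
            modes[p.name] = "active"
        else:
            modes[p.name] = "autostandby"
    return modes
```

Because each host runs this selection independently, a host at site A affinitizes to cluster A while a host at site B affinitizes to cluster B.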
PowerPath versions where autostandby for VPLEX is supported

Here are the minimum versions where autostandby for VPLEX is supported:

  • VMware: PP/VE 5.8
  • Linux: PP 5.7
  • Windows: PP 5.7
  • AIX: PP 5.7
  • HPUX: PP 5.2
  • Solaris: PP 5.5

Frequently Asked Questions

For a given distributed volume, if there are multiple paths on the cluster chosen as the preferred cluster, do all of those paths get utilized?

Yes.

What is the frequency of the path latency test? What triggers it?

Path latency is evaluated for autostandby at boot time (if autostandby is enabled), during runtime when the feature is turned from off to on, or when a user issues a reinitialize from the command line.

What is the minimum latency difference between two paths before one is set to autostandby? What is the default, and is it settable?

The granularity varies from platform to platform (it depends on the tick granularity of the OS). However, the granularity is really, really small and is not settable.

I have a VPLEX Metro deployment in which the cross-connect latency is extremely small. I do not need the autostandby algorithm. Can I turn it off?

Yes, you can turn it off; refer to the PowerPath administration guide for how. Now, here is the counter-argument: if you expect your I/Os to have any level of read cache hits, then it is still a good idea to leave the autostandby algorithm turned on.

On failure of all active paths, the standby paths are made active. When the original paths return, does the user have to take any steps to return to the original configuration, or does the pathing revert on its own?

The pathing will automatically revert to the original state as soon as an active path comes back alive.

Note

PowerPath also has an autostandby mode that was introduced to handle flaky paths (IOs-Per-Failure autostandby). This blog is focused on the VPLEX portion of autostandby, referred to as proximity-based autostandby.

Mobility and Availability Go Xtrem

Have you recently heard about this new all-flash array from EMC? It might have gone past your RSS feeds, your Twitter timelines, your blog rolls and your press release markers.

XtremIO

Of course I am kidding. Unless you have been hiding under a rock, or data storage is not a meaningful technology category for you, you could not possibly have missed the tremendous launch that XtremIO just had. On second thought, even if you were hiding under a rock, the EMC marketing team would have found a way to get to you. The reception from customers, partners and competitors to the XtremIO launch has been overwhelming. Customers have been raving about the XtremIO technology, and partners are excited to sell it. Competitors – well, let's just say that it has been interesting, to say the least: there were Twitter feuds, ad wars, positioning conversations, good-natured ribbing and some downright FUD. And so that I am above board, I am sure we have done our fair share of all of the above. It keeps life fun and interesting in the tech space for all of us.

Itzik covers XtremIO in all its gory glory in his blog posts. All of these are highly recommended reads if you want to learn about XtremIO.

For my part, I will focus this blog on the intersection between VPLEX and XtremIO. I am already seeing a ton of interest in our customer base and our field in this combination. Part of my focus will be to clarify which use-cases we see customers deploy VPLEX in front of XtremIO for, as well as to answer some of the questions we are getting.

Use Cases for XtremIO and VPLEX

Load balancing / Operational Flexibility

This is one we have seen a lot of customers use VPLEX with XtremIO for.

I will be the first one to admit: there are customers who put all their workloads on all-flash arrays and there are customers who do not. If you are in the first category, this use-case does not apply much to you. If you are in the second bucket (which is the overwhelming majority of the customers I talk to), then you are deploying some of your workloads on all-flash arrays and most on hybrid or non-flash arrays. In this mode, customers have workloads that belong on flash, but not all the time. In other words, these workloads have a temporal performance need: they are temporarily resident on flash before moving back to the hybrid array tiers. Because of this, customers put VPLEX in front of XtremIO combined with other arrays (we OBVIOUSLY love it when these other arrays are VMAX/VNX, but they do not have to be). The workload largely resides on the non-all-flash array, moves to the all-flash array temporarily, and then is moved back once the associated performance need diminishes. We typically see this with IT shops that operate on storage charge-back models based on SLAs. This way, the charge-back costs are kept as low as possible.

Cross Array Availability

This use-case is certainly not unique to all-flash arrays by any means. However, customers are increasingly using VPLEX as a cross-array data protection tool. The value of doing this is that if you happen to need downtime on one array (either planned or unplanned), then with VPLEX you can mirror volumes across two arrays and thereby accomplish a higher level of protection. With flash arrays, we typically see customers protect between multiple flash arrays. An interesting variant of this use-case is customers deploying VPLEX Metro within the data center and using cross-array availability to protect across two entirely different failure domains within the same data center (e.g., two fire cells, or two different floors). One note of caution: since the protection mirrors in this case (RAID-1s) are synchronous mirrors, it is worth remembering, for flash array customers especially, that the latency of your I/O will be governed by the slowest array that is part of the RAID-1 volume. Given this, it is beneficial for the arrays forming the RAID-1 legs to have similar latency characteristics. In the case of all-flash arrays, this means that for a typical flash RAID-1, all legs should be all-flash. (A small worked sketch of this effect follows.)
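To illustrate why mixing legs hurts, here is a toy calculation (the numbers are made up purely for illustration, not measurements): a synchronous RAID-1 write completes only when the slowest leg acknowledges.

```python
# Toy model: a synchronous RAID-1 write acknowledges only after every
# leg has acknowledged, so effective write latency is the max over legs.
def mirror_write_latency_ms(leg_latencies_ms):
    return max(leg_latencies_ms)

# Illustrative (made-up) numbers:
all_flash = mirror_write_latency_ms([0.5, 0.6])  # two flash legs
mixed     = mirror_write_latency_ms([0.5, 5.0])  # flash + spinning disk
print(all_flash)  # 0.6 ms -- flash-like latency is preserved
print(mixed)      # 5.0 ms -- the slow leg dominates
```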

Non-disruptive Tech Refresh

Another big use-case that we hear about from customers in the all-flash-array space is future flexibility. A lot of the all-flash-array platforms are continuing to evolve rapidly, with newer versions available sooner rather than later. As a result, customers have felt the need to future-proof themselves and enable more seamless migrations to those newer platforms. VPLEX, because of its ability to migrate non-disruptively (Does VPLEX do migrations?), becomes a logical choice for customers looking for this option. The same also applies to customers who anticipate their future flash needs growing. VPLEX provides a means to present and aggregate flash storage.

Long distance DR protection with RecoverPoint

Here is a two-fer. With the RecoverPoint splitter built into the VPLEX platform, VPLEX can be used for DR protection for XtremIO. Given the heterogeneous support of VPLEX and RecoverPoint, you can place the DR protection copy on any EMC or non-EMC array. RecoverPoint will also give you continuous data protection in addition to continuous remote replication. This means that a combination of VPLEX and RecoverPoint will give you HA and DR in combination with XtremIO.

On-boarding

Another form of migration – this one to move data onto the flash array. Often, customers have more than one array type and are looking to move a portion of their workload from those disparate arrays onto XtremIO. Traditionally, this would mean figuring out a way to copy the data over (likely different for every combination of arrays). Putting VPLEX in front of the non-flash arrays as well as XtremIO enables a seamless and uniform migration experience between the source array (any one of the 45+ EMC and non-EMC arrays supported by VPLEX) and XtremIO. By the way, if you are moving lock, stock and barrel to XtremIO from existing arrays, you can use the free 180-day VPLEX Migration license (described here) available with the purchase of a new EMC array.

Heterogeneous Host connectivity

Another relative freebie. While I do not foresee this being the primary reason to put a VPLEX in front of an XtremIO, because of the vast host-side interoperability built over the years with VPLEX, you get connectivity for all the hosts in the VPLEX Support Matrix (AIX and HPUX, anyone?). I am sure that over time XtremIO will build this support natively. Until then, this can tide you over if you happen to have hosts and clustering infrastructure that are not on the XtremIO support matrix.

Josh Goldstein, VP of Product Management/Marketing for the XtremIO team, does an exceptional job describing the interplay between XtremIO and VPLEX here:

Questions we have seen thus far

What latency does VPLEX add to my all-flash workloads?

The most direct answer to this question is that it depends.

However, the real question behind the question is 'Am I losing the benefit of the latency reduction I got from my purchase of XtremIO?'. Again, the straight answer is that VPLEX will add latency to the mix. So the combination of VPLEX and XtremIO will, for most workloads (those that are not read cache-hit intensive), have higher latency than XtremIO alone. If you have workloads that need the absolute latency of XtremIO, then you should connect those hosts directly to XtremIO. However, such workloads are few and far between. If you are a typical customer with a typical workload, the more appropriate comparison is the latency incurred with a non-all-flash array, and here, for most workloads, VPLEX + XtremIO will come out ahead in terms of total latency. Now, the real answer will depend on the latency your application needs, your workload mix, and which of the use-cases above matter to you. From there, it becomes a conversation about the relative priorities between them, which will help you understand which workloads are suited for the VPLEX/XtremIO combination. (A toy latency model follows.)
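As a toy illustration of that trade-off (all numbers invented purely for illustration, not measurements of any product): model average read latency as the array's latency plus a fixed VPLEX hop, except on VPLEX read-cache hits.

```python
# Toy model of average read latency (all numbers invented for
# illustration; they are not measurements of any product).
def avg_read_latency_ms(array_ms, vplex_hop_ms, cache_hit_rate,
                        cache_hit_ms):
    # Cache hits are served from VPLEX; misses pay array + hop.
    miss = array_ms + vplex_hop_ms
    return cache_hit_rate * cache_hit_ms + (1 - cache_hit_rate) * miss

xtremio_direct = 0.5                                    # AFA alone
vplex_xtremio  = avg_read_latency_ms(0.5, 0.3, 0.3, 0.2)
vplex_hybrid   = avg_read_latency_ms(5.0, 0.3, 0.3, 0.2)
print(xtremio_direct, vplex_xtremio, vplex_hybrid)
# 0.5 vs ~0.62 vs ~3.77 -- VPLEX adds latency over a direct AFA, but
# VPLEX + XtremIO still beats a typical non-flash array in this model.
```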
As we get more questions, I will post them to this blog. If you are a VPLEX/XtremIO customer, we would love to hear from you!