Tag Archives: exchange

Does EMC FASTCache work with Exchange?

Posted on by

Short Answer: Yes!

In my dealings with customers I’ve been requesting performance data from their storage systems whenever I can to see how different applications and environments react to new features. Today I’m going to give you some more real-world data, straight from a customer’s production EMC NS480.

I’ve pulled various stats out of Analyzer for this customer’s Exchange server, which has 3 mail databases totaling about 1TB of mail stored on the NS480 over Fibre Channel. Since this customer is not extremely large (similar to most of our customers), they use this NS480 for pretty much everything from VMWare, SQL, and Exchange, to NAS, web/app content, and Business Intelligence systems. There is about 30TB of block data and another 100TB of NAS data. FASTCache is enabled for all LUNs and Pools with just 183GB of usable FASTCache space (4 x 100GB SSDs). So in this environment, with a modest amount of FASTCache and a very mixed workload, how does Exchange fare?

Let’s first take a look at the Exchange workload itself for a 24-hour period. (Note: There were no reads from the Exchange log LUNs to speak of, so I left that out of this analysis.)

Total Read IOPS for the 3 databases (the largest peak is a result of database maintenance jobs and the smaller peaks are due to backup jobs): It’s tough to see here due to the maintenance and backup peaks, but production IO during the work day is about 200-400 IOPS. As an aside, a source-deduplicating, incremental-forever backup technology such as Avamar could drastically reduce the IO load and duration of the nightly backup.

Total Write IOPS for the 3 databases: As expected, more changes occur in the databases during the work day.

Total Write IOPS for the 3 Log files: Log data is typically absorbed easily by SP cache, so FASTCache isn’t strictly necessary here, but I’m including it to show whether there is any value in using FASTCache with Exchange logs.

Now let’s look at the FASTCache hit ratios for this same set of data: (average of all 3 DBs)

First, the Read Activity: Here you can see that aside from the maintenance and backup jobs, FASTCache is servicing 70-90% of the Read IOPS. Keep in mind that a FASTCache miss could still be a cache hit if the data is in SP Cache. What’s interesting is that the nightly maintenance job appears to push the highest load.

And the Write Activity: Because EMC’s FASTCache implementation is a read/write cache, the benefit extends beyond just read IO. Here you see that FASTCache is servicing 60-80% of the writes for these Exchange databases. That’s a huge load off the back-end disks.

And the Log Writes: Since Log writes are usually not a performance problem, I would say that FASTCache is not necessary here, and the average 30% hit ratio shown here is not great. If you wanted to spend the time to tune FASTCache a bit, you might consider disabling FASTCache for Log LUNs to devote the FASTCache capacity to more cache friendly workloads.

All in all, you can see that for the database data, FASTCache is servicing a significant portion of the user-generated workload, reducing the back-end disk load and improving overall performance.

Hopefully this gives you a sense of what FASTCache could do for your Exchange environment, reducing backend disk workload for reads AND writes. I must reiterate, since an SP Cache hit is shown as a FASTCache miss, an 80% FASTCache hit ratio does not mean that 20% of the IOs are hitting disk. To illustrate this, I’ve graphed the sum of SP Cache Hits and FAST Cache Hits for a single database. You can see that in many cases we’re hitting a total of 100% cache hits.
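To make that point concrete, here is a minimal sketch of the math behind that combined graph. It assumes a hypothetical CSV exported from Unisphere Analyzer with columns named fastcache_hit_pct and sp_cache_hit_pct per polling interval; real exports use different names and layouts, so treat this purely as an illustration.

    import csv

    def combined_cache_hit_percentages(path):
        """Sum FAST Cache and SP cache hit percentages per interval, capped at 100%."""
        totals = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                fast = float(row["fastcache_hit_pct"])  # FAST Cache hits as % of IOs
                sp = float(row["sp_cache_hit_pct"])     # SP (DRAM) cache hits as % of IOs
                totals.append(min(fast + sp, 100.0))    # whatever is left actually goes to disk
        return totals

    # Example usage with a hypothetical export file:
    # hits = combined_cache_hit_percentages("exchange_db1_analyzer.csv")
    # print(sum(1 for h in hits if h >= 99.9), "of", len(hits), "intervals at ~100% cache hits")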

Most interesting is the backup window where SP Cache is really handling a huge amount of the load. This is actually due to the Prefetch algorithms kicking in for the sequential read profile of a backup, something CX/VNX is very good at.

EMC’s New VNX Unified Storage Systems

Posted on by

Today, EMC announced the new VNX and VNXe Unified Storage platforms that merge the functionality of, and replace, EMC’s popular Clariion and Celerra products.  VNX is faster, more scalable, more efficient, more flexible, and easier to manage than the platforms it replaces.

Key differences between CX4/NS and VNX:

  • VNX replaces the 4Gb Fibre Channel Arbitrated Loop back-end buses with a 6Gb SAS point-to-point switched back end.
    • Fast and Reliable
  • VNX supports both 3.5” and 2.5” SAS drives in EFD (SSD), SAS, and NearLine-SAS varieties.
    • Flexible and Efficient
  • VNX has more cache, more front-end ports, and faster CPUs
    • Fast and Flexible
  • VNX systems can manage larger FASTCache configurations.
    • Fast and Efficient
  • VNX builds on the management simplicity enhancements started in EMC Unisphere on CX4/NS by adding application aware provisioning.
    • Simple and Efficient
  • VNX allows you to start with Block-only or NAS-only and upgrade to Unified later if desired, or start with Unified at deployment.
    • Cost Effective and Flexible
  • VNX will support advanced data services like deduplication in addition to FASTVP, FASTCache, Block QoS, Compression, and other features already available in Clariion and Celerra.
    • Flexible and Efficient

Just as with every manufacturer, newer products take advantage of the latest technologies (faster Intel processors and SAS connectivity in this case), but that’s only part of the story with VNX.

Earlier, I mentioned Application Aware Provisioning has been added to Unisphere:

Prior to Application Aware Unisphere, if tasked with provisioning storage for Microsoft Exchange (for example), a storage admin would take the mailbox count and size requirements, use best practices and formulas from Microsoft for calculating required IOPS, and then map that data to the storage vendors’ best practices to determine the best disk layout (RAID Type, Size, Speed, quantity, etc).  After all that was done, then the actual provisioning of RAID Groups and/or LUNs would be done.

Now with Application Aware Unisphere, the storage admin simply enters the mailbox count and size requirements into Unisphere and the rest is done automatically.  EMC has embedded the best practices from Microsoft, VMWare, and EMC into Unisphere and created simple wizards for provisioning Hyper-V, VMWare, NAS, and Microsoft Exchange storage using those best practices.

Combine Unisphere’s Application Aware Provisioning with the already-included vCenter integration and support for VMWare VAAI, and you have a broad set of integration from the application layer down to the storage system for optimum performance, simple and efficient provisioning, and unparalleled visibility.  This is especially useful for small to medium-sized businesses with small IT departments.

EMC has also simplified licensing of advanced features on VNX.  Rather than licensing individual software products based on the exact features you want, VNX has 5 simple Feature Packs plus a few bundle packs.  The packs are organized by overall purpose rather than by individual feature (i.e., Local Protection rather than Snapshots or Clones).

  • FAST Suite includes FASTVP, FASTCache, Block QoS, and Unisphere Analyzer
  • Security and Compliance Pack includes File-Level Retention for file data and Block Encryption
  • Local Protection Pack includes Snapshots for block and file, full copy clones, and RecoverPoint/CDP
  • Remote Protection Pack includes Synchronous and Asynchronous replication for block and file as well as RecoverPoint/CRR for near-CDP remote replication of block and/or file data.
  • Application Protection Pack extends the application integration by adding Replication Manager for application integrated replication and Data Protection Advisor for SLA based replication monitoring and reporting.

You can also get the Total Protection Pack which includes Local Protection, Remote Protection, and Application Protection packs at a discounted cost or the Total Efficiency Pack which includes all five.  That’s it, there are no other software options for VNX/VNXe.  Compression and Deduplication are included in the base unit as well as SANCopy.  You will also find that the cost of these packs is extremely compelling once you talk with your EMC rep or favorite VAR.

So there you have it — powerful, simple and efficient storage, unified management, extensive data protection features, simplified licensing, and class leading functionality (FASTVP, FASTCache, Integrated CDP, Quality of Service for Block, etc) in a single platform.  That’s Unified, That’s EMC VNX.

I didn’t have time to touch on VNXe here, but there is even more cool stuff going on there.  You can read more about these products here.

Compression better than Dedup? NetApp Confirms!

Posted on by

The more I talk with customers, the more I find that the technical details of how something works are much less important than the business outcome it achieves.  When it comes to storage, most customers just want a device that will provide the capacity and performance they need, at a price they can afford, and it had better not be too complicated.  Pretty much any vendor trying to sell something will attempt to make their solution fit your needs even if they really don’t have the right products.  It’s a fact of life: sell what you have.  Along these lines, there has been a lot of back and forth between vendors about dedup vs. compression technology and which one solves customer problems best.

After snapshots and thin provisioning, data reduction technology in storage arrays has become a big focus in storage efficiency lately, and there are two primary methods of data reduction: compression and deduplication.

While EMC has been marketing compression technology for block and file data in Celerra, Unified, and Clariion storage systems, NetApp has been marketing deduplication as the technology of choice for block and file storage savings.  But which one is the best choice?  The short answer is: it depends.  Some data types benefit most from deduplication, while others get better savings with compression.

Currently, EMC supports file compression on all EMC Celerra NS20, 40, 80, 120, 480, 960, VG2, and VG8 systems running DART 5.6.47.x+ and block compression on all CX4 based arrays running FLARE30.x+.  In all cases, compression is enabled on a volume/LUN level with a simple check box and processing can be paused, resumed, and disabled completely, uncompressing the data if desired.  Data is compressed out-of-band and has no impact on writes, with minimal overhead on reads.  Any or all LUN(s) and/or Filesystem(s) can be compressed if desired even if they existed prior to upgrading the array to newer code levels.

With the release of OnTap 8.0.1, NetApp has added support for in-line compression within their FAS arrays.  It is enabled per-FlexVol and as far as I have been able to determine, cannot be disabled later (I’m sure Vaughn or another NetApp representative will correct me if I’m wrong here.)  Compression requires 64-bit aggregates which are new in OnTap 8, so FlexVols that existed prior to an upgrade to 8.x cannot be compressed without a data migration which could be disruptive.  Since compression is inline, it creates overhead in the FAS controller and could impact performance of reads and writes to the data.

Vaughn Stewart, of NetApp, expertly blogged today about the new compression feature, including some of the caveats involved, and to me the most interesting part of the post was the following graphic he included showing the space savings of compression vs. dedup for various data types.

Image Credit: Vaughn Stewart, NetApp

The first thing that struck me was how much better compression performed than deduplication for all but one data type (virtualization will usually fare well with dedup because a typical environment has many VMs with the same operating system files).  In fact, according to NetApp, deduplication achieves very little savings, if any, for the majority of the data types here.
 
The light green bar indicates savings with both dedupe AND compression enabled on the same dataset.  In 5 out of 9 cases, dedup adds ZERO savings over compression alone.  I can’t help but wonder why anyone would enable dedup on those data types if they already had compression, since both features use storage array CPU resources to find and compress or dedup data.  I am aware that in some cases, dedup can improve performance on NetApp systems due to dedup-aware cache, but I also believe that any performance gain is directly related to the amount of duplication in the data.  Using this chart, virtualization is really the only place where dedup seems particularly effective and hence the only place where real performance gains would likely present themselves.
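To see why layering dedup on top of compression often adds so little, here is a bit of illustrative arithmetic. The ratios below are made up for the example, not taken from NetApp’s chart; the point is simply that the second technique only operates on whatever the first one left behind.

    def stored_size_gb(raw_gb, compression_savings, extra_dedup_savings):
        """GB left on disk after compression, then dedup applied to the compressed remainder."""
        after_compression = raw_gb * (1 - compression_savings)
        return after_compression * (1 - extra_dedup_savings)

    raw = 1000  # GB of primary data (hypothetical)
    print(stored_size_gb(raw, 0.45, 0.00))  # 550.0 GB: compression alone
    print(stored_size_gb(raw, 0.45, 0.05))  # 522.5 GB: dedup on top saves only ~27 GB more
    print(stored_size_gb(raw, 0.00, 0.35))  # 650.0 GB: dedup-friendly data such as many identical VMs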
 
The challenge for NetApp customers will be getting their data into a configuration that supports compression due to the 64-bit aggregate requirement, lack of an easy and non-disruptive LUN migration feature (DataMotion appears to only support iSCSI and NFS and requires several additional licenses), and no way to convert an aggregate from 32-bit to 64-bit.  Once compression has been enabled, if there is truly no way to disable it, any resulting performance impact will be very difficult to rectify.
 
On the other hand, any EMC customer with current maintenance can upgrade their NS or CX4 array to newer versions of DART or FLARE, and compression can be enabled on any existing data after the fact.  If performance becomes an issue for a particular dataset once compressed, the data can be uncompressed later.  Both operations are completely non-disruptive and run in the background.  While block compression only works on LUNs in a virtual pool, as opposed to a traditional RAID group, enabling compression on a normal LUN will automatically migrate the LUN into a virtual pool, perform zero-page reclaim, and then compress the data; the entire process is completely non-disruptive to the application.  Oh, and compressed data can still be tiered with FASTVP across SSD, FC, and SATA disk and/or benefit from up to 2TB of FASTCache.
 
I admit that there is a place for deduplication as well as compression in reducing the footprint of customer data.  However, based on what I’ve seen in my career as an IT professional, and with my customers in my current role at EMC, there are more use cases for compression than there are for deduplication when it comes to primary data, whether SAN or NAS.  Either way, if I were using a new technology for the first time on a particular data set, whether compression or deduplication, I would definitely want a backout plan in case the drawbacks outweigh the benefits.

EMC Unified: Guaranteed Efficiency with Better Application Availability

Posted on by

(Warning: This is a long post…)

You have a critical application that you can’t afford to lose:

So you want to replicate your critical applications because they are, well, critical.   And you are looking at the top midrange storage vendors for a solution.  NetApp touts awesome efficiency, awesome snapshots, etc., while EMC is throwing considerable weight behind its 20% Efficiency Guarantee.  While EMC guarantees to be 20% more efficient in any unified storage solution, there is perhaps no better scenario than a replication solution to prove it.

I’m going to describe a real-world scenario using Microsoft Exchange as the example application and show why the EMC Unified platform requires less storage and less WAN bandwidth for replication, while maintaining the same or better application availability vs. a NetApp FAS solution.  The example will use a single Microsoft Exchange 2007 SP2 server with ten 100GB mail databases connected via Fibre Channel to the storage array.  A second storage array exists in a remote site connected via IP to the primary site, and a standby Exchange server is attached to that array.

Basic Assumptions:

  • 100GB per database, 1 database per storage group, 1 storage group per LUN, 130GB LUNs
  • 50GB Log LUNs, ensure enough space for extra log creation during maintenance, etc
  • 10% change rate per day average
  • Nightly backup truncates logs as required
  • Best Practices followed by all vendors
  • 1500 users (heavy user profile, 0.4 IOPS each), 10% of users leverage Blackberry (BES server = 4X IOPS per user)
  • Approximate IOPS requirement for Exchange: 780 IOPS for this server (see the worked calculation after this list)
  • EMC Solution: 2 x EMC Unified Storage systems with SnapView/SANCopy and Replication Manager
  • NetApp Solution: 2 x NetApp FAS Storage systems with SnapMirror and SnapManager for Exchange
  • RPO: 4 hours (remote site replication update frequency)
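
As a quick sanity check, here is the arithmetic behind that 780 IOPS figure as a small Python sketch. It is only a back-of-the-envelope estimate built from the assumptions above; real Exchange sizing should go through Microsoft’s and the storage vendor’s calculators.

    users = 1500
    iops_per_user = 0.4       # "heavy" user profile from the assumptions above
    bes_fraction = 0.10       # 10% of users on Blackberry
    bes_multiplier = 4        # BES roughly quadruples the per-user IOPS

    bes_users = int(users * bes_fraction)                  # 150 users
    regular_iops = (users - bes_users) * iops_per_user     # 1350 x 0.4 = 540
    bes_iops = bes_users * iops_per_user * bes_multiplier  # 150 x 1.6 = 240
    print(regular_iops + bes_iops)                         # 780 host IOPS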

Based on those assumptions we have 10 x 130GB DB LUNs and 10 x 50GB Log LUNs and we need approximately 780 host IOPS 50/50 read/write from the backend storage array.

Disk IOPS calculation: (50/50 read/write)

  • RAID10, 780 host IOPS translates to 1170 disk IOPS (r+w*2)
  • RAID5, 780 host IOPS translates to 1950 disk IOPS (r+w*4)
  • RAIDDP is essentially RAID6 so we have about 2730 disk IOPS (r + w*6)

Note: NetApp can coalesce writes into sequential stripes to improve write performance for RAID-DP, but that advantage drops significantly as the volumes fill up and free space becomes fragmented, which is very likely to happen after a few months or less of activity.

Assuming 15K Fibre Channel drives can deliver about 180 IOPS each at reasonable latencies for a database, we’d need the following (the sketch after this list shows the math):

  • RAID10, Database 6.5 disks (round up to 8), using 450GB 15K drives =  1.7TB usable (1 x 4+4)
  • RAID5, 10.8 disks for RAID5 (round up to 12), using 300GB 15K drives = 2.8TB usable (2 x 5+1)
  • RAID6/DP, 15.1 disks for RAID6 (round up to 16), using 300GB 15K drives = 3.9TB usable (1 x 14+2)
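
Here is the back-end math as a short sketch, under the same assumptions as above (50/50 read/write split, the usual RAID write penalties, and roughly 180 IOPS per 15K drive). It is illustrative only, not a substitute for a proper sizing exercise.

    host_iops = 780
    reads = writes = host_iops / 2          # 50/50 read/write split
    write_penalty = {"RAID10": 2, "RAID5": 4, "RAID6/DP": 6}
    iops_per_drive = 180.0

    for raid, penalty in write_penalty.items():
        disk_iops = reads + writes * penalty
        drives = disk_iops / iops_per_drive
        print(f"{raid}: {disk_iops:.0f} disk IOPS -> {drives:.1f} drives")
    # RAID10: 1170 -> 6.5 drives (rounded up to 8)
    # RAID5:  1950 -> 10.8 drives (rounded up to 12)
    # RAID6/DP: 2730 -> ~15.2 drives (rounded up to 16)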

Log writes are highly cacheable, so we generally need fewer disks; for both the RAID10 and RAID5 EMC options we’ll use a single RAID1 1+1 RAID group with 2 x 600GB 15K drives.  Since we can’t do RAID1 or RAID10 on NetApp, we’ll have to use at least 3 disks (1 data and 2 parity) for the 500GB worth of Log LUNs, but we’ll actually need more than that.

Picking a RAID Configuration and Sizing for snapshots:

For EMC, the RAID10 solution uses fewer disks and provides the most appropriate amount of disk space for LUNs vs. the RAID5 solution.  With the NetApp solution there really isn’t an alternative, so we’ll stick with the 16-disk RAID-DP config.  We have loads of free space, but we need some of that for snapshots, which we’ll see next.  We also need to allocate more space to the Log disks for those snapshots.

Since we expect about 10% change per day in the databases (about 10GB per database) we’ll double that to be safe and plan for 20GB of changes per day per LUN (DB and Log).

NetApp arrays store snapshot data in the same volume (FlexVol) as the application data/LUN so you need to size the FlexVol’s and Aggregates appropriately.  We need 200GB for the DB LUNs and 200GB for the Log LUNs to cover our daily change rate but we’re doubling that to 400GB each to cover our 2 day contingency.  In the case of the DB LUNs the aggregate has more than enough space for the 400GB of snapshot data we are planning for but we need to add 400GB to the Log aggregate as well so we need 4 x 600GB 15K drives to cover the Exchange logs and snapshot data.

EMC Unified arrays store snapshot data for all LUNs in a centralized location called the Reserve LUN Pool, or RLP.  The RLP actually consists of a number of LUNs that can be used and released as needed by snapshot operations occurring across the entire array.  The RLP LUNs can be created on any number of disks, using any RAID type, to handle various IO loads, and sizing an RLP is based on the total change rate of all simultaneously active snapshots across the array.  Since we need 400GB of space in the Reserve LUN Pool for one day of changes, we’ll again play it safe by doubling that to 800GB, which we’ll provide with 6 dedicated 300GB 15K drives in RAID10.
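
The sizing arithmetic above boils down to LUN count times planned daily change times a contingency factor. The sketch below just restates those numbers; it is not a vendor sizing tool, and the 2x safety factors are the ones chosen in the text.

    def snapshot_space_gb(lun_count, change_gb_per_lun_per_day, days=1):
        """Snapshot capacity needed for a set of LUNs over a number of days."""
        return lun_count * change_gb_per_lun_per_day * days

    planned_change = 10 * 2   # ~10GB/day observed per DB, doubled to 20GB to be safe

    one_day_db = snapshot_space_gb(10, planned_change)        # 200GB for the DB LUNs
    one_day_log = snapshot_space_gb(10, planned_change)       # 200GB for the Log LUNs
    reserve_lun_pool = (one_day_db + one_day_log) * 2         # 800GB RLP on the EMC array
    netapp_db_aggr = snapshot_space_gb(10, planned_change, days=2)   # 400GB in the DB aggregate
    netapp_log_aggr = snapshot_space_gb(10, planned_change, days=2)  # 400GB added to the Log aggregate
    print(reserve_lun_pool, netapp_db_aggr, netapp_log_aggr)  # 800 400 400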

At this point we have 20 disks on the NetApp array and 16 disks on the EMC array.  We have loads of free space in the primary database aggregate on the NetApp but we can’t use that free space because it’s sized for the IOPS workload we expect from the Exchange server.

In order to replicate this data to an alternate site, we’ll configure the appropriate tools.

EMC:

  1. Install Replication Manager on a server and deploy an agent to each Exchange server
  2. Configure SANCopy connectivity between the two arrays over the IP ports built-in to each array
  3. In Replication Manager, configure a job that quiesces Exchange, then uses SANCopy to incrementally update a copy of the database and log LUNs on the remote array, and schedule it to run every 4 hours using RM’s built-in scheduler.

NetApp:

  1. Install SnapManager for Exchange on each Exchange server
  2. Configure SnapMirror connectivity between the two arrays over the IP ports built-in to each array
  3. In SnapManager, configure a backup job that quiesces Exchange and takes a snapshot of the Exchange DBs and Logs, then starts a SnapMirror session to replicate the updated FlexVol (including the snapshot) to the remote array.  Configure a schedule in Windows Task Scheduler to run the backup job every 4 hours.

Both the EMC and NetApp solutions run on schedule, create remote copies, and everything runs fine, until...

Tuesday night, during the weekly maintenance window, the Exchange admins decide to migrate half of the users from DB1 to DB2 and DB3, and half of the users from DB4 to DB5 and DB6.  About 80GB of data is moved (roughly 25GB to each of the target DBs).  The transaction logs on DB1 and DB4 jump to almost 50GB, and to 35GB each on DB2, DB3, DB5, and DB6.

On the NetApp array, the 50GB log LUNs already have about 10GB of snapshot data stored, and as the migration happens, new snapshot data is tracked on all 6 of the affected DB and Log LUNs.  The 25GB of new data plus the 10GB of existing data exceeds the 20GB of free space in the FlexVol that contains each LUN, and guess what…  Exchange chokes because it can no longer write to the LUNs.

There are workarounds: first, you enable automatic volume expansion for the FlexVols, with automatic snapshot deletion as a secondary fallback.  In the above scenario, the 6 affected FlexVols autoextend to approximately 100GB each, equaling 300GB of snapshot data for those 6 LUNs and another 40GB for the remaining 4 LUNs.  That leaves only 60GB free in the aggregate for any additional snapshot data across all 10 LUNs.  Now SnapMirror struggles to update the 1200GB of new data (application data plus snapshot data) across the WAN link; as it falls behind, more data changes on the production LUNs, increasing the amount of snapshot data, and the aggregate runs out of space.  By default, SnapMirror snapshots are not included in the “automatically delete snapshots” option, so Exchange goes down.  You can set a flag to allow SnapMirror-owned snapshots to be automatically deleted, but then you have to resync the databases from scratch.  To prevent this problem from ever occurring, you need to size the aggregate to handle more than a 100% change rate, meaning more disks.

Consider how the EMC array handles this same scenario using SANCopy.  The same changes occur to the databases and approximately 600GB of data is changed across 12 LUNs (6 DB and 6 Log).  When the Replication Manager job starts, SANCopy takes a new snapshot of all of the blocks that just changed for purposes of the current update and begins to copy those changed blocks across the WAN.

EMC Advantages:

  • SANCopy/Inc does not retain snapshot data for changes as they occur, only while an update is in process, so the Reserve LUN Pool is actually empty before the update job starts.  If you want additional snapshots on top of the ones used for replication, that will increase the amount of data in the Reserve LUN Pool for tracking changes, but those snapshots are created on both arrays independently and the snapshot data is NOT replicated.  This nuance allows you to have different snapshot schedules in production vs. disaster recovery, for example.
  • Because SANCopy/Inc only replicates the blocks that have changed on the production LUNs, NOT the snapshot data, it copies only half of the data across the WAN vs SnapMirror which reduces the time out of sync.  This translates to lower WAN utilization AND a better RPO.
  • IF an update was occurring when the maintenance took place, the amount of data put in the Reserve LUN pool would be approximately 600GB (leaving 200GB free for more changed data).  More efficient use of the Snapshot pool and more flexibility.
  • IF the Reserve LUN Pool ran out of space, the SANCopy update would fail but the production LUNs ARE NEVER AFFECTED.  Higher availability for the critical application that you devoted time and money to replicate.
  • Less spinning disk on the EMC array vs. the NetApp.

EMC has several replication products available that each act differently.  I used SANCopy because, combined with Replication Manager, it provides similar functionality to NetApp SnapMirror and SnapManager.  MirrorView/Async has the same advantages as SANCopy/Incremental in these scenarios and can replicate Exchange, SQL, and other applications without any host involvement.

Higher application availability, lower WAN utilization, better RPO, and fewer spinning disks, without even leveraging advanced features for even better efficiency and performance.

Resiliency vs Redundancy: Using VPLEX for SQL HA

Posted on by

A little history on my philosophy around high-availability

Around the year 2000, when I was working in network operations for a large wireless telco, a very senior network architect explained to me the company’s philosophy on building high availability solutions into the network.  The phrase I remember from that conversation was “we don’t build redundant networks, we build resilient networks.”  The difference is that while redundant networks fail over to secondary paths to resume traffic, resilient networks don’t go down at all.  This concept has stuck with me ever since, and I tend to tackle high-availability problems of all kinds with this idea in mind.  It’s frankly been very difficult to build solutions that are resilient across the entire stack, mostly because infrastructure technology hasn’t quite gotten there yet.

Things may have changed…

I recently had a meeting with a customer to discuss local high availability for SQL.  This customer has a very large multi-node clustered SQL environment (hundreds of TBs of data, hundreds of databases, hundreds of instances, many clusters, many nodes per cluster) and has been testing SQL database mirroring as an alternative to traditional Windows Failover Clustering.  The meeting wound up focusing primarily on leveraging VPLEX as an alternative to SQL mirroring, and the reasons for that decision reminded me of the resiliency vs. redundancy discussion I had years ago.  A VPLEX solution potentially solves the same problem as DB mirroring, with less complexity and less risk.

VPLEX Local as a Resilient HA solution

One of the many features of VPLEX is its ability to mirror data across multiple storage arrays and present that mirror as a single LUN to the host.  For customers already running large multi-node MSCS clusters, the LUN appears just like any normal storage LUN, and Windows/SQL treat it normally.  There are several reasons VPLEX should be considered as an alternative to database mirroring (much of this applies to Exchange CCR as well).

VPLEX hardware is inherently resilient.  A VPLEX cluster is an N+1 cluster of loosely coupled nodes, cooperating with each other but not depending on each other.  Hosts can access any of the hosted data, through any of the ports, on any of the cluster nodes.  If a node fails for any reason, the remaining nodes continue serving IO for any data.  Except for a dead path on the host side (managed by PowerPath or MPIO), there is no failover process and no cache mirroring to worry about.  The potential performance impact of a failure is roughly 1 divided by the quantity of that component in the cluster; losing one of 8 directors, for example, costs about 1/8 of the front-end capacity (128 x 8Gb ports across 8 director nodes for a large VPLEX Local cluster).

In addition, because VPLEX utilizes a write-through cache, there is never any dirty cache data (data in cache that has not been committed to disk) in a VPLEX system.  A power outage or VPLEX hardware failure does not put data at risk.

Other Advantages of using VPLEX over SQL Database Mirroring

Improved Performance:

  • Compared with SQL Database mirroring, VPLEX mirroring has significantly less impact on transaction performance for writes and can improve transaction performance in some cases due to the large read cache in the VPLEX directors. (Note: I am comparing to DB Mirroring in Full-Safety mode since the customer’s requirement was a zero-data-loss solution.)

Non-Disruptive Storage Failover:

  • In the event of a storage failure, SQL Mirroring must perform a cluster node failover which takes a few seconds at best, possibly disrupting applications.  VPLEX provides completely non-disruptive failover when a storage failure occurs.  (A server hardware failure still triggers a node failover as it would in any other failover clustering scenario.)

Less Management Overhead:

  • From a management perspective, using VPLEX instead of SQL Database mirroring gives the SQL DBAs fewer SQL instances and fewer moving parts to manage on a daily basis.  The storage team just presents a mirrored LUN from VPLEX to the cluster and it’s business as usual for the DBAs.
  • VPLEX also allows the storage team to non-disruptively migrate data between storage arrays behind VPLEX to balance load, perform hardware refreshes, resolve capacity problems.  VPLEX performs the migration at the direction of the storage admins.

Reduced Risk:

  • Reducing management complexity also reduces risk.  With a high number of database instances and db mirrors involved in a large environment like this one, the chance of one of those mirrors having a problem, or being configured incorrectly, is increased.  DBAs can rely on VPLEX mirroring all of the data, 24x7x365, even when host maintenance is being performed.

Reduced Cost:

  • When compared with the SQL Database Mirroring solution, the VPLEX solution reduced the number of physical servers needed in this environment, reducing cost enough to more than offset the cost of VPLEX itself.  Combined with reductions in soft costs, like reduced DBA management overhead, VPLEX will actually save them quite a bit of money, and increased uptime during storage refresh and maintenance will increase revenues in this case as well.

A Distributed Future:

  • Next year, when a second datacenter is online nearby, the first VPLEX Local cluster can be connected to another VPLEX cluster in the new datacenter.  Then the SQL cluster nodes and data can be distributed across both datacenters, providing protection from entire datacenter outages, or solving space constraints with no changes to the application or servers, and no downtime.

I wonder how many other customers would like to build more resilient infrastructures?

If you combine a VPLEX solution with a true cluster file system and an active-active database engine (ie: Oracle RAC), you can eliminate the disruption caused by server hardware failures.  It’s just a matter of time now until the entire stack can be designed for true resiliency with very little management overhead.  I can’t wait to see what happens.

The following EMC White Paper has a lot of good information about using VPLEX in this same context:

Workload Resiliency with EMC VPLEX

While EMC users benefit from Replication Manager, NetApp users NEED SnapManager

Posted on by

This is a follow up to my recent post NetApp and EMC: Replication Management Tools Comparison, in which I discussed the differences between EMC Replication Manager and NetApp SnapManager.

————

As a former customer of both NetApp and EMC, and now as an employee of EMC, I’ve noticed a big difference in how NetApp and EMC market their replication management tools. When I was a customer, EMC talked about Replication Manager several times, and we purchased and deployed it. NetApp made SnapManager a very central part of their sales campaign, sometimes skipping any discussion of the underlying storage in favor of showing off SnapManager functionality. This is an extremely effective sales technique, and NetApp sales teams are so good at it that many people don’t even realize that other vendors have similar functionality (and, in my opinion, EMC’s is better).  One of the reasons for this difference in marketing strategy is that NetApp users NEED SnapManager, while EMC users do not always need Replication Manager.

The reason why is both simple and complex…

EMC storage arrays (Clariion, Symmetrix, RecoverPoint, Invista) all have one technology in common that NetApp Filers do not: Consistency Groups. A consistency group allows the storage system to take a snapshot of multiple LUNs simultaneously, so simultaneously in fact that all of the snapshots are at the exact same point in time, down to the individual write. This means that, without taking any applications offline and without any orchestration software, EMC storage arrays can create crash-consistent copies of nearly any kind of data at any time.

The EMC Whitepaper “EMC CLARiiON Database Storage Solutions: Oracle 10g/11g with CLARiiON Storage Replication Consistency” downloadable from EMC’s website has the following explanation of consistency groups in general…

“…Consistent replication operates on multiple LUNs as a set such that if the replication action fails for one member in the set, replication for all other members of the set are canceled or stopped.  Thus the contents of all replicated LUNs in the set are guaranteed to be identical point-in-time replicas of their source and dependent-write consistency is maintained…”

“…With consistent replication, the database does not have to be shut down or put into “hot backup mode.”  Replicates created with SnapView or MV/S (or MV/A, Timefinder, SRDF, Recoverpoint, etc) consistency operations, without first quiescing or halting the application, are restartable point-in-time replicas of the production data and guaranteed to be dependent-write consistent.”

Consistency is important for any application that is writing to multiple LUNs at the same time, such as SQL database and log volumes. SnapManager and Replication Manager actually prepare the application by quiescing the database during the snapshot creation process. This process creates “application-consistent” copies, which are technically better for recovery than “storage-consistent” copies (also known as crash-consistent copies).

So, while I will acknowledge that quiescing the database during a snapshot/replication operation provides the best possible recovery image, that may not be realistic in some scenarios.  The first issue is that the actual operation of quiescing, snapping, checking the image, then pushing an update to a remote storage array takes some time.  Depending on the size of the dataset, this operation can take from several minutes to several hours to complete.  If you have a Recovery Point Objective (RPO) of 5 minutes or less, using either of these tools is pretty much a non-starter.

Another issue is application support.  While EMC Replication Manager and NetApp SnapManager have very wide support for the most popular operating systems, filesystems, databases, and applications, they certainly don’t support every application.  A very simple example is a Novell Netware file server with an NSS pool/volume spanning multiple LUNs.  Neither NetApp nor EMC has support for Novell Netware in its replication management tools.  While you can certainly replicate all of the LUNs with NetApp SnapMirror, there is no consistency technology built in to keep the LUNs write-order consistent.  The secondary copy will appear completely corrupt to the Netware server if a recovery is attempted.  Through the use of consistency groups with MirrorView/Async, the replication of each LUN is tracked as a group and all of the LUNs are write-order consistent with each other, keeping the filesystem itself consistent.  You would need either array-level consistency technology or support for Netware in the replication management tool in order to replicate such a server.  Unfortunately, NetApp provides neither.

You may have complex applications that consist of Oracle and SQL databases, NTFS filesystems, and application servers running as VMs.  Using array-based consistency groups, you can replicate all of these components simultaneously and keep them all consistent with each other.  This way you won’t have transactions that normally affect two databases end up missing in one of the two after a recovery operation, even if those databases are different technologies (Oracle and MySQL, or PostgreSQL for example).

EMC Storage arrays provide consistency group technology for Snapshots and Replication in Clariion and Symmetrix storage arrays.  In fact, with Symmetrix, consistency groups can span multiple arrays without any host software.  By comparison, NetApp Filers do not have consistency group technology in the array.  Snapshots are taken (for local replicas and for SnapMirror) at the FlexVolume level.  Two FlexVolumes cannot be snapped consistently with each other without SnapManager.

There are a couple of workarounds for NetApp users: you can snapshot an aggregate, but that is not recommended by NetApp for most customers, or you can put multiple LUNs in the same FlexVol, but that still limits you to 16TB of data including snapshot reserve space, and both options violate the database design best practice of keeping data and logs on separate spindles for recovery.  Even with these workarounds, you cannot gain LUN consistency across the two controllers in an HA Filer pair, something the CLARiiON does natively and which can help with load balancing IO across the storage processors.

In general, I recommend that EMC customers use EMC Replication Manager and NetApp customers use SnapManager for the applications that are supported, and for most scenarios.  But when RPOs are short, or the environment falls outside the support matrix for those tools, consistency groups become the best or only option.

Incidentally, with EMC RecoverPoint, you get the best of both worlds.  CDP or near-CDP replication of data using consistency groups for zero or near-zero RPOs plus application-consistent bookmarks made anytime the database is quiesced.  Recovery is done from the up-to-the-second version of the data, but if that data is not good for any reason, you can roll back to another point in time, including a point-in-time when the database was quiesced (a bookmark).

So, while EMC has, in Replication Manager, an equivalent offering to NetApp’s SnapManager, EMC customers are not required to use it, and in some cases they can achieve better results using array-based consistency technologies.

NetApp and EMC: Replication Management Tools Comparison

Posted on by

I started this post before I started working for EMC and got sidetracked with other topics.  Recent discussions I’ve had with people have got me thinking more about orchestration of data protection, replication, and disaster recovery, so it was time to finish this one up…

———————————–

Before coming to work for EMC, I was working on a project to leverage NetApp and EMC storage simultaneously for redundancy.  I had a chance to put various tools from EMC and NetApp into production and have been able to make some observations about the differences.  This is a follow-up to my previous NetApp and EMC posts…

NetApp and EMC: Real world comparisons
NetApp and EMC: Startup and First Impressions
NetApp and EMC: ESX and Exchange 2007 CCR
NetApp and EMC: Exchange 2007 Replication

Specifically this post is a comparison between NetApp SnapManager 5.x and EMC Replication Manager 5.x.  First, here’s a quick background on both tools based on my personal experience using them.

Description

EMC Replication Manager (RM) is a single application that runs on a dedicated “Replication Manager Server.”  RM agents are deployed to the hosts of applications that will be replicated.  RM supports local and remote replication features in EMC’s Clariion storage array, Celerra Unified NAS, Symmetrix DMX/V-Max, Invista, and RecoverPoint products.  With a single interface, Replication Manager lets you schedule, modify, and monitor snapshot, clone, and replication jobs for Exchange, SQL, Oracle, Sharepoint, VMWare, Hyper-V, etc.  RM supports Role-Based authentication so application owners can have access to jobs for their own applications for monitoring and managing replication.  RM can manage jobs across all of the supported applications, array types, and replication technologies simultaneously.  RM is licensed by storage array type and host count. No specific license is required to support the various applications.

NetApp SnapManager is actually a series of applications designed for each application that NetApp supports.  There are versions of SnapManager for Exchange, SQL, Sharepoint, SAP, Oracle, VMWare, and Hyper-V.  The SnapManager application is installed on each host of an application that will be replicated, and jobs are scheduled on each specific host using Windows Task Scheduler.  Each version of SnapManager is licensed by application and host count.  I believe you can also license SnapManager per-array instead of per-host which could make financial sense if you have lots of hosts.

Commonality

EMC Replication Manager and NetApp SnapManager tackle the same customer problem: providing guaranteed recoverability of an application, in the primary or a secondary datacenter, using array-based replication technologies.  Both products leverage array-based snapshot and replication technology while layering on application-consistency intelligence to perform their duties.  In general, they automate local and remote protection of data.  Both applications have extensive CLI support for those that want it.

Differences

  • Deployment
    • EMC RM – Replication Manager is a client-server application installed on a control server.  Agents are deployed to the protected servers.
    • NetApp SM – SnapManager is several applications that are installed directly on the servers that host applications being protected.
  • Job Management
    • EMC RM – All job creation, management, and monitoring is done from the central GUI. Replication Manager has a Java-based GUI.
    • NetApp SM – Job creation and monitoring is done via the SnapManager GUI on the server being protected.  SnapManager utilizes an MMC-based GUI.
  • Job Scheduling
    • EMC RM – Replication Manager has a central scheduler built-in to the product that runs on the RM Server.  Jobs are initiated and controlled by the RM Server, the agent on the protected server performs necessary tasks as required.
    • NetApp SM – SnapManager jobs are scheduled with Windows Task Scheduler after creation.  The SnapManager GUI creates the initial scheduled task when a job is created through the wizard.  Modifications are made by editing the scheduled task in Windows task scheduler.

So while the tools essentially perform the same function, you can see that there are clear architectural differences, and that’s where the rubber meets the road.  Being a centrally managed client-server application, EMC Replication Manager has advantages for many customers.

Simple Comparison Example: Exchange 2007 CCR cluster
(snapshot and replicate one of the two copies of Exchange data)

With NetApp SnapManager, the application is installed on both cluster nodes, then an administrator must log on to the console on the node that hosts the copy you want to replicate, and create two jobs which run on the same schedule.  Job A is configured to run when the node is the active node, Job B is configured to run when the node is passive.  Due to some of the differences in the settings, I was unable to configure a single job that ran successfully regardless of whether the node was active or passive.  If you want to modify the settings, you either have to edit the command line options in the Scheduled Task, or create a new job from scratch and delete the old one.

With EMC Replication Manager, you deploy the agent to both cluster nodes, then in the RM GUI, create a job against the cluster virtual name, not the individual node.  You define which server you want the job to run on in the cluster, and whether the job should run when the node is passive, active, or both.  All logs, monitoring, and scheduling is done in the same RM GUI, even if you have 50 Exchange clusters, or SQL and Oracle for that matter.  Modifying the job is done by right-clicking on the job and editing the properties.  Modifying the schedule is done in the same way.

So as the number of servers and clusters increases in your environment, having a central UI to manage and monitor all jobs across the enterprise really helps.  But here’s where having a centrally managed application really shines…

But what if it gets complicated?

Let’s say you have a multi-tier application like IBM FileNet, EMC Documentum, or OpenText, and you need to replicate multiple servers, multiple databases, and multiple file systems that are all related to that single application.  Not only does EMC Replication Manager support SQL and filesystems in the same GUI, but you can also tie the jobs together and make them dependent on each other for both failure reporting and scheduling.  For example, you can snapshot a database and a filesystem, then replicate both of them without worrying about how long the first job takes to complete.  Jobs can start other jobs on completely independent systems as necessary.

Without this job dependency functionality, you’d generally have to create scheduled tasks on each server and have dependent jobs start with a delay long enough to allow the first job to complete, yet as short as possible to prevent the two parts of the application from getting too far out of sync.  Sometimes the first job takes longer than usual, causing subsequent jobs to complete incorrectly.  This is where Replication Manager shows its muscle, with its ability to orchestrate complex data protection strategies across the entire enterprise, with your choice of protection technologies (CDP, Snapshot, Clone, Bulk Copy, Async, Sync), from a single central user interface.

You don’t need a Backup solution!

Posted on by

Well, not exactly.  What you really need is a restore solution!

I was discussing this with a colleague recently as we compared difficulties multiple customers are having with backups in general.  My colleague was relating a discussion he had with his customer where he told them, “stop thinking about how to design a backup solution, and start thinking about how to design a restore solution!”

Most of our customers are in the same boat: they work really hard to make sure their data is backed up within some window of time and sent offsite as soon as possible to ensure protection in the event of a catastrophic failure.  What I noticed in my previous positions in IT, and even more so now as a technical consultant with EMC, is that most people don’t really think about how that data is going to get restored when it is needed.  There are a few reasons for this:

  • Backing up data is the prerequisite for a restore; IT professionals need to get backups done, regardless of whether they need to restore the data.  It’s difficult to plan for theoretical needs and restore is still viewed, incorrectly, as theoretical.
  • Backup throughput and duration is easily measured on a daily basis, restores occur much more rarely and are not normally reported on.
  • Traditional backup has been done largely the same way for a long time and most customers follow the same model of nightly backups (weekly full, daily incremental) to disk and/or tape, shipping tape offsite to Iron Mountain or similar.

I think storage vendors, EMC and NetApp particularly, are very good at pointing out the distinction between a backup solution and a restore solution, where backup vendors are not quite as good at this.  So what is the difference?

When designing a backup solution the following factors are commonly considered:

  • Size of Protected Data – How much data do I have to protect with backup (usually GB or TB)
  • Backup Window – How much time do I have each night to complete the backups (in hours)
  • Backup Throughput – How fast can I move the data from its normal location to the backup target
  • Applications – What special applications do I have to integrate with (Exchange, Oracle, VMWare)
  • Retention Policy – How long do I have to hang on to the backups for policy or legal purposes
  • Offsite storage – How do I get the data stored at some other location in case of fire or other disaster

If you look at it from a restore prospective, you might think about the following:

  • How long can I afford to be down after a failure?  Recovery Time Objective (RTO): This will determine the required restore speed.  If all backups are stored offsite, the time to recall a tape or copy data across the WAN affects this as well.
  • How much data can I afford to lose if I have to restore? Recovery Point Objective (RPO):  This will determine how often the backup must occur, and in many cases this is less than 24 hours.
  • Where do I need to restore the application? This will help in determining where to send the data offsite.

Answer these questions first and you may find that a traditional backup solution is not going to fulfill your requirements.  You may need to look at other technologies, like snapshots, clones, replication, CDP, etc.  If a backup takes 8 hours, the restore of that data will most likely take at least 8 hours, if not closer to 16.  If you are talking about a highly transactional database, hosting customer-facing web sites and processing millions of dollars per hour, 8 hours of downtime for a restore is going to cost you tens or hundreds of millions of dollars in lost revenue.
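
As a rough illustration of the point, restore time is basically data size divided by the slowest effective throughput in the restore path. The numbers below are hypothetical, just to show the scale of the problem for a 20TB database like the ones described next.

    def restore_hours(data_tb, throughput_mb_per_s):
        """Rough restore duration: size in MB divided by sustained throughput."""
        data_mb = data_tb * 1024 * 1024
        return data_mb / throughput_mb_per_s / 3600

    print(round(restore_hours(20, 100), 1))   # ~58 hours to pull 20TB over ~100MB/s (roughly 1GbE)
    print(round(restore_hours(20, 1000), 1))  # ~5.8 hours even at ~1GB/s, before any application recovery steps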

Two of my customers have database instances hosted on EMC storage, for example, which are in the 20TB size range.  They’ve each architected a backup solution that can get that 20TB database backed up within their backup window.  The problem is, once that backup completes, they still have to offsite the backup and replicate it to their DR site across a relatively small WAN link.  They both use compressed database dumps for backup because, from the DBA’s perspective, dumps are the easiest type of backup to restore from, and the compression helps get 20TB of data pushed across 1Gb Ethernet connections to the backup server.  One of the customers is actually backing up all of their data to DataDomain deduplication appliances already; the other is planning to deploy DataDomain.  The problem in both cases is that, if you pre-compress the backup data, you break deduplication, and you get no benefit from the DataDomain appliance vs. traditional disk.  Turning off compression in the dump can’t be done because the backup would take longer than the backup window allows.  The answer here is to step back, think about the problem you are trying to solve (restoring data as quickly as possible in the event of failure), and design for that problem.

How might these customers leverage what they already have, while designing a restore solution to meet their needs?

Since they are already using EMC storage, the first step would be to start taking snapshots and/or clones of the database.  These snapshots can be used for multiple purposes…

  • In the event of database corruption, or other host/filesystem/application level problem, the production volume can be reverted to a snapshot in a matter of minutes regardless of the size of the database (better RTO).  Snapshots can be taken many times a day to reduce the amount of data loss incurred in the event of a restore (better RPO).
  • A snapshot copy of the database can be mounted to a backup server directly and backed up directly to tape or backup disk.  This eliminates the requirement to perform database dumps at all as well as any network bottleneck between the database server and backup server.  Since there is no dump process, and no requirement to pre-compress the data, de-duplication (via DataDomain) can be employed most efficiently.  Using a small 10gbps private network between the backup media servers and DataDomain appliances, in conjunction with DD-BOOST, throughput can be 2.5X faster than with CIFS, NFS, or VTL to the same DataDomain appliance.  And with de-duplication being leveraged, retention can be very long since each day’s backup only adds a small amount of new data to the DataDomain.
  • Now that we’ve improved local restore RTO/RPO, eliminated the backup window entirely for the database server, and decreased the amount of disk required for backup retention, we can replicate the backup to another DataDomain appliance at the DR site.  Since we are taking full advantage of de-duplication now, the replication bandwidth required is greatly reduced and we can offsite the backup data in a much shorter period of time.
  • Next, we give the DBAs back the ability to restore databases easily, and at will, by leveraging EMC Replication Manager.  RM manages the snapshot schedules, mounting of snaps to the backup server, and initiation of backup jobs from the snapshot, all in a single GUI that storage admins and DBAs can access simultaneously.

So we leveraged the backup application they already own, the DataDomain appliances they already own, and the storage arrays they already own, built a small high-bandwidth backup network, and layered on some additional functionality to drastically improve their ability to restore critical data.  The very next time they have a data integrity problem that requires a restore, these customers will save literally millions of dollars due to their ability to restore in minutes vs. hours.

If RPOs of a few hours are not acceptable, then a Continuous Data Protection (CDP) solution could be added to this environment.  EMC RecoverPoint CDP can journal all database activity to be used to restore to any point in time, bringing data loss (RPO) to zero or near-zero, something no amount of snapshots can provide, and keeping restore time (RTO) within minutes (like snapshots).  Further, the journaled copy of the database can be stored on a different storage array, providing complete protection for the entire hardware/software stack.  RecoverPoint CDP can be combined with Continuous Remote Replication (CRR) to replicate the journaled data to the DR site and provide near-zero RPO and extremely low RTO in a DR/BC scenario.  Backups could be transitioned to the DR site leveraging the RecoverPoint CRR copies to reduce or eliminate the need to replicate backup data.  EMC Replication Manager manages RecoverPoint jobs in the same easy-to-use GUI as snapshot and clone jobs.

There are a whole host of options available from EMC (and other storage vendors) to protect AND restore data in ways that traditional backup applications cannot match.  This does not mean that backup software is not also needed, as it usually ends up being a combined solution.

The key to architecting a restore solution is to start thinking about what would happen if you had to restore data, how that impacts the business and the bottom line, and then architect a solution that addresses the business’ need to run uninterrupted, rather than a solution that is focused on getting backups done in some arbitrary daily/nightly window.

NetApp and EMC: Exchange 2007 Replication

Posted on by

Exchange Replication

Building on the redundant storage project, we also wanted to replicate Exchange to a remote datacenter for disaster recovery purposes.  We’ve been using EMC CLARiiON MirrorView/A and Replication Manager for various applications up to now and decided we’d use NetApp/SnapMirror for Exchange to leverage the additional hardware as well as a way to evaluate NetApp’s replication functionality vs EMC’s.

On EMC Clariion storage, there are a couple choices for replicating applications like Exchange.
1.) Use MirrorView/Async with Consistency Groups to replicate Exchange databases in a crash-consistent state.
2.) Use EMC Replication Manager with Snapview snapshots and SANCopy/Incremental to update the remote site copy.

Similar to EMC’s Replication Manager, NetApp has SnapManager for various applications, which coordinates snapshots, and replica updates on a NetApp filer.

Whether using EMC RM or NetApp SM, software must be installed on all nodes in the Exchange cluster to quiesce the databases and initiate updates.  The advantage of consistency groups with MirrorView is that no software needs to be installed on the host; all work is performed within the storage array.  The advantage of RM and SM/E is that database consistency is verified on each update, and the software can coordinate restoring data to the same or alternate servers, tasks that must be done manually when using MirrorView.

NetApp doesn’t support consistent snapshots across multiple volumes, so the only option on a filer is to use SnapManager for Exchange to coordinate the snapshots and SnapMirror updates.

Our first attempt at configuring SnapManager for Exchange actually failed when we ran into a compatibility issue with SnapDrive.  SnapManager depends on SnapDrive to map LUNs between the host and the filer and to communicate with the filer to create snapshots, etc.  We’d discussed our environment with NetApp and IBM ahead of time, specifically that we have Exchange CCR running on VMWare with FibreChannel LUNs, and everyone agreed that SnapDrive supports VMWare, Exchange, Microsoft Clustering, and VMWare Raw Devices.  It turns out that SnapDrive 6 DOES support all of this, just not all at the same time.  Specifically, MSCS clustering is not supported with FC Raw Devices on VMWare.  In comparison, EMC’s Replication Manager has supported this configuration for quite a while.  After further discussion, NetApp confirmed that our environment was not supported in the current version of SnapDrive (6.0.2) and that SnapDrive 6.2, which was still in beta, would resolve the issue.

Fast forward a couple of months: SnapDrive 6.2 has been released, it does indeed support our environment, and we’ve finally installed and configured SnapDrive and SnapManager.  We’ve dedicated the EMC side of the Exchange environment to the active nodes and the IBM side to the passive nodes.  SnapManager snapshots the passive node databases, mounts them to run database verification, and then updates the remote mirror using SnapMirror.
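
For anyone unfamiliar with the flow, this is roughly the ordering the tooling enforces, sketched in Python.  The `filer`, `mirror`, and `verify` objects are hypothetical stand-ins, not the SnapManager or SnapDrive API; the point is simply that the mirror update only happens after verification succeeds.

```python
import logging

log = logging.getLogger("exchange-replication")

def protect_database(db_name, volume, filer, mirror, verify):
    """Snapshot the passive copy, mount and verify it, and only update the
    remote mirror if verification passes. `filer`, `mirror`, and `verify`
    are placeholder interfaces for whatever tooling actually does the work."""
    snap = filer.create_snapshot(volume)            # consistent snap of the passive DB copy
    try:
        mount_point = filer.mount_snapshot(snap)    # present the snap to a verification host
        if not verify(mount_point):                 # ESEUTIL-style checksum pass
            raise RuntimeError(f"verification failed for {db_name}; mirror not updated")
        mirror.update(volume, base_snapshot=snap)   # incremental SnapMirror-style update
        log.info("replicated %s from snapshot %s", db_name, snap)
    finally:
        filer.unmount_snapshot(snap)                # always clean up the mounted copy
```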

While SnapManager does exactly what we need it to do, my experience with it hasn’t been great so far…  First, SnapManager relies on Windows Task Scheduler to run scheduled jobs, which has been causing issues.  The job will run on its schedule for a day and then stop, after which the task must be edited before it will run again.  This happens in the lab and on both of our production Exchange clusters.  I also found a blog post from someone else describing this same issue.

The other issue right now is that database verification takes a long time due to the slow speed of ESEUTIL itself.  A single update on one node takes about 4 hours (for about 1TB of Exchange data), so we haven’t been able to achieve our goal of a 2-hour replication RPO.  IBM will be onsite next week to review our status and discuss any options.  An update will follow once we find a solution to both issues.  In the meantime I will post a comparison of the EMC and NetApp replication tools soon.
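
Some quick math on why the 2-hour RPO is out of reach while verification is in the picture, using the numbers above:

```python
# Why a 2-hour RPO isn't achievable while verification alone takes ~4 hours.
data_mb = 1024 * 1024        # ~1TB of Exchange databases on the passive node
verify_hours = 4.0           # observed duration of a single SnapManager update

print(f"effective verification rate: ~{data_mb / (verify_hours * 3600):.0f} MB/s")  # ~73 MB/s
print(f"best possible RPO: >= {verify_hours} hours")  # the update interval can't beat the pass itself
```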

NetApp and EMC: ESX and Exchange 2007 CCR

Posted on by

The first application we tackled after deploying the NetApp system was Exchange 2007.  We had recently deployed Exchange 2007 running in CCR clusters on VMWare ESX.  Since each node of a CCR cluster has its own copy of the database, we wanted to put one node from each cluster onto the NetApp and leave the other nodes on the CLARiiON.  This environment is entirely FibreChannel (no iSCSI deployed), so the Exchange servers use VMWare Raw Devices for the database and log disks.  This poses a problem that we didn’t discover until later, which I will discuss in a future post about replicating Exchange with NetApp.

Re-Architecting the environment to fit the storage

The first thing we discovered was that neither IBM/NetApp nor EMC would support the same host HBAs being zoned to multiple brands of storage, so we had to split the ESX cluster into two clusters, one on each storage platform.  Luckily the Exchange environment was isolated on its own six-node cluster, so it was easy to split everything in half.

Next, we learned that due to NetApp’s updated active/active mode with proxy paths in ONTap 7.3, VMWare ESX 3.x selects paths at random when rescanning HBAs and will pick non-optimized paths to the LUNs.  This still works, but it isn’t ideal: it increases IO latency and causes the filer to periodically send autosupport emails warning of the problem.  Installing the NetApp Host Utilities for ESX on the ESX hosts lets you run a script that assigns persistent paths evenly across the HBAs.  The script works as advertised, but as far as I can tell you have to re-run it each time you add a new LUN to the ESX server.  It would be much better if it were more automated.
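
For what it’s worth, here’s my mental model of what that path-assignment step has to accomplish (a conceptual sketch only, not NetApp’s actual script): pin each LUN to a path on its owning controller and spread the LUNs evenly across the HBAs so one adapter doesn’t carry all the traffic.

```python
# Conceptual sketch of preferred-path balancing: every LUN gets an optimized
# path (on its owning controller), with LUNs round-robined across HBAs.
from itertools import cycle
from collections import defaultdict

def assign_preferred_paths(luns, paths):
    """luns: [(lun_id, owning_controller)], paths: [(hba, controller)]."""
    per_controller = defaultdict(list)
    for hba, controller in paths:
        per_controller[controller].append(hba)

    assignments, rr = {}, {}
    for lun_id, owner in luns:
        hbas = per_controller[owner]                  # only optimized paths
        rr.setdefault(owner, cycle(hbas))             # round-robin per controller
        assignments[lun_id] = (next(rr[owner]), owner)
    return assignments

luns = [("lun0", "filerA"), ("lun1", "filerA"), ("lun2", "filerB")]
paths = [("vmhba1", "filerA"), ("vmhba2", "filerA"),
         ("vmhba1", "filerB"), ("vmhba2", "filerB")]
print(assign_preferred_paths(luns, paths))
```

The real script presumably also has to persist these preferences so they survive a rescan, which is what makes having to re-run it after every new LUN so annoying.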

If you are running ESX 4.0 the scenario changes, since NetApp ONTap 7.3+, CLARiiON FLARE 26+, and ESX 4 all support ALUA, making this problem all but disappear and improving fabric resiliency.  Unfortunately for us, ESX 4 is still a bit new and hasn’t been rolled out into production yet.  NetApp has also released tools for vCenter 4.0 that let you do the path assignment and other tasks from within vCenter rather than at the command line.  EMC now has PowerPath available for ESX 4.0 as well, which will not only manage paths but also load balance across all of them for increased performance and lower latency.

VirtualStorageGuy has already blogged about the NetApp/EMC/vSphere plug-ins, and there is even a PowerPoint available.

Finally, during the sales process NetApp pushed their de-duplication feature (A-SIS) quite a bit and stressed how much disk space we could save in a VMWare environment.  During deployment we were informed that if your VMs (VMDKs and VMFS) are not properly partition-aligned, de-duplication won’t work well, or at all.  Since this environment has several hundred VMs built over several years by many people, and aligning the system (C:) drive of a Windows VM is difficult, the benefit would be minimal for us.  Luckily NetApp provides tools that can scan and align VMDKs without repartitioning the disks, though we have not tested them yet.  Partition alignment is a best practice for ANY SAN storage system, so we can’t fault NetApp for this; it’s just a fact of life.
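
The alignment check itself is simple: a partition is aligned when its starting byte offset is a multiple of the array’s block size (4KB in WAFL’s case).  A quick sketch:

```python
# Minimal alignment check. The legacy Windows offset of 63 sectors is the
# classic misaligned case; newer installs start at 1MiB and are fine.
def is_aligned(start_offset_bytes, block_size=4096):
    return start_offset_bytes % block_size == 0

legacy_offset = 63 * 512        # 32,256 bytes: old Windows partitioning default
modern_offset = 1024 * 1024     # 1MiB: aligned template / newer Windows default

print(is_aligned(legacy_offset))   # False -> guest I/O straddles array blocks
print(is_aligned(modern_offset))   # True
```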

But is it REALLY Redundant?

Even with two storage systems and independent VMWare clusters, each hosting half of the Exchange cluster environment, a problem with either array could still take down an entire Exchange cluster.  This is due to the File Share Witness (FSW) component used in a Majority Node Set (MNS) cluster like Exchange CCR.  The purpose of the FSW in an MNS cluster is to prevent a condition known as split-brain.  Since an MNS cluster does not have a quorum disk, it relies entirely on network communication between the nodes to determine cluster status and decide which nodes should become active.  If the two nodes lose communication with each other, each node checks for the FSW; if the FSW is still available, the node assumes the other cluster node is down and proceeds to bring cluster resources online (if they weren’t already).  Without the FSW, both nodes could go active at the same time, risking inconsistent data.  This is the split-brain condition.

Typically, each cluster has a single FSW on a separate server (the CAS servers in our case).  With the redundant storage model we moved to, the FSW became a single point of failure.  If we put the FSW on EMC storage along with NodeA, and NodeB on the IBM/NetApp storage, a problem with the EMC array could take down both the cluster node AND the FSW at the same time.  The surviving cluster node on the IBM/NetApp array would go down, or stay down, to prevent split-brain since the FSW was not available.  Moving the FSW to the IBM/NetApp array presents the same problem on the opposite side of the cluster.  Incidentally, we proved this in lab testing to be sure.  The solution is to move the FSW off of BOTH arrays, to either a dedicated physical server with internal disk or a third storage array if you have one.  There was a second EMC array in production, so we moved the FSW there.  In the new configuration, a complete outage of any single storage array will not take down the Exchange environment.
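
A toy model of the majority vote makes the placement issue obvious.  With three voters (the two nodes plus the FSW), a node only stays active if it can still reach a majority after an array failure:

```python
# Toy majority-node-set model: a node stays (or comes) active only if it can
# still reach a majority of the three voters: itself, its partner, and the FSW.
def cluster_survives(failed_array, node_array, partner_array, fsw_array):
    """True if the node on node_array can form a majority after failed_array dies."""
    if node_array == failed_array:
        return False                            # this node went down with the array
    votes = 1                                   # the node's own vote
    votes += partner_array != failed_array      # partner node still reachable?
    votes += fsw_array != failed_array          # witness share still reachable?
    return votes >= 2                           # majority of three

for fsw in ("EMC", "NetApp", "third-array"):
    for failed in ("EMC", "NetApp"):
        surviving = "NetApp" if failed == "EMC" else "EMC"
        up = cluster_survives(failed, surviving, failed, fsw)
        print(f"FSW on {fsw:11} | {failed} array fails -> Exchange stays up: {up}")
```

With the FSW on either array, the “wrong” array failure leaves the surviving node with only its own vote, which matches exactly what we saw in the lab; only the third-array (or standalone server) placement survives both cases.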

Crude diagram of the storage redundant Exchange CCR cluster

So far this new three-way split environment is working well, and Exchange performance on both the EMC and NetApp arrays is fine.  Using the same number of disks, the NetApp array yields about twice as much usable space as the EMC due to RAID-DP vs. RAID-10, while overall performance is similar (rough math below).  In theory that leaves more room for the Exchange databases to grow, but in practice that is not always the case.  My next update will cover Exchange replication using SnapManager and SnapMirror, and how it has effectively negated the remaining free space in the NetApp aggregate.
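
For the curious, the rough math behind the usable-space difference looks something like this.  The disk counts are hypothetical, and this ignores right-sizing, spares, and aggregate/WAFL reserves, which is why real-world results vary:

```python
# Raw usable-capacity comparison for the same spindle count.
disks, disk_gb = 28, 450          # hypothetical shelf: 28 x 450GB spindles

raid10_usable = disks / 2 * disk_gb                 # mirrored pairs: 50% efficiency
raid_dp_usable = (disks - 2 * 2) * disk_gb          # e.g. two 14-disk RAID-DP groups,
                                                    # each giving up 2 parity drives

print(f"RAID-10 : {raid10_usable:6.0f} GB usable")  # 6300 GB
print(f"RAID-DP : {raid_dp_usable:6.0f} GB usable") # 10800 GB, ~1.7x
```

Larger RAID-DP group sizes push that ratio closer to the roughly 2:1 difference we’re seeing in practice.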