
Does EMC FASTCache work with Exchange?


Short Answer: Yes!

In my dealings with customers, I've been requesting performance data from their storage systems whenever I can to see how different applications and environments react to new features. Today I'm going to give you some more real-world data, straight from a customer's production EMC NS480.

I've pulled various stats out of Analyzer for this customer's Exchange server, which has 3 mail databases totaling about 1TB of mail stored on the NS480 via Fibre Channel. Since this customer is not extremely large (similar to most of our customers), they are using this NS480 for pretty much everything from VMware, SQL, and Exchange, to NAS, web/app content, and Business Intelligence systems. There is about 30TB of block data and another 100TB of NAS data. FASTCache is enabled for all LUNs and Pools with just 183GB of usable FASTCache space (4 x 100GB SSDs). So in this environment, with a modest amount of FASTCache and a very mixed workload, how does Exchange fare?
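(A quick aside on that 183GB figure: FASTCache mirrors the SSDs in RAID 1 pairs, so usable capacity is roughly half of raw, less some formatting overhead. Here's a back-of-the-envelope sketch in Python; the overhead percentage is my own assumption, picked to land near the observed number.)

# Back-of-the-envelope check on usable FASTCache capacity.
# Assumptions: 4 x 100GB SSDs as mirrored (RAID 1) pairs, plus ~8.5%
# formatting/metadata overhead (assumed, not a vendor-published figure).
RAW_SSD_GB = 100
NUM_SSDS = 4
RAID1_EFFICIENCY = 0.5        # mirrored pairs: half of raw is usable
FORMAT_OVERHEAD = 0.085       # assumed overhead

usable_gb = RAW_SSD_GB * NUM_SSDS * RAID1_EFFICIENCY * (1 - FORMAT_OVERHEAD)
print(f"Approximate usable FASTCache: {usable_gb:.0f} GB")  # ~183 GB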

Let's first take a look at the Exchange workload itself for a 24-hour period. (Note: There were no reads from the Exchange log LUNs to speak of, so I left that out of this analysis.)

Total Read IOPS for the 3 databases: (the largest peak is a result of database maintenance jobs and the smaller peaks are due to backup jobs) It's tough to see here due to the maintenance and backup peaks, but production IO during the work day is about 200-400 IOPS. By the way, a source-deduplicating, incremental-forever backup technology such as Avamar could drastically reduce the IO load and duration of the nightly backup.

Total Write IOPS for the 3 databases: Obviously, more changes are occurring in the databases during the work day.

Total Write IOPS for the 3 Log files: Log data is typically cached easily in SP Cache, so FASTCache isn't strictly needed here, but I'm including it to show whether there is any value in using FASTCache with Exchange logs.

Now let’s look at the FASTCache hit ratios for this same set of data: (average of all 3 DBs)

First, the Read Activity: Here you can see that aside from the maintenance and backup jobs, FASTCache is servicing 70-90% of the read IOPS. Keep in mind that a FASTCache miss could still be a cache hit if the data is in SP Cache. What's interesting is that the nightly maintenance job appears to be pushing the highest load.

And the Write Activity: The beauty of EMC's FASTCache implementation is that it's a read/write cache, so the benefit extends beyond just read IO. Here you can see that FASTCache is servicing 60-80% of the writes for these Exchange databases. That's a huge load off the backend disks.
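To put that in perspective, here's a rough sketch of what a write hit ratio in that range means for the spindles. The RAID-10 write penalty of 2 and the 300 host write IOPS are illustrative assumptions, not measurements from this array, and FASTCache does eventually destage to disk, just at a gentler, coalesced rate.

# Illustrative backend write load with and without FASTCache.
# Assumptions: RAID-10 database LUNs (write penalty of 2) and 300 host
# write IOPS; 0.70 is the midpoint of the 60-80% hit range above.
# Ignores FASTCache's eventual destaging, which is coalesced.
host_write_iops = 300
raid10_write_penalty = 2      # each host write costs two disk writes
fastcache_write_hit = 0.70

disk_writes_without = host_write_iops * raid10_write_penalty
disk_writes_with = host_write_iops * (1 - fastcache_write_hit) * raid10_write_penalty
print(f"Backend disk writes without FASTCache: {disk_writes_without:.0f} IOPS")  # 600
print(f"Backend disk writes with FASTCache:    {disk_writes_with:.0f} IOPS")     # 180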

And the Log Writes: Since log writes are usually not a performance problem, I would say FASTCache is not necessary here, and the 30% average hit ratio is not great. If you wanted to spend the time to tune FASTCache a bit, you might consider disabling it for the log LUNs to devote the FASTCache capacity to more cache-friendly workloads.

All in all, you can see that for the database data, FASTCache is servicing a significant portion of the user-generated workload, reducing the backend disk load and improving overall performance.

Hopefully this gives you a sense of what FASTCache could do for your Exchange environment, reducing backend disk workload for reads AND writes. I must reiterate: since an SP Cache hit is shown as a FASTCache miss, an 80% FASTCache hit ratio does not mean that 20% of the IOs are hitting disk. To illustrate this, I've graphed the sum of SP Cache hits and FASTCache hits for a single database. You can see that in many cases we're hitting a total of 100% cache hits.

Most interesting is the backup window, where SP Cache is handling a huge amount of the load. This is actually due to the prefetch algorithms kicking in for the sequential read profile of a backup, something the CX/VNX is very good at.
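To make the additive-hit-ratio arithmetic explicit, here's a minimal Python sketch; the ratios are example values in the ranges shown above, not exact Analyzer output.

# An SP Cache hit is reported as a FASTCache miss, so the two hit
# ratios add. Example values drawn from the ranges discussed above.
total_read_iops = 300
fastcache_hit_ratio = 0.80    # serviced from FASTCache (SSD)
sp_cache_hit_ratio = 0.15     # serviced from SP Cache (DRAM)

disk_ratio = 1 - (fastcache_hit_ratio + sp_cache_hit_ratio)
print(f"IO actually reaching disk: {disk_ratio:.0%} "
      f"({total_read_iops * disk_ratio:.0f} of {total_read_iops} IOPS)")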

NetApp and EMC: Exchange 2007 Replication


Exchange Replication

Building on the redundant storage project, we also wanted to replicate Exchange to a remote datacenter for disaster recovery purposes.  We've been using EMC CLARiiON MirrorView/A and Replication Manager for various applications up to now, and decided we'd use NetApp SnapMirror for Exchange, both to leverage the additional hardware and to evaluate NetApp's replication functionality vs EMC's.

On EMC CLARiiON storage, there are a couple of choices for replicating applications like Exchange.
1.) Use MirrorView/Async with Consistency Groups to replicate Exchange databases in a crash-consistent state.
2.) Use EMC Replication Manager with SnapView snapshots and incremental SAN Copy to update the remote site copy.

Similar to EMC's Replication Manager, NetApp has SnapManager for various applications, which coordinates snapshots and replica updates on a NetApp filer.

Whether using EMC RM or NetApp SM, software must be installed on all nodes in the Exchange cluster to quiesce the databases and initiate updates.  The advantage of consistency groups with MirrorView is that no software needs to be installed on the host; all work is performed within the storage array.  The advantage of RM and SM/E is that database consistency is verified on each update, and the software can coordinate restoring data to the same or alternate servers, which must be done manually if using MirrorView.

NetApp doesn't support consistent snapshots across multiple volumes, so the only option on a filer is to use SnapManager for Exchange to coordinate snapshots and SnapMirror updates.

Our first attempt at configuring SnapManager for Exchange actually failed when we ran into a compatibility issue with SnapDrive.  SnapManager depends on SnapDrive for mapping LUNs between the host and filer, and for communicating with the filer to create snapshots, etc.  We'd discussed our environment with NetApp and IBM ahead of time, specifically that we have Exchange CCR running on VMware with Fibre Channel LUNs, and everyone agreed that SnapDrive supports VMware, Exchange, Microsoft Clustering, and VMware Raw Devices.  It turns out that SnapDrive 6 DOES support all of this, but not all at the same time.  Specifically, MSCS clustering is not supported with FC Raw Devices on VMware.  In comparison, EMC's Replication Manager has supported this configuration for quite a while.  After further discussion, NetApp confirmed that our environment was not supported in the current version of SnapDrive (6.0.2) and that SnapDrive 6.2, which was still in beta, would resolve the issue.

Fast forward a couple of months: SnapDrive 6.2 has been released, and it does indeed support our environment, so we've finally installed and configured SnapDrive and SnapManager.  We've dedicated the EMC side of the Exchange environment to the active nodes and the IBM/NetApp side to the passive nodes.  SnapManager snapshots the passive node databases, mounts them to run database verification, then updates the remote mirror using SnapMirror.

While SnapManager does do exactly what we need it to do, my experience with it hasn't been great so far.  First, SnapManager relies on Windows Task Scheduler to run scheduled jobs, which has been causing issues.  The job will run on its schedule for a day, then stop, after which the task must be edited to make it run again.  This happens in the lab and on both of our production Exchange clusters.  I also found a blog post about this same issue from someone else.

The other issue right now is that database verification takes a long time, due to the slow speed of ESEUTIL itself.  A single update on one node takes about 4 hours (for about 1TB of Exchange data), so we haven't been able to achieve our goal of a 2-hour replication RPO (some rough math on that below).  IBM will be onsite next week to review our status and discuss any options.  An update on this will follow once we find a solution to both issues.  In the meantime I will post a comparison of replication tools between EMC and NetApp soon.
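For the curious, the math behind the missed RPO is simple: ESEUTIL has to read the entire database copy before the SnapMirror update can run. The throughput figures below are derived from the numbers above, not measured directly.

# Why a 4-hour verification pass breaks a 2-hour RPO: ESEUTIL must read
# the full database copy before the SnapMirror update can run.
db_size_gb = 1024             # ~1TB of Exchange data
verify_hours = 4.0            # observed single-node update time
rpo_target_hours = 2.0

observed_mb_s = db_size_gb * 1024 / (verify_hours * 3600)
required_mb_s = db_size_gb * 1024 / (rpo_target_hours * 3600)
print(f"Observed verification rate: {observed_mb_s:.0f} MB/s")  # ~73 MB/s
print(f"Rate needed for 2h RPO:     {required_mb_s:.0f} MB/s")  # ~146 MB/s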

NetApp and EMC: ESX and Exchange 2007 CCR


The first application we tackled after deploying the NetApp system was Exchange 2007.  We had deployed Exchange 2007 recently, running in CCR clusters on VMware ESX.  Since each node of a CCR cluster has its own copy of the database, we wanted to put one node from each cluster onto the NetApp, leaving the other nodes on the CLARiiON.  This environment is entirely Fibre Channel with no iSCSI deployed, so the Exchange servers are using VMware Raw Devices for the database and log disks.  This poses a problem that we didn't discover until later, which I will discuss in a future post about replicating Exchange with NetApp.

Re-Architecting the environment to fit the storage

The first thing we discovered was that neither IBM/NetApp nor EMC would support the same host HBAs being zoned to multiple brands of storage.  So we had to split the ESX cluster into two clusters, one on each storage platform.  Luckily the Exchange environment was isolated on its own six-node cluster, so it was easy to split everything in half.

Next we learned that, due to NetApp's updated active/active mode with proxy paths in ONTAP 7.3, VMware ESX 3.x randomly selects paths when rescanning HBAs and will pick non-optimized paths to the LUNs.  This still works but is not ideal, as it increases IO latency and causes the filer to send AutoSupport emails periodically warning of the problem.  Installing the NetApp Host Utilities for ESX onto the ESX hosts themselves allows you to run a script that assigns persistent paths evenly across the HBAs.  The script works as advertised, but as far as I can tell you have to run it each time you add a new LUN to the ESX server.  It would be much better if it were more automated.
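For illustration, the logic behind the persistent-path script amounts to round-robin assignment of LUNs to the optimized (non-proxy) paths. This is my own sketch of the idea, not NetApp's actual script, and the path and LUN names are made up.

from itertools import cycle

# Round-robin LUNs across the optimized (non-proxy) paths so ESX 3.x
# doesn't pin everything to a random, possibly non-optimized path.
# HBA:target path names and LUN names are invented for illustration.
optimized_paths = ["vmhba1:0", "vmhba1:1", "vmhba2:0", "vmhba2:1"]
luns = [f"exchange_db_{n}" for n in range(1, 7)]

assignment = dict(zip(luns, cycle(optimized_paths)))
for lun, path in assignment.items():
    print(f"{lun} -> preferred path {path}")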

Actually, if you are running ESX 4.0 the scenario changes: NetApp ONTAP 7.3+, CLARiiON FLARE 26+, and ESX 4 all support ALUA, making this problem all but disappear and improving fabric resiliency. Unfortunately for us, ESX 4 is still a bit new and hasn't been rolled out into production yet.  NetApp has also released tools for vCenter 4.0 that allow you to do the path assignment and other tasks from within vCenter rather than at the command line.  EMC also now has PowerPath available for ESX 4.0, which will not only manage paths but load balance across all of them for increased performance and lower latency.

VirtualStorageGuy has already blogged about the NetApp/EMC/vSphere plug-ins, and there is even a PowerPoint available.

Finally, during the sales process NetApp pushed their de-duplication feature (A-SIS) quite a bit and stressed how much disk space we could save in a VMware environment.  During deployment we were informed that if your VMs (VMDKs and VMFS) were not properly partition-aligned, de-duplication wouldn't work well, or at all.  Since this environment has several hundred VMs built over several years by many people, and aligning the system (C:) drive of a Windows VM is difficult, the benefit would be minimal for us.  Luckily NetApp has provided tools that can scan and align VMDKs without having to repartition the disks.  We have not tested this yet.  Partition alignment is a best practice for ANY SAN storage system, so we can't fault NetApp for this problem; it's just a fact of life.

But is it REALLY Redundant?

Even with two storage systems, with independent VMware clusters, each hosting half of the Exchange cluster environment, a problem with either array could still take down an entire Exchange cluster.  This is due to the File Share Witness (FSW) component used in a Majority Node Set (MNS) cluster like Exchange CCR.  The idea behind the FSW in an MNS cluster is to prevent a condition known as split-brain.  Since an MNS cluster does not have a quorum disk, it relies entirely on network communication between the nodes to determine cluster status and make decisions about which nodes should become active.  In the event that the two nodes lose communication with each other, each node will check for the FSW, and if it is still available, it assumes that the other cluster node is down and proceeds to bring cluster resources online (if they weren't already).  Without the FSW, both nodes could potentially go active, and there could be issues with inconsistent data, etc.  This is the split-brain condition.
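The voting logic is easy to reason about in code. Below is a toy Python model of the majority calculation for a two-node MNS cluster plus FSW; it's my simplification of the behavior, not the actual Microsoft cluster service implementation.

# Toy model of Majority Node Set voting: 2 cluster nodes + 1 FSW = 3
# votes; a node keeps resources online only if it can see a majority.
def has_quorum(sees_other_node: bool, sees_fsw: bool) -> bool:
    votes = 1 + int(sees_other_node) + int(sees_fsw)  # own vote + visible votes
    return votes >= 2                                 # majority of 3

# Network split between the nodes: the FSW breaks the tie, no split-brain.
print(has_quorum(sees_other_node=False, sees_fsw=True))   # True: go/stay active
# Array outage takes out the other node AND the FSW together:
print(has_quorum(sees_other_node=False, sees_fsw=False))  # False: stay down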

Typically, each cluster has a single FSW on a separate server (the CAS servers in our case).  With the redundant storage model we moved to, the FSW became a single point of failure.  If we put the FSW on EMC storage with NodeA, and NodeB on the IBM/NetApp storage, a problem with the EMC array could take down both the cluster node AND the FSW at the same time.  The surviving cluster node on the IBM/NetApp array would go down, or stay down, to prevent split-brain since the FSW was not available.  Moving the FSW to the IBM/NetApp array presents the same problem on the opposite side of the cluster.  Incidentally, we proved this problem in lab testing to be sure.  The solution is to move the FSW off of BOTH arrays, to either a dedicated physical server with internal disk, or a third storage array if you have one.  There was a second EMC array in production, so we moved the FSW there.  In the new configuration, a complete outage of any single storage array would not take down the Exchange environment.

Crude diagram of the storage-redundant Exchange CCR cluster

So far this new 3-way split environment is working well, and Exchange performance on both the EMC and NetApp arrays is fine.  Using the same number of disks, the NetApp array yields about twice as much usable space as the EMC due to RAID-DP vs RAID-10 (rough math below), but overall performance is similar.  Theoretically that means we could allow for more growth of the Exchange databases, but in reality that is not always the case.  My next update will be about Exchange replication using SnapManager and SnapMirror, and how that has effectively negated the remaining free space in the NetApp aggregate.
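For reference, the capacity difference is straightforward parity math. Here's a rough comparison for a hypothetical set of 16 x 300GB disks; the RAID group layout is my assumption, and real aggregates also lose space to WAFL reserve, right-sizing, and spares, which is part of why "about twice" shrinks in practice.

# Rough usable capacity for the same 16 x 300GB disks: RAID-DP loses two
# parity disks per group; RAID-10 loses half to mirroring. Ignores WAFL
# reserve, right-sizing, and spares, which shrink the NetApp figure.
disks, disk_gb = 16, 300

raid_dp_usable = (disks - 2) * disk_gb    # one 14+2 RAID-DP group
raid_10_usable = (disks // 2) * disk_gb   # eight mirrored pairs

print(f"RAID-DP usable:  {raid_dp_usable} GB")            # 4200 GB
print(f"RAID-10 usable:  {raid_10_usable} GB")            # 2400 GB
print(f"Ratio: {raid_dp_usable / raid_10_usable:.2f}x")   # 1.75x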