
NetApp and EMC: Real world comparisons


I’ve recently been tasked with a project to increase the availability of applications through the use of multiple, disparate storage systems.  This environment has invested heavily in EMC Clariion and Celerra storage systems over the past few years and needed a non-EMC platform on which to build the second half of a redundant storage environment.  For various reasons I won’t go into here, we chose IBM N series as that second platform. (Since the IBM system is rebranded NetApp FAS, I will refer to it as a NetApp filer.)  I’ve been working on implementing the new equipment as well as integrating it into the Business Continuity strategy.

The overall strategy is to continue using the EMC Clariion/Celerra systems for production and disaster recovery replication, while splitting applications across the two storage platforms for local redundancy.  The NetApp will also handle disaster recovery replication for some of the applications.  Here’s a really simple diagram that might help if the description is confusing:

EMC and NetApp Redundancy

Now this may sound easy, but it is, in fact, NOT straightforward.  This strategy requires close coordination with application owners and careful planning.  As we move forward on this project, I’ll talk about various idiosyncrasies, caveats, and problems we’ve faced and how we got around them, and I’ll also talk a lot about the differences between the Clariion/Celerra and NetApp platforms’ features and functionality, application support, and manageability.  These comparisons will include using both systems over Fibre Channel as well as for CIFS/NFS NAS, all in conjunction with DR replication and failover.

To start off, I figure we should compare some of the terminology between EMC and NetApp systems.  Some terms don’t translate directly, but I matched them up as closely as I could and noted where there is no equivalent.  Below are two tables: one for Block Storage and the other for NAS Storage.

EMC-NetApp Block Storage Terminology

EMC-NetApp NAS Storage Terminology

In the next update, I’ll start talking about the deployment itself.  The point of these articles is to discuss the differences, advantages, and disadvantages of each platform so that you can understand how each one might work in your environment.  I do not intend to disparage either platform or vendor.  I will try to be as vendor agnostic as possible, and I feel I am in a somewhat unique position, comparing new and recent hardware and firmware from both vendors, in the same production capacities, simultaneously, in the same environment.  I am NOT comparing old ONTAP code to new FLARE/DART code or vice versa, nor am I comparing old Clariion CX hardware to new NetApp/IBM hardware, etc.

Stay tuned!

Expanding a Thin LUN on Clariion CX4


Do you have an EMC Clariion CX4 with Virtual Provisioning (thin provisioning)?  Have you tried to expand the host-visible (i.e. maximum) size of a thin LUN and can’t figure out how?  Well, you aren’t alone.  Despite the extremely limited information available from EMC either way, I finally determined that you simply can’t expand a thin LUN yet.  This was a surprise to me, since I had just assumed the capability would be there: thin LUNs are essentially virtual LUNs, with no direct block mapping to a RAID group that has to be maintained, and even traditional LUNs can be expanded using MetaLUN striping or concatenation.  Nevertheless, the host-visible size of a thin LUN cannot be changed after the LUN has been created.

But there IS a workaround.  It’s not perfect, but it’s all we have right now.  Using the Clariion’s built-in LUN Migration technology, you can expand a thin LUN in two steps.

Step 1: Migrate the thin LUN to a thick LUN of the same maximum size.

Step 2: Migrate that thick LUN to a new thin LUN created with the larger size you want.  After the migration, the thin LUN will consume disk space equal to the old thin LUN’s maximum size, but it will have the new, larger host-visible maximum size.

Two Steps to expand a thin LUN

This requires that you have a RAID group outside of your thin pools with enough usable free space to hold the temporary thick LUN, so it’s not a perfect solution.  You’d think that migrating the old thin LUN directly into a new, larger thin LUN would work, but to use the additional disk space after a LUN migration you have to edit the LUN size, which, again, can’t be done on thin LUNs.  I haven’t actually tested this; it’s based on all of the documentation I could find from EMC on the topic.
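
For those who prefer the CLI, the workflow would look roughly like the following.  This is only a sketch: I haven’t run these exact commands, the LUN numbers, RAID group, pool name, and sizes are made up, and the thin-LUN creation syntax may vary slightly between FLARE releases, so check the naviseccli reference for your code level first.

    # Step 1: bind a temporary thick LUN (LUN 101) of the same size on a spare RAID group
    naviseccli -h <SP-IP> bind r5 101 -rg 10 -cap 500 -sq gb

    # Migrate the existing thin LUN (LUN 50) onto it; the LUN keeps its number and identity
    naviseccli -h <SP-IP> migrate -start -source 50 -dest 101 -rate asap
    naviseccli -h <SP-IP> migrate -list

    # Step 2: create a new, larger thin LUN (LUN 102) in the pool
    naviseccli -h <SP-IP> lun -create -type Thin -capacity 750 -sq gb -poolName "Pool 0" -l 102

    # Migrate the (now thick) LUN 50 onto the larger thin LUN
    naviseccli -h <SP-IP> migrate -start -source 50 -dest 102 -rate asap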

I’m looking into other methods that might be better, but so far it seems that certain restrictions on SnapView Clones and SANCopy may preclude them from being used.  For those willing to wait, the ability to expand thin LUNs is due in a later FLARE release.

Similarities between Crabbing and IT…

I attended an event tonight put on by Optistor Technologies, a local vendor, centered on data de-duplication.  Reps from EMC, DataDomain, and SilverPeak were there, chatting with customers about how their respective products leverage de-duplication technology to save you money.  The keynote speaker was Keith Colburn, captain of the Wizard from Discovery Channel’s Deadliest Catch, and the evening was paired with a spread of crab and prawns to snack on.  Anyway, Keith admitted during his speech that he “doesn’t know a thing about WAN, re-dupe (he meant de-dupe), or any of that stuff,” but I think he did an admirable job discussing the importance of picking the right vendors to work with: ones who have adequate support resources and top-notch technology.  He related some of the problems he’s had in the Bering Sea and how he attempts to mitigate risk as much as possible but sometimes discovers problems he never anticipated.  That’s where the tie-ins with vendor choice came together.  All in all, it was a fun event, and I got a chance to catch up with some of the engineers and account managers I’ve dealt with over the past couple of years.

Clariion CX4 – New Features in Flare 29

EMC released FLARE 29 (04.29.000.5.001) for Clariion CX4 systems a few days ago.  The CX4 hardware platform itself was a very significant upgrade for the Clariion series: it moved to a 64-bit operating system, added hot-add and hot-swap I/O modules, and brought multi-core CPUs for higher performance and scalability.  FLARE 28 was released to support the CX4 platform and introduced Virtual Provisioning (EMC’s name for thin provisioning) to the Clariion feature list.

According to the release notes, FLARE 29 adds several new features and builds on existing ones.  All of them will become available through the standard non-disruptive upgrade Clariion owners are accustomed to.

New Features in FLARE 29:

  • 2-port 10Gbps iSCSI I/O modules are now available
  • I/O modules can be hot swapped for faster ones (i.e. upgrading from 4Gb FC to 8Gb FC)
  • Idle SATA drives can now spin down to save power
  • iSCSI ports now support VLAN tagging
  • LUNs and MetaLUNs can now be shrunk when using newer versions of Windows (i.e. Windows Server 2008)

Additional Updates:

  • The maximum number of LUNs has been increased for all CX4 models (to 8192 on the CX4-960)
  • MirrorView now supports replicating to and/or from thin provisioned LUNs
  • SANCopy now supports copying to and/or from thin provisioned LUNs
  • The maximum number of MirrorView/Async sessions has increased to 256, with up to 64 consistency groups and up to 64 mirrors per group.  Up to 512 MirrorView/Sync sessions are supported on CX4-960 hardware.

The really big news here is MirrorView.  When Virtual Provisioning was released in FLARE 28 on the CX4, you could not use MirrorView with any thin LUNs, whether they were the source or the destination.  This limited the use of thin LUNs to those applications and/or customers that don’t need or use array-based replication.  EMC’s RecoverPoint product did support thin LUNs, but that is a separate, fairly expensive solution.  The ability to replicate non-thin (fat) LUNs to thin LUNs could be really useful for maximizing disk utilization at a DR location where performance isn’t a primary concern.

The added ability to upgrade I/O modules to faster versions while online is also very handy.  It means you can tackle problems like increasing iSCSI bandwidth or upgrading core infrastructure (i.e. network and SAN switches) with little or no downtime on the storage system.  VLAN tagging with iSCSI can be useful for sharing a storage system between disparate server environments that need to be separated for security or performance reasons.
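
As a rough idea of what the VLAN tagging change might look like from the CLI, here is a sketch from memory, not verified.  The address/subnetmask/gateway flags are the long-standing connection -setport parameters, but the -vlanid flag is an assumption on my part, so check the FLARE 29 CLI reference before relying on it.

    # Show the current settings of iSCSI front-end port 0 on SP A
    naviseccli -h <SP-IP> connection -getport -sp a -portid 0

    # Re-address the port and tag it onto VLAN 120 (-vlanid is assumed syntax)
    naviseccli -h <SP-IP> connection -setport -sp a -portid 0 -address 10.10.120.20 -subnetmask 255.255.255.0 -gateway 10.10.120.1 -vlanid 120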

As a Clariion and MirrorView user myself, I look forward to taking advantage of the added thin provisioning support with our upcoming CX4-960.

Backup Everything – The datacenter done fast!


I support a very diverse environment with a mix of Windows, NetWare, Linux, Solaris, and Mac clients running on standard servers as well as VMware ESX, plus two different brands of NAS, a few iSeries systems, and an Apple Xsan thrown in for good measure.  We have hundreds of applications running on these systems, including SQL Server, Oracle, MySQL, SharePoint, Documentum, and Agile.  These applications are mostly contained in our primary datacenter, but we also have a few remote datacenters for specific applications and for disaster recovery, as well as a couple of remote business offices.

Recently I’ve been working on a project to replace our existing backup application with a new one.  We were experiencing extremely long backup windows, low throughput per client, and high backup failure rates with our existing solution, and it was time to make a change.  The goal was to protect all of our systems, regardless of their location, with both an onsite backup in our primary datacenter and an offsite copy for disaster recovery purposes.  Additionally, we wanted to use little or no tape.  After research, lots of vendor meetings, a consulting engagement, and lengthy debate, we chose Symantec NetBackup with Symantec NetBackup PureDisk and DataDomain.  This combination was chosen for several reasons, which will become clearer below.

For those of you who are not familiar with these products, here’s a brief description of each.

Symantec NetBackup is a traditional backup solution designed to move data from many clients, as fast as possible, to disk or tape.  It is similar to EMC NetWorker, Symantec Backup Exec, and any number of other backup products.  NetBackup supports a wide variety of clients, NAS devices, and applications (SQL Server, Exchange, etc.), as well as tape libraries and disk storage for the backed-up data.  Since it simply copies all of the data that resides on the client directly to the backup server, it is not particularly tuned for backing up remote offices across the WAN, but it can easily flood a local LAN during a backup.

Symantec NetBackup PureDisk is currently a separate solution from the base NetBackup product; it is designed specifically for backing up data over the WAN.  PureDisk is a “source-dedupe” solution and is very similar in function to EMC’s Avamar product, with which I have a long-standing love affair.  PureDisk performs an incremental-forever style of backup where only the data that has changed since the last backup is copied to the backup server.  It then uses deduplication technology to reduce the resulting backup dataset to an even smaller size before it is copied across the network.  The data is collected and stored (in its deduplicated form) on the backup server.  With this design, PureDisk saves network bandwidth as well as disk space on the backup server, making it ideal for backups across the WAN, VPN, etc.  Symantec’s goal is to merge PureDisk into NetBackup as a single solution at some point, probably next year.  PureDisk backup servers can replicate backed-up data to other PureDisk backup servers in deduplicated form for redundancy across sites.  The downside to PureDisk is that raw throughput on a PureDisk backup server is not high enough for datacenter use, and client support is more limited than with the standard NetBackup product.

DataDomain (now part of EMC) has been making its DDR products for a while now and has been very successful (prompting the recent bidding war between NetApp and EMC to purchase the company).  DataDomain appliances are “target-dedupe” devices designed to replace tape libraries in traditional backup environments like NetBackup.  The DDR appliance presents itself as a VTL (virtual tape library) via SAN, a CIFS (Windows) file server, and/or an NFS (UNIX) file server, making it compatible with pretty much any type of backup system.  DataDomain also supports Symantec’s OpenStorage (OST) API, which is available in NetBackup 6.5.  The DDR system receives all of the data that NetBackup copies from backup clients, deduplicates the data in real time, then stores it on its own internal disk.  Because the DDR is purpose-built and has fast processors, it can process data at relatively high throughput rates.  For example, a single DD690 model is rated at 2.7TB/hour (about 6Gbps) when using OST.  The deduplication in a DDR provides disk-space savings but does not reduce the amount of data copied from backup clients.  DDRs can also replicate data (in deduplicated form) to other DDRs across the LAN or WAN, which is great for offsite backups.

For an explanation of de-duplication, check out my prior post on the topic.
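
If you just want a feel for the chunk-and-hash idea that both the source-dedupe and target-dedupe approaches rely on, here’s a toy shell sketch.  It uses fixed-size chunks and md5sum purely for illustration; it is not how either vendor actually implements deduplication, and the file and directory names are made up.

    # Split a backup image into fixed 128KB chunks, fingerprint each chunk,
    # and store/ship only the chunks whose fingerprint we haven't seen before.
    split -b 131072 backup.img chunk.
    touch seen-hashes.txt
    for c in chunk.*; do
        h=$(md5sum "$c" | awk '{print $1}')
        if ! grep -q "$h" seen-hashes.txt; then
            cp "$c" /backup/store/"$h"      # only new, unique chunks get copied
            echo "$h" >> seen-hashes.txt
        fi
    done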

Two of the challenges we faced when designing the final solution were the cost per TB of DataDomain disk and the somewhat limited client OS support of PureDisk.  But we had a clean slate to work from: there was no interest in reusing any of the existing backup infrastructure aside from the two IBM tape libraries we had.  We were not required to use those libraries, but we wouldn’t be buying new ones if we planned on using tape as part of the new solution.

For the primary datacenter we deployed NetBackup master and media servers and a DataDomain DD690, and connected them to each other with Cisco 4900M 10Gbps switches.  We deployed a warm-standby master server, plus a media server and another DD690, in our DR datacenter, but did not use 10Gbps there due to the additional cost.

With this setup we covered all of the clients in our primary datacenter.  Systems with large amounts of data (like Microsoft Exchange, SAS Financials, etc.) were connected directly to the 4900M switches (via 1Gbps connections).  Aggregate throughput of the backups during a typical night averages 400-500MB/sec, with all of the data going to the DataDomain.  The Exchange servers flood their network links, pushing over 100MB/sec per server when backing up the email databases.  We currently back up 9TB of data per night with 3 media servers and a single DDR in about 5 hours (9TB in 5 hours works out to roughly 500MB/sec, which lines up with that average).  Our primary bottlenecks are the VCB proxy server (we need more of them) and the aging datacenter core network, which has an aggregate throughput of barely more than 1Gbps.

But what about those remote sites?  What does OST really add?  How do you tackle the NAS backups without resorting to tape?  All that and more is coming up soon…