I attended an event tonight put on by Optistor Technologies, a local vendor, centered on data de-duplication. Reps from EMC, DataDomain, and SilverPeak were there, chatting with customers about how their respective products leverage de-duplication technology to save you money. The keynote speaker was Keith Colburn, captain of the Wizard from Discovery Channel’s Deadliest Catch, and they had paired the evening with a spread of crab and prawns to snack on. Anyway, Keith admitted during his speech that he “doesn’t know a thing about WAN, re-dupe (he meant de-dupe), or any of that stuff,” but I think he did an admirable job discussing the importance of picking the right vendors to work with, ones who have adequate support resources and top-notch technology. He related some of the problems he’s had in the Bering Sea and how he attempts to mitigate risk as much as possible but sometimes discovers problems that he never anticipated. That’s where the tie-ins with vendor choice came together. All in all it was a fun event, and I got a chance to catch up with some of the engineers and account managers I’ve dealt with over the past couple of years.
Okay, now that I’ve talked about backing up the datacenter with NetBackup and DataDomain, and backing up remote sites with NetBackup and PureDisk, it’s time to discuss how to get all that data offsite to protect against a catastrophic event at the datacenter.
As mentioned before, we have a primary datacenter with the majority of our systems, including the backup environment, and a secondary “disaster recovery” datacenter to which we replicate tier 1 applications for business continuity purposes. Since we really wanted to get away from using tapes and instead store the backups on disk in our datacenter, we have a second backup environment in the DR datacenter and replicate the backup data there.
There are several ways to replicate backup data between two sites, but most of them have drawbacks:
1.) Duplicate the backup data from disk to tape and ship the tapes to the remote site to be ready for restore. This is the easiest and probably cheapest way, but there’s that pesky tape yet again with its media handling and shipping. And restore could take a while since you have to restore the catalog from tape, then import the media, etc.
2.) Duplicate the backup data directly from the local disk to the disk in the second location across the WAN. This is not very feasible with any significant amount of data because every byte backed up in the datacenter has to be copied across the much slower WAN; it could take many days to duplicate a single night’s backups. You’d also need a special catalog backup job that wrote to a storage device across the WAN. The good part here is that the backup application knows there is a second copy of the data and knows how to find it.
3.) Replicate the data with the backup storage device’s native replication. Whether it’s PureDisk, Avamar, or DataDomain, pretty much every source-based or target-based deduplication solution has replication built in that leverages the deduplication to reduce the amount of data that traverses the WAN. The advantage here is that you can have a copy of all of your backup data in a second location in a much shorter time than with a traditional copy process. If your deduplication device stores the data at 10:1 compression, your WAN usage is reduced by roughly 90% (there’s a rough worked example just after this list); in practice the savings are actually better than that. The drawback is that the backup application (hence the catalog) has no knowledge that there is a second copy of the backup data, and after recovering the catalog you would need to import all of the disk-based media, which could take a long time.
4.) Leverage NetBackup Lifecycle Policies with Symantec OpenSTorage (OST) and an OST-capable backup storage system like DataDomain or PureDisk (with PDDO). Basically this combines the advantage of option #2, catalog-aware duplication, with the WAN bandwidth savings of option #3. Time to copy the data offsite is much shorter thanks to deduplication, and time to restore is very fast since the data is already in the catalog and available on disk.
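To put rough numbers on the difference between options 2 and 3/4, here’s a quick back-of-the-envelope sketch in Python. The nightly backup size and WAN link speed are made-up figures purely for illustration; only the 10:1 dedup ratio comes from the example above.

def transfer_hours(data_bytes, link_mbps):
    """Hours needed to push data_bytes across a link of link_mbps megabits/second."""
    return (data_bytes * 8) / (link_mbps * 1_000_000) / 3600

nightly_backup_bytes = 1 * 10**12   # 1 TB of nightly backup data (assumed)
wan_link_mbps = 100                 # WAN link speed (assumed)
dedup_ratio = 10                    # 10:1 reduction, per option 3

print(f"Raw duplication over the WAN: {transfer_hours(nightly_backup_bytes, wan_link_mbps):.1f} hours")
print(f"Deduplicated replication:     {transfer_hours(nightly_backup_bytes / dedup_ratio, wan_link_mbps):.1f} hours")
# prints roughly 22.2 hours vs. 2.2 hours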
OpenSTorage (OST) is a network protocol that Symantec developed to interface with disk-based backup storage systems, and DataDomain was an early adopter of it. OST allows NetBackup to control replication between OST-capable storage systems and to keep track of the replicated copies of backups in the catalog just as if NetBackup had made both copies itself. OST is also used as the protocol for sending the backup data to the storage device, as opposed to CIFS/NFS or VTL. DataDomain appliances support OST, as does PureDisk when used in conjunction with the PDDO option discussed earlier. In NetBackup, replication controlled by OST is called “optimized duplication” and is driven primarily through Lifecycle Policies.
Traditionally, when creating NetBackup job policies, the administrator specifies a Storage Unit (either a disk storage unit or a tape library or drive) that the job policy will send backups to. Lifecycle Policies are treated like Storage Units as far as the job policy is concerned, but a Lifecycle Policy contains a list of storage units, each with its own data retention, onto which the backup data must be stored before NetBackup considers the data fully protected. Typically there is a “Backup” target, which is where the actual data coming from the client is stored, followed by one or more “Duplication” targets. After the backup job completes, NetBackup copies the backup data from the “backup” location to all of the “duplication” locations. This works with pretty much any type of storage and you can mix and match tape and disk in the same policy. Since these are duplication operations, NetBackup will read ALL of the data from the backup location and write ALL of the data to each duplication location. This can take a long time even on the local network, and trying to send a lot of data offsite over the WAN is not very feasible.
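To make the “fully protected” idea concrete, here’s a tiny Python model of a lifecycle policy: one backup destination plus duplication destinations, each with its own retention. This is just an illustration of the concept; it is not NetBackup syntax or its API, and the storage unit names are invented.

from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Destination:
    storage_unit: str     # invented storage unit names, for illustration only
    operation: str        # "backup" or "duplication"
    retention_days: int

@dataclass
class LifecyclePolicy:
    name: str
    destinations: List[Destination] = field(default_factory=list)

    def is_fully_protected(self, copies_made: Set[str]) -> bool:
        # The data counts as fully protected only once every destination
        # in the lifecycle holds a copy.
        return all(d.storage_unit in copies_made for d in self.destinations)

offsite = LifecyclePolicy("datacenter-offsite", [
    Destination("dd-primary-disk", "backup", retention_days=35),
    Destination("dd-dr-disk", "duplication", retention_days=90),
])

print(offsite.is_fully_protected({"dd-primary-disk"}))                # False: offsite copy still pending
print(offsite.is_fully_protected({"dd-primary-disk", "dd-dr-disk"}))  # True: both copies exist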
With OST, the lifecycle policy operates exactly the same except that it uses “optimized duplication,” instructing the storage device to copy the file rather than performing the copy through a media server. So in the case of DataDomain, OST issues the command to the DDR, and the DDR then copies the file to the second DDR in the remote site, getting all the benefits of deduplication and compression between the two. The media server doesn’t actually do any work. Once the duplication is complete, the DDR notifies NetBackup and the catalog is updated with a record of the second copy of the backup. Lifecycle Policies are fully automated; you can’t even manually restart a failed duplication. In the event of a transient failure like a WAN hiccup, NetBackup will keep retrying the duplication job until it succeeds in order to satisfy the lifecycle policy.
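The control flow looks roughly like the sketch below: the media server only asks the source appliance to copy an image, retries on transient failures, and records the second copy once the appliance reports success. The DDR objects and retry delay here are invented stand-ins for illustration, not the real OST interfaces.

import time

class FakeDDR:
    """Stand-in for a deduplicating appliance that replicates images itself."""
    def __init__(self, name, transient_failures=0):
        self.name = name
        self.transient_failures = transient_failures

    def replicate(self, image, target):
        if self.transient_failures > 0:
            self.transient_failures -= 1
            raise ConnectionError("WAN hiccup")
        print(f"{self.name} copied {image} to {target.name} (only deduped segments crossed the WAN)")

def optimized_duplicate(source, target, image, catalog, retry_delay_s=0.1):
    # The media server never moves the data itself; it keeps asking the source
    # appliance to copy the image until the lifecycle policy is satisfied.
    while True:
        try:
            source.replicate(image, target)
            break
        except ConnectionError:
            time.sleep(retry_delay_s)
    catalog.append((image, target.name))   # second copy recorded, so restores can use it

catalog = []
optimized_duplicate(FakeDDR("ddr-primary", transient_failures=1), FakeDDR("ddr-dr"),
                    "exchange-2009-09-15", catalog)
print(catalog)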
As you can probably surmise, this is REALLY nice for a tape-less backup environment. Our DD690 offsites over 9TB of data every night DURING the backup window. When the last backup job completes, the offsite copies are complete within 30 minutes. And there is absolutely no management of the offsite process or duplication jobs beyond configuring the lifecycle policies up front. The drawback to regular NetBackup lifecycle policies is that all duplications are made from the initial backup copy, which limits what you can do with the copies.
Enter NetBackup 6.5.4… Despite the small 6.5.3 -> 6.5.4 version number change, the 6.5.4 release added quite a few new features. The biggest one was a revamping of the Lifecycle Policy engine to allow nested duplications. Now you can create a copy of a backup, then create multiple copies from that copy, then create copies from the other copies. Why is this useful?
Remember when I discussed using NetBackup with PDDO to back up remote sites? Well, the data backed up from the remote site is all stored in the primary datacenter, and we need to get a second copy to the DR datacenter. Plus, we wanted to have a small cache of recently backed up data sitting on the remote media server for fast restores. Nested lifecycles are the key. The lifecycle writes the initial backup copy onto the media server’s local disk, which is configured as a capacity-managed staging area (i.e., it stores as much as it can and expires data when it needs more space for new backups). The lifecycle then creates a duplicate of the backup on the PureDisk storage unit in the primary datacenter. Since bandwidth to the remote site is very limited, we don’t want to copy the data from the remote site twice, so the lifecycle has a second duplication nested under the first to copy it to the DR datacenter. The source of that second copy is the primary datacenter copy, NOT the remote media server copy.
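Here’s the same idea as a small sketch, purely illustrative (the storage unit names are invented): each duplication step names the copy it reads from, which is how the DR copy gets made from the primary datacenter copy instead of crossing the slow remote-site WAN a second time.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Step:
    storage_unit: str                  # invented names for illustration
    operation: str                     # "backup" or "duplication"
    source: Optional["Step"] = None    # which existing copy a duplication reads from

staging = Step("remote-media-server-disk", "backup")                  # capacity-managed staging area
primary = Step("puredisk-primary-dc", "duplication", source=staging)  # crosses the remote WAN once
dr_copy = Step("puredisk-dr-dc", "duplication", source=primary)       # nested: reads the primary copy

for step in (staging, primary, dr_copy):
    read_from = step.source.storage_unit if step.source else "client data"
    print(f"{step.operation:>11} to {step.storage_unit} (read from {read_from})")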
Where else can we use this? Let’s consider our tape-less datacenter backups. We back up the clients to the DataDomain in our primary datacenter, then, using a lifecycle policy and OST, create a copy on the DataDomain in the DR datacenter. If we also wanted a tape copy for long-term archive or vaulting, we could create a nested duplication to make a copy to a tape library in the DR datacenter from the disk copy that is also in the DR datacenter. Without nested lifecycles, the only workable solution would be to create the tape in the primary datacenter. Every copy of the backup made via the lifecycle policy, whether it is using OST or not, is tracked in the catalog and easily used for restore. Furthermore, using OST as the protocol between NetBackup and DataDomain actually roughly doubles throughput to the DataDomain DDR systems compared to VTL/CIFS/NFS.
Now for the caveats. Optimized duplication via OST is only available when you are using OST as the protocol between the media server and the storage unit. This means it doesn’t work with VTL, even when the DataDomain IS the VTL. OST only works over an Ethernet network, which is why we skipped VTL completely and used 10gbps networks for the DDR connections. We even skipped VTL/tape for the NAS systems, connected them directly to the 10gbps network, and use 3-way NDMP to back them up over the network, through the media servers, to the DataDomain. We get the benefit of lifecycle policies, optimized duplication, and, as I may have mentioned before, no pesky tape even with NDMP/NAS backups. And the interesting thing is that with the 10gbps connection, the NDMP dumps are faster than direct fiber to tape.
There were other enhancements to NetBackup 6.5.4 centered around OST functionality but the lifecycle policy improvements were huge in my opinion.
To cover the catalog replication, we run NetBackup hot catalog backups to a CIFS share hosted by the DataDomain. The DDR replicates that share, using DataDomain native replication, to the DDR in the DR datacenter, where the same data is available via a similar CIFS share. Our standby NetBackup master server is already connected to that CIFS share for catalog restore and connected to the DDR via OST. A single operation restores the catalog from the replicated copy. In a real disaster we can begin restoring user data from the DR datacenter within 30 minutes.
In a previous post I discussed the new backup environment I’ve been deploying, what solutions we picked, and how they apply to the datacenter. But I also mentioned that we had remote sites with systems we need to back up, and I didn’t explain how we addressed them. Frankly, the previous post was getting long, and backing up remote offices is tricky enough that it deserved its own discussion.
Now that we had Symantec NetBackup running in the datacenter, backing up the bulk of our systems to disk by way of DataDomain, we needed to look at the remote sites. For this we deployed Symantec NetBackup PureDisk. Despite having NetBackup in the name, PureDisk is an entirely different product with its own servers, clients, and management interfaces. There are some integration points that are not obvious at first but become important later. Essentially PureDisk is two solutions in a single product: 1) a “source-dedupe” backup solution that can be deployed independent of any other solution, and 2) a “target-dedupe” backup storage appliance specifically integrated with the core NetBackup product via an option called PDDO.
As previously discussed, backing up a remote site across a WAN is best accomplished with a source-dedupe solution like PureDisk or Avamar. This is exactly what we intended to do. Most of our remote site clients are some flavor of UNIX or Windows, and installing PureDisk clients was easily accomplished. Backup policies were created in PureDisk, and a little over a day later we had the first full backup complete. All subsequent nightly backups transfer very small amounts of data across the WAN because they are incremental backups AND because the PureDisk client deduplicates the data before sending it to the PureDisk server. The downside is that the PureDisk jobs have to be scheduled, managed, and monitored from the PureDisk interface, completely separate from the NetBackup administration console. Backups are sent to the primary datacenter and stored on the local PureDisk server, then the backed-up data is replicated to the PureDisk server in the DR datacenter using PureDisk native replication. Restores can be run from either of the PureDisk servers but must rehydrate (un-deduplicate) the data before sending it across the WAN, making restores much slower than backups. This was a known trade-off and still meets our SLAs for these systems.
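For anyone wondering why the nightly transfers are so small, here’s a minimal sketch of the source-dedupe idea: the client fingerprints its data in chunks, and only chunks the server hasn’t already stored cross the WAN. The fixed-size chunking and sizes here are simplifications for illustration; real products segment the data far more cleverly.

import hashlib, os

CHUNK_SIZE = 64 * 1024

def backup(data, server_store):
    """Send only chunks the server hasn't seen; return bytes sent over the WAN."""
    bytes_sent = 0
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in server_store:     # server-side lookup by fingerprint
            server_store[digest] = chunk   # only new chunks cross the WAN
            bytes_sent += len(chunk)
    return bytes_sent

store = {}
monday = os.urandom(1_000_000)                       # first full backup of a 1 MB file
tuesday = monday[:900_000] + os.urandom(100_000)     # roughly 10% of the file changes overnight

print(backup(monday, store))    # the whole file crosses the WAN
print(backup(tuesday, store))   # only the chunks containing changed data are sent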
Our biggest hurdle with PureDisk was client OS support. Since we have a very diverse environment, we ran into a couple of clients running operating systems that PureDisk does not support; both Netware and the x86 version of Solaris are currently unsupported, and both were running in our remote sites.
We had a few options:
1.) Use the standard NetBackup client at the remote site and push all of the data across the WAN
2.) Deploy a NetBackup media server in the remote site with a tape library and send the tapes offsite
3.) Deploy a NetBackup media server in the remote site with a small DataDomain appliance and replicate
4.) Deploy a NetBackup media server and ALSO use PureDisk via the PDDO option (PureDisk Deduplication Option)
Option 1 is not feasible for any serious amount of data, Option 2 requires a costly tape library and some level of media handling every day, and Option 3 just plain costs too much money for a small remote site.
Option 4, using PDDO, leverages PureDisk’s “target-dedupe” persona and ends up being a very elegant solution with several benefits.
PDDO is a plug-in that installs on a NetBackup media server. The PDDO plug-in deduplicates data that is being backed up by that media server and sends it across the network to a PureDisk server for storage. The beauty of this option is that we were able to put a NetBackup media server in our remote site without any tape or other storage. The data is copied from the client to the media server over the LAN, deduplicated by PDDO, then sent over the WAN to the datacenter’s PureDisk server. We get the bandwidth and storage efficiencies of PureDisk while using standard NetBackup clients. A byproduct of this is that you get those PureDisk benefits without having to manage the backups in PureDisk’s separate management console. To reduce the effect of the WAN on the performance of the backup jobs themselves, and to make the majority of restores faster, we put some internal disk on the media server that the backup jobs write to first. After the backup job completes to the local disk, NetBackup duplicates the backup data to the PureDisk storage server, then duplicates another copy to the DR datacenter. This is all handled by NetBackup lifecycle policies, which became about 1000X more powerful with the 6.5.4 release. I’ll discuss the power of lifecycle policies, specifically with the 6.5.4 release, when I talk about OST later.
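The capacity-managed staging behavior is simple enough to sketch: keep recent backup images on the media server’s local disk for fast restores and expire the oldest ones only when room is needed. This is purely an illustration of the behavior described above (with made-up sizes), not how NetBackup actually implements it.

from collections import OrderedDict

class StagingArea:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.images = OrderedDict()   # insertion order doubles as age, oldest first

    def add(self, image_id, size_bytes):
        # Expire the oldest images until the new one fits.
        while self.used + size_bytes > self.capacity and self.images:
            _, freed = self.images.popitem(last=False)
            self.used -= freed
        self.images[image_id] = size_bytes
        self.used += size_bytes

stage = StagingArea(capacity_bytes=500 * 10**9)        # 500 GB of local disk (assumed)
for night in range(1, 11):
    stage.add(f"backup-night-{night}", 80 * 10**9)     # ~80 GB per night (assumed)
print(list(stage.images))   # only the most recent nights remain on local disk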
So the result of using PureDisk/PDDO/NetBackup together is a seamless solution, completely managed from within NetBackup, with all the client OS support the core NetBackup product has, the WAN efficiencies of source-dedupe, the storage efficiencies of target-dedupe, and the restore performance of local storage, but with very little storage in the remote site.
Remote Site Backup… Done!!
For the near future, I’m considering putting NetBackup media servers with PDDO on VMWare in all of the remote sites so I can manage all of the backups in NetBackup without buying any new hardware at all. This is not technically supported by Symantec, but there is no tape/SCSI involved so it should work fine. Did I mention we wanted to avoid tape as much as possible?
Incidentally, despite my love for Avamar, I don’t believe there is anything like PDDO available in the Networker/Avamar integration, and Avamar’s client OS support, while better than PureDisk’s, is still not quite as good as that of NetBackup or Networker.
Okay, so how does OST play into NetBackup, PureDisk, PDDO, and DataDomain? What do the lifecycle policies have to do with it? And what is so damned special about lifecycle policies in NetBackup 6.5.4? All that is next…
I support a very diverse environment with a mix of Windows, Netware, Linux, Solaris, and Mac clients running on standard servers as well as VMWare ESX, plus two different brands of NAS, a few iSeries systems, and an Apple XSAN thrown in for good measure. We have hundreds of applications running on these systems including SQL, Oracle, MySQL, Sharepoint, Documentum, and Agile. These applications are mostly contained in our primary datacenter but we also have a few remote datacenters for specific applications and for disaster recovery as well as a couple remote business offices.
Recently I’ve been working on a project to replace our existing backup application with a new one. We were experiencing extremely long backup windows, low throughput per client, and high backup failure rates with our existing solution, and it was time to make a change. The goal was to protect all of our systems, regardless of their location, with both an onsite backup in our primary datacenter and an offsite copy for disaster recovery purposes. Additionally, we wanted to use little or no tape. After research, lots of vendor meetings, a consulting engagement, and lengthy debate, we chose Symantec NetBackup with Symantec NetBackup PureDisk and DataDomain. This combination was chosen for several reasons, which will become clearer below.
For those of you who are not familiar with these products, here’s a brief description of each.
Symantec NetBackup is a traditional backup solution designed to move data from many clients, as fast as possible, to disk or tape. It is similar to EMC Networker, Symantec BackupExec, and any number of other backup products. NetBackup supports a wide variety of clients, NAS devices, and applications (SQL, Exchange, etc.), as well as tape libraries and disk storage for the backed-up data. Since it simply copies all of the data that resides on the client directly to the backup server, it is not particularly tuned for backing up remote offices across the WAN, but it can easily flood a local LAN during a backup.
Symantec NetBackup PureDisk is currently a separate solution from the base NetBackup product; it is designed specifically for backing up data over the WAN. PureDisk is a “source-dedupe” solution and is very similar in function to EMC’s Avamar product, with which I have a long-standing love affair. PureDisk performs an incremental-forever style of backup where only the data that changed since the last backup is copied to the backup server. It then uses deduplication technology to reduce the resulting backup dataset to an even smaller size before it gets copied across the network. The data is collected and stored (in its deduplicated form) on the backup server. With this design PureDisk saves network bandwidth as well as disk space on the backup server, making it ideal for backups across the WAN, VPN, etc. Symantec’s goal is to merge PureDisk into NetBackup as a single solution at some point, probably next year. PureDisk backup servers can replicate backed-up data to other PureDisk backup servers in deduplicated form for redundancy across sites. The downside to PureDisk is that raw throughput on a PureDisk backup server is not high enough for datacenter use, and client support is more limited than with the standard NetBackup product.
DataDomain (now part of EMC) has been making its DDR products for a while now and has been very successful (prompting the recent bidding war between NetApp and EMC to purchase the company). DataDomain appliances are “target-dedupe” devices designed to replace tape libraries in traditional backup environments like NetBackup. The DDR appliance presents itself as a VTL (virtual tape library) via SAN, a CIFS (Windows) file server, and/or an NFS (UNIX) file server, making it compatible with pretty much any type of backup system. DataDomain also supports Symantec’s OpenSTorage (OST) API, which is available in NetBackup 6.5. The DDR system receives all of the data that NetBackup copies from backup clients, deduplicates the data in real time, then stores it on its own internal disk. Because the DDR is purpose-built and has fast processors, it can process data at relatively high throughput rates; for example, a single DD690 is rated at 2.7TB/hour (about 6gbps) when using OST. The deduplication in a DDR provides disk-space savings but does not reduce the amount of data copied from backup clients. DDRs can also replicate data (in deduplicated form) to other DDRs across the LAN or WAN, which is great for offsite backups.
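As a quick sanity check on that quoted rate, converting 2.7TB/hour into bits per second (using decimal TB) lands right around 6gbps:

tb_per_hour = 2.7
bytes_per_hour = tb_per_hour * 10**12
gbps = bytes_per_hour * 8 / 3600 / 10**9
print(f"{gbps:.1f} gbps")   # ~6.0 gbps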
For an explanation of de-duplication, check out my prior post on the topic.
Two of the challenges we faced when designing the final solution were the cost per TB of DataDomain disk and the somewhat limited client OS support of PureDisk. But we had a clean slate to work from: there was no interest in utilizing any of the existing backup infrastructure aside from the two IBM tape libraries we already had. We were not required to use those libraries, but we wouldn’t be buying new ones if tape ended up being part of the new solution.
For the primary datacenter we deployed NetBackup Master and Media servers and a DataDomain DD690, and connected them to each other with Cisco 4900M 10gbps switches. We deployed a warm-standby master server plus a media server and another DD690 in our DR datacenter, but did not use 10gbps there due to the additional cost.
With this setup we covered all of the clients in our primary datacenter. Systems that have large amounts of data (like Microsoft Exchange, SAS Financials, etc.) were connected directly to the 4900M switches via 1gbps connections. Aggregate throughput of the backups during a typical night averages 400-500MB/sec, with all of the data going to the DataDomain. The Exchange servers flood their network links, pushing over 100MB/sec per server when backing up the email databases. We currently back up 9TB of data per night with 3 media servers and a single DDR in about 5 hours. Our primary bottlenecks are the VCB proxy server (we need more of them) and the aging datacenter core network, which has an aggregate throughput of barely more than 1gbps.
But what about those remote sites? What does OST really add? How do you tackle the NAS backups without resorting to tape? All that and more is coming up soon…