Monthly Archives: December 2010

Journey from Customer to Vendor: My EMC Experience…


A Year Ago Today:

2010 has been a year of significant change for me.  This time last year I was on my cell phone in the basement, troubleshooting Exchange cluster problems at Nintendo.  Since I was one of only two people managing storage, replication, and backups there, I was perpetually on-call, and as many of you who manage storage already know, the SAN (like the IP network) is the first to be blamed for application issues.

In January, many months of work designing and building a warm disaster recovery site culminated in a successful recovery test, proving the value of data de-duplication and SAN replication vs. tape backups.

A Career Change:

As February wrapped up I said goodbye to Nintendo after nearly 5 years there, and 12 total years working in internal IT, to make a significant change and become a Technology Consultant within EMC’s Telco, Media, and Entertainment division.

Moving from the customer side over to a manufacturer/vendor is a pretty big change.  I still have to deal with politics within IT projects, but the politics are different.  I still have to worry about financial concerns with IT projects, but those concerns are different.  I still work with customers, but they are external customers instead of internal customers.  For the customers I work with, I have become a knowledgeable consultant, a friend, and a scapegoat — anything they need me to be at the time.

My first 10 months at EMC have been a whirlwind tour.  In the midst of new hire training in Boston, followed by EMC World 2010, also in Boston, I began meeting with customers, attempting to learn about their business and environments.  Some customers want to tell you everything they can about their environment; others give up as little information as possible.

Phases of Transition:

I don’t know if this is typical of other people who move from being a customer to working for a vendor, but looking back I see distinct phases that I went through as I adjusted to this new career.

  • The Fire Hose Phase – For the first couple months, in addition to the new hire training and technical training, I had to learn how to use all of the internal tools, meet my customers, and try to glean as much information as possible about their IT infrastructures.  I took lots of notes and my Livescribe pen proved its worth in short order.
  • The Overcompensation Phase – My predecessor was well liked by customers and coworkers, so I set out to be as helpful as possible and build a similar relationship with my customers.  This backfired in some ways, worked in others, and eventually taught me that I really should just focus on what my customers need and the rest will fall into place.
  • The Competency Phase – As I finally settled into the new job and got comfortable, I was able to start taking on more complex requests from customers.  I had a better understanding of the capabilities of EMC products and how those capabilities really mapped to business problems.  At this point I had really figured out my role within EMC as well as with EMC’s customers.

Working for EMC:

Now, as I look back at the past 10 months at EMC, I’m amazed at what I was able to accomplish coming into a sales organization for the first time.  EMC has immense amounts of training available, and the people are all extremely helpful and forgiving.  One of the things that amazed me is how accessible everyone is for a 45,000-person company.  If I need detailed technical information on Symmetrix, I can email an engineer in Hopkinton, MA and within minutes get a very detailed reply, or in many cases a call back.  In the past 6 months, I’ve had Product Managers, VPs, Engineers, and even technical folks from other divisions on the phone, after hours, helping me get information together for my customers.

While I was getting used to my new job, my wife and I had our first child in August.  Even though I’d only been with EMC for 6 months at the time, my management was extremely helpful, covering for me longer than they really needed to and ensuring that my workload was manageable as I adjusted to being a new father.

I even achieved EMC Proven Professional certification along the way.  EMC has a way of giving you the tools to succeed, and then allowing you to make the decision on how and whether to use them.  It’s a competitive environment in a very positive way, where everyone wants everyone else to be successful as opposed to succeeding at another’s peril.

Looking Forward:

As 2010 comes to an incredible close for me, my division, and EMC as a whole, 2011 is shaping up to be great as well.  There are some changes coming on January 1st for my division that will affect me a little, but I believe they will be positive changes overall.  Next year I hope to continue honing my skills as a blogger and in my official role as Technical Consultant.  Happy Holidays and New Year to you all.

Using Cloud as a SAN Tier?


I came across this press release today from a company that I wasn’t familiar with and immediately wanted more information.  Cirtas Systems has announced support for Atmos-based clouds, including AT&T Synaptic Storage.  Whenever I see these types of announcements, I read on in hopes of seeing real Fibre Channel block storage leveraging cloud-based architectures in some way.  So far I’ve been a bit disappointed, since the closest I’ve seen has been NAS-based systems, at best including iSCSI.

Cirtas BlueJet Cloud Storage Controller is pretty interesting in its own right though.  It’s essentially an iSCSI storage array with a cache and a small amount of SSD and SAS drives for local storage.  Any data beyond the internal 5TB of usable capacity is stored in “the cloud” which can be an onsite Private Cloud (Atmos or Atmos/VE) and/or a Public Cloud hosted by Amazon S3, Iron Mountain, AT&T Synaptic, or any Atmos-based cloud service provider.

Cirtas BlueJet

The neat thing with BlueJet is that it leverages a ton of the functionality that many storage vendors have been developing recently such as data de-duplication, compression, some kind of block level tiering, and space efficient snapshots to improve performance and reduce the costs of cloud storage.  It seems that pretty much all of the local storage (SAS, SSD, and RAM) is used as a tiered cache for hot data.  This gives users and applications the sense of local SAN performance even while hosting the majority of data offsite.
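As a purely hypothetical sketch of that caching behavior (my own illustration, not Cirtas code): hot blocks live in a size-limited local tier, and the least-recently-used blocks get demoted to a cloud backend, to be pulled back and promoted again on access.

```python
from collections import OrderedDict

class CloudTieredStore:
    """Toy model of a cloud-tiered block controller: a fixed-size local
    tier for hot data, with cold blocks demoted to a 'cloud' backend."""

    def __init__(self, local_capacity: int):
        self.local = OrderedDict()   # block_id -> data, kept in LRU order
        self.cloud = {}              # stand-in for an object store
        self.capacity = local_capacity

    def write(self, block_id, data):
        self.local[block_id] = data
        self.local.move_to_end(block_id)     # most recently used
        self._evict()

    def read(self, block_id):
        if block_id in self.local:           # local hit: SAN-like latency
            self.local.move_to_end(block_id)
            return self.local[block_id]
        data = self.cloud[block_id]          # miss: fetch from the cloud
        self.write(block_id, data)           # promote back to local tier
        return data

    def _evict(self):
        while len(self.local) > self.capacity:
            cold_id, cold_data = self.local.popitem(last=False)
            self.cloud[cold_id] = cold_data  # demote the coldest block

store = CloudTieredStore(local_capacity=2)
for i in range(3):
    store.write(i, b"block-%d" % i)
print(sorted(store.local), sorted(store.cloud))  # [1, 2] [0]
```

Applications see one address space; only the eviction policy decides which blocks pay cloud latency on a read.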

While I haven’t seen or used a BlueJet device and can’t make any observations about performance or functionality, I believe this sort of block->cloud approach has pretty significant customer value.  It reduces physical datacenter costs for power and cooling, and it presents some rather interesting disaster recovery opportunities.

Similar to how Compellent’s signature feature, tiered block storage, has been added to more traditional storage arrays, I think modified implementations of Cirtas’ technology will inevitably come from the larger players, such as EMC, as a feature in standard storage arrays.  If you consider that EMC Unified Storage and EMC Symmetrix VMAX both have large caches and block-level tiering today, it’s not too much of a stretch to integrate Atmos directly into those storage systems as another tier.  EMC already does this for NAS with the EMC File Management Appliance.

Conceptual Diagram

I can imagine leveraging FASTCache and FASTVP to tier locally for the data that must be onsite for performance and/or compliance reasons, while pushing cold/stale blocks off to the cloud.  Additionally, adding cloud as a tier to traditional storage arrays allows customers to leverage their existing investment in storage, FC/FCoE networks, reporting and performance-trending tools, and extensive replication options, as well as the existing VMware integration, like SRM support and the VAAI APIs.

With this model, replication of data for disaster recovery/avoidance only needs to be done for the onsite data since the cloud data could be accessed from anywhere.  At a DR site, a second storage system connects to the same cloud and can access the cold/stale data in the event of a disaster.

Another option would be adding this functionality to virtualization platforms like EMC VPLEX for active/active multi-site access to SAN data, while only needing to store the majority of the company’s data once in the cloud for lower cost.  Customers would no longer have to buy double the required capacity to implement a disaster recovery strategy.

I’m eagerly awaiting the integration of cloud into traditional block storage, and I can see how some vendors will be able to do this easily, while others may not have an architecture that integrates as readily.  It will be interesting to see how this plays out.

EMC, Isilon, and CSX possibilities..


As you’ve no doubt heard, EMC has completed the tender offer to acquire Isilon (www.isilon.com) for a cajillion dollars (actually ~$2 billion) and some people are asking why.  From where I sit, there are many reasons why EMC would want a company like Isilon, ranging from its media-minded customer base to the technical IP, like scale-out NAS, that sets Isilon apart from the rest.

This EMC Press Release, as well as this one, and Chuck’s Blog are some of the many places to find out more about the acquisition…

I was thinking a lot about that technology as I worked on a high-bandwidth NAS project with a customer recently.  Isilon’s primary product is an IP-based storage solution that uses commodity hardware components, combined with their proprietary OneFS Operating System, to deliver scale-out NAS with super simple management and scalability.  A single Isilon OneFS-based filesystem can scale to over 10PB across hundreds of nodes.  Isilon also provides various versions of hardware that can be intermixed to increase performance, capacity, or both, depending on customer needs.  You don’t necessarily have to add disks to an Isilon cluster to increase performance.

When looking at EMC’s own product line, you’ll find that Atmos delivers similar scale-out clustering for object-based storage, while VMAX does a similar type of scaling for high-end block storage (FC, FCoE, and iSCSI), and Greenplum provides scale-out analytics as well.  Line up Isilon’s OneFS, EMC GreenPlum, EMC Atmos, and EMC VMAX, and we can now deliver massive scale-out storage for database, object, file, and block data.  With VPLEX and Atmos, EMC also delivers block and object storage federation across distance.

Isilon’s OneFS also has technologies that mirror EMC’s but are implemented in such a way as to leverage the Scale-Out NAS model.  Take FlexProtect, for example, which is Isilon’s data protection mechanism (similar to RAID) and allows admins to apply different protection schemes (N+1 a la RAID5, N+2 a la RAID6, N+3, and even N+4 redundancy) on individual files and directories.  SmartPools, which is policy based, automatically tiers data at a file level based on read/write activity across different protection types and physical nodes, similar to how FASTVP tiers data at a block level on EMC Unified and VMAX.  Both EMC and Isilon realize that all data is not equal.
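The N+1 case works conceptually like RAID5 parity: one extra XOR block lets any single lost block be rebuilt from the survivors.  Here is a toy sketch of that idea (N+2 and above require proper erasure codes such as Reed-Solomon, which this illustration omits):

```python
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def protect(stripes):
    """N+1 protection: append one XOR parity block to N data blocks."""
    return stripes + [reduce(xor, stripes)]

def rebuild(blocks, lost_index):
    """Any single lost block equals the XOR of all surviving blocks."""
    survivors = [b for i, b in enumerate(blocks) if i != lost_index]
    return reduce(xor, survivors)

# Three data blocks striped across three nodes, plus one parity block
data = [b"node-A..", b"node-B..", b"node-C.."]
blocks = protect(data)
assert rebuild(blocks, 1) == b"node-B.."   # node B lost; data survives
```

In a per-file scheme like FlexProtect, the protection level (how many redundancy blocks to add) is simply a policy decision made file by file rather than array-wide.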

Rather than just repackage OneFS with an EMC logo (which I’m sure we’ll do at first), I wonder what else can we do with Isilon’s IP…

A recent series of blog posts by Steve Todd (Information Playground) on the topic of a Common Software Execution Environment (See CSX Technology and The Benefits of Component Assembly) got me thinking about deeper integration and how CSX can accelerate that integration.

For example…

What if EMC Engineering took the portions of code from Isilon’s OneFS that handle client load-balancing, file-level automated tiering, and flexible protection and turned them into CSX components?  Those components could be dropped into Celerra and immediately add Scale-Out NAS to EMC’s existing Unified storage platforms.  Or, imagine those components running directly in VMAX engines, providing scale-out NAS simultaneously with scale-out SAN across multiple massive-scale storage systems.  Combine the load-balancing code and FlexProtect from Isilon with FASTVP in EMC Clariion to provide scale-out SAN in a midrange platform.

We could also reverse the situation and take the compression component from Clariion and Celerra, plus the federation technology in Atmos, and add both to OneFS in order to reduce the storage footprint and extend Scale-Out NAS to many sites over any distance.  Add a GreenPlum component and suddenly you have a massive analytics cluster that spans multiple sites, putting data where you need it, when you need it.

The possibilities here really are endless; it will be very interesting to see what happens over the next 12 to 24 months.

Disclaimer: Even though I am an EMC employee, I am in no way involved in the EMC/Isilon acquisition, have no knowledge of future plans and roadmaps with regard to EMC and Isilon, and am not privy to any non-public information about this topic.  I am merely expressing my own personal views on this topic.

Can you Compress AND Dedupe? It Depends


My recent post about Compression vs Dedupe, which was sparked by Vaughn’s blog post about NetApp’s new compression feature, got me thinking more about the use of de-duplication and compression at the same time.  Can they work together?  What is the resulting effect on storage space savings?  What if we throw encryption of data into the mix as well?

What is Data De-Duplication?

De-duplication in the data storage context is a technology that finds duplicate patterns of data in chunks of blocks (sized from 4-128KB or so depending on implementation), stores each unique pattern only once, and uses reference pointers in order to reconstruct the original data when needed.  The net effect is a reduction in the amount of physical disk space consumed.
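As a toy sketch of that idea (my own illustration in Python, not any particular vendor’s implementation): fixed-size chunks are fingerprinted, each unique chunk is stored once, and an ordered list of fingerprints serves as the reference pointers.

```python
import hashlib

def dedupe(data: bytes, chunk_size: int = 4096):
    """Store each unique fixed-size chunk once; keep an ordered list of
    chunk fingerprints (the 'pointers') to rebuild the original data."""
    store = {}      # fingerprint -> unique chunk bytes
    pointers = []   # ordered fingerprints referencing the store
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).hexdigest()
        store.setdefault(fp, chunk)
        pointers.append(fp)
    return store, pointers

def rehydrate(store, pointers):
    """Reconstruct the original data from the pointers and the store."""
    return b"".join(store[fp] for fp in pointers)

# 1MB of data built from two repeating 4KB patterns
data = (b"A" * 4096 + b"B" * 4096) * 128
store, pointers = dedupe(data)
assert rehydrate(store, pointers) == data
print(len(pointers), len(store))  # 256 chunks referenced, only 2 stored
```

Here 256 logical chunks collapse to 2 physical ones; real engines add variable chunking, collision handling, and metadata management on top of this skeleton.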

What is Data Compression?

Compression finds very small patterns in data (down to just a couple bytes or even bits at a time in some cases) and replaces those patterns with representative patterns that consume fewer bytes than the original pattern.  An extremely simple example would be replacing 1000 x “0”s with “0-1000”, reducing 1000 bytes to only 6.
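That example is essentially run-length encoding; a minimal sketch follows (real compressors such as zlib use much richer LZ77/Huffman schemes, but the principle of replacing patterns with shorter representations is the same):

```python
def rle_compress(data: str) -> str:
    """Toy run-length encoding: 'AAAA' becomes 'A4'."""
    if not data:
        return ""
    out = []
    run_char, run_len = data[0], 1
    for ch in data[1:]:
        if ch == run_char:
            run_len += 1
        else:
            out.append(f"{run_char}{run_len}")
            run_char, run_len = ch, 1
    out.append(f"{run_char}{run_len}")   # flush the final run
    return "".join(out)

compressed = rle_compress("0" * 1000)
print(compressed, len(compressed))  # '01000' — 1000 bytes down to 5
```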

Compression works on a more micro level, where de-duplication takes a slightly more macro view of the data.

What is Data Encryption?

In a very basic sense, encryption transforms data much like compression does, but with a different goal.  Rather than deriving new patterns from the data itself, encryption uses an external input (a key) to compute new patterns from the original ones, making the data impossible to understand if it is read without the matching key.

Encryption and Compression break De-Duplication

One of the interesting things about compression and encryption algorithms is that their output looks essentially random: even if the source data has repeating patterns, the compressed and/or encrypted version of that data almost certainly does not.  Worse, a small change early in the source data typically changes the entire remainder of the compressed or encrypted output.  So if you are using a technology that looks for repeating patterns of bytes in fairly large chunks (4-128KB), such as data de-duplication, compression and encryption both reduce the space savings significantly, if not completely.

I see this problem a lot in backup environments with DataDomain customers.  When a customer encrypts or compresses the backup data before it passes through the backup application and into the DataDomain appliance, the space savings drop, and many times the customer becomes frustrated by what they perceive as a failing technology.  A really common example is using Oracle RMAN or SQL LightSpeed to compress database dumps prior to backing up with a traditional backup product (such as NetWorker or NetBackup).

Sure, LightSpeed will compress the dump 95%, but every subsequent dump of the same database looks like unique data to a de-duplication engine, and you will get little if any benefit from de-duplication.  If you leave the dump uncompressed, the de-duplication engine will find common patterns across multiple dumps and will usually achieve higher overall savings.  This gets even more important when you are trying to replicate backups over the WAN, since de-duplication also reduces replication traffic.
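A quick way to see this effect (my own illustration, with zlib standing in for LightSpeed/RMAN compression and chunk hashing standing in for the dedupe engine): two nearly identical "nightly dumps" share almost every raw chunk, while their compressed forms share essentially none.

```python
import hashlib
import zlib

def chunk_hashes(data: bytes, chunk_size: int = 8192) -> set:
    """Fingerprint fixed-size chunks the way a dedupe engine might."""
    return {hashlib.sha256(data[i:i + chunk_size]).digest()
            for i in range(0, len(data), chunk_size)}

# Two nightly dumps of the same database: one byte changed up front
dump1 = b"".join(b"row %08d: some record payload\n" % i for i in range(20000))
dump2 = b"X" + dump1[1:]

# Dedupe the raw dumps: almost every chunk of dump2 is already stored
shared_raw = chunk_hashes(dump1) & chunk_hashes(dump2)

# Compress first, then dedupe: the compressed streams diverge from the
# change onward, so the engine finds (nearly) no duplicate chunks
shared_compressed = (chunk_hashes(zlib.compress(dump1)) &
                     chunk_hashes(zlib.compress(dump2)))

print(len(shared_raw), len(shared_compressed))
assert len(shared_compressed) < len(shared_raw)
```

Run it and the raw dumps share dozens of chunks while the compressed dumps share almost none, which is exactly the behavior DataDomain customers see with pre-compressed backups.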

It all depends on the order

The truth is you CAN use de-duplication with compression, and even encryption.  The key is the order in which the data is processed by each algorithm.  Essentially, de-duplication must come first.  The unique 4-128KB blocks that survive de-duplication still contain enough redundancy to compress well, and the resulting compressed data can then be encrypted.  Since compression, like de-duplication, has lackluster results on encrypted data, encrypt last.

Original Data -> De-Dupe -> Compress -> Encrypt -> Store
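That pipeline can be sketched end to end.  Note the XOR "cipher" below is a toy placeholder for real encryption, and none of this reflects any actual product's internals:

```python
import hashlib
import zlib

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy XOR keystream 'cipher' -- a placeholder only, NOT real crypto."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def store_pipeline(data: bytes, key: bytes, chunk_size: int = 4096) -> dict:
    """Original Data -> De-Dupe -> Compress -> Encrypt -> Store."""
    stored = {}
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        fp = hashlib.sha256(chunk).digest()              # 1. de-dupe
        if fp not in stored:
            compressed = zlib.compress(chunk)            # 2. compress
            stored[fp] = toy_encrypt(compressed, key)    # 3. encrypt
    return stored

data = (b"duplicate block " * 256) * 64   # 64 identical 4KB chunks
stored = store_pipeline(data, key=b"secret")
# One unique chunk, compressed then encrypted, is all that hits disk
print(len(stored), len(next(iter(stored.values()))))
```

Because fingerprinting happens on the raw chunk, duplicates are caught before compression and encryption ever scramble the patterns.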

There are good examples of this already:

EMC DataDomain – After incoming data has been de-duplicated, the DataDomain appliance compresses the blocks using a standard algorithm.  If you look at statistics on an average DDR appliance you’ll see 1.5-2X compression on top of the de-duplication savings.  DataDomain also offers an encryption option that encrypts the filesystem and does not affect the de-duplication or compression ratios achieved.

EMC Celerra NAS – Celerra De-Duplication combines single instance store with file level compression.  First, the Celerra hashes the files to find any duplicates, then removes the duplicates, replacing them with a pointer.  Then the remaining files are compressed.  If Celerra compressed the files first, the hash process would not be able to find duplicate files.

So what’s up with NetApp’s numbers?

Back to my earlier post on Dedupe vs. Compression; what is the deal with NetApp’s dedupe+compression numbers being mostly the same as with compression alone?  Well, I don’t know all of the details about the implementation of compression in ONTAP 8.0.1, but based on what I’ve been able to find, compression could be happening before de-duplication.  This would easily explain the storage savings graph that Vaughn provided in his blog.  Also, NetApp claims that ONTAP compression is inline, and we already know that ONTAP de-duplication is a post-process technology.  This suggests that compression is occurring during the initial writes, while de-duplication is coming along after the fact looking for duplicate 4KB blocks.  Maybe the de-duplication engine in ONTAP uncompresses the 4KB block before checking for duplicates but that would seem to increase CPU overhead on the filer unnecessarily.

Encryption before or after de-duplication/compression – What about compliance?

I make a recommendation here to encrypt data last, i.e., after all data-reduction technologies have been applied.  However, the caveat is that for some customers, with some data, this is simply not possible.  If you must encrypt data end-to-end for compliance or business/national-security reasons, then by all means, do it.  The unfortunate byproduct of that requirement is that you may get very little space savings on that data from de-duplication, both in primary storage and in a backup environment.  This also affects WAN bandwidth when replicating, since encrypted data is difficult to compress and accelerate.

Small Innovations Can Make a Big Difference


On Friday, my local gas/electric utility decided it was time to replace the gas meter and 40-year-old steel gas pipe between the street and my house.  I had a chance to chat with the guys a bit while they were working, and I learned about a small little innovation that not only makes their work easier, it provides better uptime for natural gas customers, and most likely saves lives.

It all started when I looked out the window and saw the large hole they’d jackhammered into my driveway.  At first I was a little worried about the jackhammer hitting the gas line, but they do 2-3 of these a day, so I figured they must know what they are doing.  Then I saw them welding, right in the hole!  And it turns out that they were literally welding ON the gas line.  So I naturally asked, “so you had to turn off the gas to the whole street to do this?” to which they replied “nope, the gas is still flowing in there.”  Now some of you may know how this is achieved without large fireballs in peoples’ front yards, but I was a little stunned at first.  So they explained the whole deal.  It turns out that the little innovation that allows them to weld a new pipe onto an in-service gas line is called a hot tap.  Actually, a hot tap is made with several components: a flange, a valve, a few other accessories, and a hot tapping machine.

I couldn’t find a picture that showed the same hot tapping valve they used on my gas line but the following picture from http://www.flowserve.com gives you an idea of what it does…


Flowserve "NAVAL" Hot Tapping Valve


One line shows a completed hot tap in service, and the other shows the hot-tapping tool inserted with a hand drill to drive the cutter.

Basically, they weld the valve onto an existing pipe, along with a flange to better match the contours and add some “meat” to the fitting.  In the case of this picture, the hot tapping machine is inserted through the valve, sealing the opening in the valve itself, and the drill turns a magnetic cutter to cut into the working gas line.  The magnetism helps to retrieve the metal shavings from the cut.

Once the hole is complete, the hot tapping machine is backed out a bit, the valve is closed, and the machine is completely removed.  After that, you can attach a new pipe to the valve and open it up whenever you are ready.

The Pilchuck crew that was working on my line had an even fancier valve with a knob on top and a built-in cutter.  So after they welded it on, they just screwed it down to cut the hole and unscrewed it once they had attached the branch line.  Pretty slick, since they didn’t need a separate tool to do the cut.

I was thinking about this whole process the next day and it occurred to me just how dangerous it would be to tap live gas lines.  And how the idea of a hot tap is really pretty simple, but it probably saves lives.  It also keeps service up for every other customer who shares the main pipeline while maintenance is performed, and I’m pretty sure it speeds up the work significantly over shutting down a gas line to cut it and inserting a T-fitting.

While I was looking for a suitable picture, I found out that they do this same thing with large continental pipelines as well.  There are companies that will hot tap pipes over 100″ in diameter.

This is totally unrelated to storage but I thought it was interesting.

Compression better than Dedup? NetApp Confirms!


The more I talk with customers, the more I find that the technical details of how something works are much less important than the business outcome it achieves.  When it comes to storage, most customers just want a device that will provide the capacity and performance they need, at a price they can afford, and it had better not be too complicated.  Pretty much any vendor trying to sell something will attempt to make their solution fit your needs even if they really don’t have the right products.  It’s a fact of life: sell what you have.  Along these lines, there has been a lot of back and forth between vendors about dedup vs. compression technology and which one solves customer problems best.

After snapshots and thin provisioning, data reduction technology in storage arrays has become a big focus in storage efficiency lately; and there are two primary methods of data reduction — compression and deduplication.

While EMC has been marketing compression technology for block and file data in Celerra, Unified, and Clariion storage systems, NetApp has been marketing deduplication as the technology of choice for block and file storage savings.  But which one is the best choice?  The short answer is: it depends.  Some data types benefit most from deduplication while others get better savings with compression.

Currently, EMC supports file compression on all EMC Celerra NS20, 40, 80, 120, 480, 960, VG2, and VG8 systems running DART 5.6.47.x+ and block compression on all CX4 based arrays running FLARE30.x+.  In all cases, compression is enabled on a volume/LUN level with a simple check box and processing can be paused, resumed, and disabled completely, uncompressing the data if desired.  Data is compressed out-of-band and has no impact on writes, with minimal overhead on reads.  Any or all LUN(s) and/or Filesystem(s) can be compressed if desired even if they existed prior to upgrading the array to newer code levels.

With the release of ONTAP 8.0.1, NetApp has added support for in-line compression within their FAS arrays.  It is enabled per-FlexVol and, as far as I have been able to determine, cannot be disabled later (I’m sure Vaughn or another NetApp representative will correct me if I’m wrong here).  Compression requires 64-bit aggregates, which are new in ONTAP 8, so FlexVols that existed prior to an upgrade to 8.x cannot be compressed without a data migration, which could be disruptive.  Since compression is inline, it creates overhead in the FAS controller and could impact performance of reads and writes to the data.

Vaughn Stewart, of NetApp, expertly blogged today about the new compression feature, including some of the caveats involved, and to me the most interesting part of the post was the following graphic he included showing the space savings of compression vs. dedup for various data types.

Image Credit: Vaughn Stewart, NetApp

The first thing that struck me was how much better compression performed over deduplication for all but one data type (Virtualization will usually fare well because in a typical environment there are many VMs with the same operating system files).  In fact, according to NetApp, deduplication achieves very little savings, if any, for the majority of the data types here.
 
The light green bar indicates savings with both dedupe AND compression enabled on the same dataset.  In 5 out of 9 cases, dedup adds ZERO savings over compression alone.  I can’t help but wonder why anyone would enable dedup on those data types if they already had compression, since both features use storage array CPU resources to find and compress or dedup data.  I am aware that in some cases, dedup can improve performance on NetApp systems due to dedup-aware cache, but I also believe that any performance gain is directly related to the amount of duplication in the data.  Using this chart, virtualization is really the only place where dedup seems particularly effective and hence the only place where real performance gains would likely present themselves.
 
The challenge for NetApp customers will be getting their data into a configuration that supports compression due to the 64-bit aggregate requirement, lack of an easy and non-disruptive LUN migration feature (DataMotion appears to only support iSCSI and NFS and requires several additional licenses), and no way to convert an aggregate from 32-bit to 64-bit.  Once compression has been enabled, if there is truly no way to disable it, any resulting performance impact will be very difficult to rectify.
 
On the other hand, any EMC customer with current maintenance can upgrade their NS or CX4 array to newer versions of DART or FLARE, and compression can be enabled on any existing data after the fact.  If performance becomes an issue for a particular dataset once compressed, the data can be uncompressed later.  Both operations are completely non-disruptive and run in the background.  While block compression only works on LUNs in a virtual pool, as opposed to a traditional RAID group, enabling compression on a normal LUN will automatically migrate the LUN into a virtual pool, perform zero-page reclaim, followed by compression, and the entire process is completely non-disruptive to the application.  Oh, and compressed data can still be tiered with FASTVP across SSD, FC, and SATA disk and/or benefit from up to 2TB of FASTCache.
 
I admit that there is a place for deduplication as well as compression in reducing the footprint of customer data.  However, based on what I’ve seen in my career as an IT professional, and with my customers in my current role at EMC, there are more use cases for compression than for deduplication when it comes to primary data, whether SAN or NAS.  Either way, if I were using a new technology for the first time on a particular data set, whether compression or deduplication, I would definitely want a backout plan in case the drawbacks outweigh the benefits.