Blog Archives

Building Blocks – Part II: The Building Block Approach to the #PrivateCloud


Build your own Private Cloud <- Part 2 -> How many Building Blocks

Server virtualization abstracts the physical hardware from the operating systems and applications, which is essential for Cloud Infrastructures (also known as Infrastructure-as-a-Service), and that abstraction makes it ideally suited for breaking the physical infrastructure down into Building Blocks. Put simply, Building Blocks are repeatable, pre-designed mixes of storage, CPU, and memory.

There are several advantages to the Building Block approach that I’ll point out here:

  1. Rather than dropping a huge amount of capital up front on the entire infrastructure you need over the long haul, some of which will not be used at first, you can start with a smaller capital outlay today, then make multiple similarly small capital purchases only as needed. Further, when the hardware in a single Building Block reaches the end of its life (for any number of reasons), only that one Building Block will need to be refreshed at that time rather than a wholesale replacement of the entire environment.
  2. In an environment where virtualization is a new endeavor, sizing the compute, memory, and storage required is really an educated guess. As each Building Block is consumed, the real-world performance can be analyzed and adjusted for future Building Blocks to more closely match your specific workload.
  3. Building Blocks are inherently isolated, which creates natural performance and availability boundaries. This can be leveraged for web and application server farms by spreading the nodes of each farm across multiple Building Blocks. In the event of a catastrophic failure of one Building Block, whether due to a major software bug affecting the cluster or the failure of an entire storage array, nodes of the server farm not hosted on the failed Building Block will be unaffected.
  4. The list price for storage arrays and servers goes down over time. If your growth is similar to many of my customers, where full build out of the physical infrastructure will not be required until 2-3 years after the start of the project, the acquisition cost of each individual Building Block will decrease over time, saving you money overall.
  5. In many cases, and due to a variety of factors, the cost to upgrade a storage array is higher than the cost to purchase the same capacity in a new array. Upgrades also add complexity and complicate asset depreciation and warranty renewals. The Building Block approach eliminates the majority of upgrades and the associated complexity.

Each Building Block can be maintained in its original build state or upgraded independently of the other Building Blocks, so, for example, you don't have to worry about upgrading every server in your datacenter with new HBA drivers if you decide to upgrade the storage array firmware on one array. You would only need to upgrade the servers in that array's Building Block.

You may be thinking that your environment is not large enough to use a Building Block approach, but the more I worked on this project, the more I realized that Building Blocks can be adjusted to fit even very small environments. I’ll go into that a bit more later.
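
To make the idea a bit more concrete, here's a minimal sketch of how a Building Block might be described and how capacity scales as blocks are added. The sizing numbers and field names are purely hypothetical placeholders, not a recommendation:

```python
from dataclasses import dataclass

@dataclass
class BuildingBlock:
    """One repeatable unit of compute and storage (hypothetical sizing)."""
    hosts: int              # hypervisor servers per block
    cores_per_host: int
    ram_gb_per_host: int
    usable_storage_tb: float
    rated_iops: int         # what the block's storage is sized to deliver

    def capacity(self):
        return {
            "cores": self.hosts * self.cores_per_host,
            "ram_gb": self.hosts * self.ram_gb_per_host,
            "storage_tb": self.usable_storage_tb,
            "iops": self.rated_iops,
        }

# Buy blocks only as demand grows, instead of the full build-out up front.
block = BuildingBlock(hosts=8, cores_per_host=16, ram_gb_per_host=192,
                      usable_storage_tb=50.0, rated_iops=15000)

for n in (1, 2, 4):
    totals = {k: v * n for k, v in block.capacity().items()}
    print(f"{n} block(s): {totals}")
```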

Build your own Private Cloud <- Part 2 -> How many Building Blocks

Building Blocks – Part I: Build your own #PrivateCloud


Part 1 -> The Building Block Approach

As 2011 wraps up and I have a little time at home over the holidays, I’ve been reflecting on some of the customer projects I’ve worked on over the past year. Cloud computing and EMC’s vision for the “Journey to the Private Cloud” have been hot topics, and one of those projects stands out to me as something that could be used as a blueprint for others who want to deploy their own Private Cloud but may not know how to start.

I have been working with a customer that has approximately 10,000 servers supporting their business and, for all intents and purposes, had zero virtualization as recently as 2010.  Like most customers, they concluded it would be good to begin virtualizing their environment to drive up asset utilization and flexibility while bringing down costs.  In the past, they had experimented with multiple server virtualization solutions (such as VMware ESX and Microsoft Hyper-V) with limited success and had all but abandoned the idea.  A change in leadership in late 2010 brought a top-down initiative to virtualize wherever possible, but in order to instill confidence in virtualized environments within the various business units, the virtual infrastructure needed to be reliable and performant.

The customer spent the latter half of 2010 looking at their existing physical environment, finding that about 80% of the 10,000 servers were various application, file, and web servers; the remaining 20% were various database servers (mostly MS SQL).  Moving an infrastructure this large into a Private Cloud model would take several years and, further adding to the challenge, the DBA teams were particularly wary about virtualizing their database servers.  That said, the newly formed Virtualization and Cloud team set a goal of virtualizing the approximately 8,000 non-database servers over 36 months, starting with dev/test and gradually adding production and tier-1 applications until only the database servers remained on physical infrastructure.  They believe that if they prove success with virtualization during these first three years, the DBAs will be more willing to begin virtualizing their systems, and there should be more knowledge and tools in the public domain for managing virtual database instances by then.

To accomplish all of their goals, the customer leveraged some experience that individual team members had gained from prior environments to come up with a Building Block based deployment.  I worked with them to finalize the design and sizing for each Building Block and throughout the year have helped analyze the performance of the deployed infrastructure to determine how the Building Blocks can be optimized further.  Through the next several posts, I will explain the Building Block approach, detailing the benefits, some of the considerations, and some thoughts around sizing.  I hope that this information will be useful to others.  The content is mostly vendor agnostic except for some example data that uses EMC specific storage best practices.

Part 1 -> The Building Block Approach

A little something to honor the 10th anniversary of 9/11…

I remember…

I remember the day when the news broke, and I stayed awake watching the television for days.

I remember the day when airplanes filled with innocent people hit the twin towers, and the Pentagon, and nearly the White House.

I remember the day when I watched live on television, as Americans of all races and religions ran for their lives, and Heroes of all races and religions ran into the chaos to help.

I remember the day when thousands of my fellow citizens, humans, people, and peers, suddenly lost their lives.

I remember the day when two buildings fell, and the whole world rose up in disgust
against evil in any form.

I remember the day when we proved that Freedom, Heroism, and Liberty cannot be buried in the rubble of a few buildings.

What do you remember?

Defining RTO and RPO for your data…


Do you have a clearly defined Recovery Point Objective (RPO) for your data?  What about a clearly defined Recovery Time Objective (RTO)?

One challenge I run into quite often is that, while most customers assume they need to protect their data in some way, they don’t have clear-cut RPO and RTO requirements, nor do they have a realistic budget for deploying backup and/or other data protection solutions.  This makes it difficult to choose the appropriate solution for their specific environment.  Answering the above questions will help you choose a solution that is the most cost effective and technically appropriate for your business.

But how do you answer these questions?

First, let’s discuss WHY you back up… The purpose of a backup is to guarantee your ability to restore data at some point in the future, in response to some event.  The event could be inadvertent deletion, virus infection, corruption, physical device failure, fire, or natural disaster.  So the key to any data protection solution is the ability to restore data if/when you decide it is necessary.  This ability to restore is dependent on a variety of factors, ranging from the reliability of the backup process, to the method used to store the backups, to the media and location of the backup data itself.  What I find interesting is that many customers do not focus on the ability to restore data; they merely focus on the daily pains of just getting it backed up.  Restore is key! If you never intend to restore data, why would you back it up in the first place?

What is the Risk?

USA Today published an article in 2006 titled “Lost Digital Data Cost Businesses Billions” referencing a whole host of surveys and reports showing how frequently businesses experience data loss and what it costs them.

Two key statistics in the article stand out.

  • 69% of business people lost data due to accidental deletion, disk or system failure, viruses, fire or another disaster
  • 40% lost data two or more times in the last year

Flipped around, you have at least a 40% chance of having to restore some or all of your data each year.  Unfortunately, you won’t know ahead of time what portion of data will be lost.  What if you can’t successfully restore that data?

This is why one of my coworkers refuses to talk to customers about “Backup Solutions”, instead calling them “Restore Solutions”, a term I have adopted as well.  The key to evaluating Restore Solutions is to match your RPO and RTO requirements against the solution’s backup speed/frequency and restore speed respectively.

Recovery Point Objective (RPO)

Since RPO represents the amount of data that will be lost in the event a restore is required, the RPO can be improved by running a backup job more often.  The primary limiting factor is the amount of time a backup job takes to complete.  If the job takes 4 hours then you could, at best, achieve a 4-hour RPO if you ran backup jobs all day.  If you can double the throughput of a backup, then you could get the RPO down to 2 hours.  In reality, CPU, Network, and Disk performance of the production system can (and usually is) affected by backup jobs so it may not be desirable to run backups 24 hours a day.  Some solutions can protect data continuously without running a scheduled job at all.
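
As a quick back-of-the-envelope sketch (my own illustrative numbers, not a product calculation), the best achievable RPO with scheduled jobs is bounded by how long a single backup takes:

```python
def best_achievable_rpo_hours(data_gb, throughput_gb_per_hour):
    """With back-to-back scheduled jobs, RPO can't be better than one job's duration."""
    return data_gb / throughput_gb_per_hour

print(best_achievable_rpo_hours(400, 100))  # 4-hour job -> roughly a 4-hour RPO at best
print(best_achievable_rpo_hours(400, 200))  # double the throughput -> roughly 2 hours
```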

Recovery Time Objective (RTO)

Since RTO represents the amount of time it takes to restore the application once a recovery operation begins, reducing the RTO means shortening the time to begin the restore process and speeding up the restore process itself.  Starting the restore process earlier requires the backup data to be located closer to the production location; whether a tape is in the tape library, in a vault, or at a remote location, for example, affects this time.  Disk is technically closer than tape since there is no requirement to mount the tape and fast-forward it to find the data.  The speed of the restore itself depends on the backup/restore technology, network bandwidth, the type of media the backup was stored on, and other factors.  Improving the performance of a restore job can be done in one of two ways – increase network bandwidth or decrease the amount of data that must be moved across the network for the restore.

This simple graph shows the relationship of RTO and RPO to the cost of the solution as well as the potential loss.  The values here are all relative since every environment has a unique profit situation and the myriad backup/restore options on the market cover every possible budget.

Improving RTO and/or RPO generally increases the cost of a solution.  This is why you need to define the minimum RPO and RTO requirements for your data up front, and why you need to know the value of your data before you can do that.  So how do you determine the value?

Start by answering two questions…

How much is the data itself worth?  

If your business buys or creates copyrighted content and sells that content, then the content itself has value.  Understanding the value of that data to your business will help you define how much you are willing to spend to ensure that data is protected in the event of corruption, deletion, fire, etc.  This can also help determine what Recovery Point Objective you need for this data, i.e., how much of the data you can afford to lose in the event of a failure.

If the total value of your content is $1000 and you generate $1 of new content per day, it might be worth spending 10% of the total value ($100) to protect the data and achieve an RPO of 24 hours.  Remember, this 10% investment is essentially an insurance policy against the 40% chance of data loss mentioned above which could involve some or all of your $1000 worth of content.  Also keep in mind that you will lose up to 24 hours of the most recent data ($1 value) since your RPO is 24 hours.  You could implement a more advanced solution that shortens the RPO to 1 hour or even zero, but if the additional cost of that solution is more than the value of the data it protects, it might not be worth doing.  Legal, Financial, and/or Government regulations can add a cost to data loss through fines which should also be considered.  If the loss of 24 hours of data opens you up to $100 in fines, then it makes sense to spend money to prevent that situation.
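
Written out as a small sketch, using only the illustrative figures above (the $1000 content value, $1/day of new content, and the 40% annual chance of loss):

```python
def expected_annual_loss(data_value, annual_loss_probability, fraction_lost=1.0):
    """Rough expected yearly loss if the data were left unprotected."""
    return data_value * annual_loss_probability * fraction_lost

content_value = 1000.0      # total value of existing content
daily_new_content = 1.0     # value of content created per day
protection_cost = 100.0     # the 10% "insurance policy" buying a 24-hour RPO

print("Expected annual loss, unprotected:", expected_annual_loss(content_value, 0.40))
print("Cost of protection:", protection_cost)
# With a 24-hour RPO you can still lose up to one day's worth of new content:
print("Exposure inside the RPO window:", daily_new_content * 1)
```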

How much value does the data create per minute/hour/day?

Whether or not your data has value on its own, the ability to access it may have value.  For example, if your business sells products or services through a website and a database must be online for sales transactions to occur, then an outage of that database causes loss of revenue.  Understanding this will help you define a Recovery Time Objective, i.e., how long it is acceptable for this database to be down in the event of a failure, and how much you should spend trying to shorten the RTO before you hit diminishing returns.

If you have a website that supports company net profits of $1000 a day,  it’s pretty easy to put together an ROI for a backup solution that can restore the website back into operation quickly.  In this example, every hour you save in the restore process prevents $42 of net loss.  Compare the cost of improving restore times against the net loss per hour of outage.  There is a crossover point which will provide a good return on your investment.
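
And the restore-time side of the same math, again just a sketch with the numbers from this example:

```python
def hourly_net_loss(daily_net_profit):
    """Revenue-supporting system down: rough net loss per hour of outage."""
    return daily_net_profit / 24.0

daily_profit = 1000.0
per_hour = hourly_net_loss(daily_profit)
print(round(per_hour, 2))  # ~41.67, i.e. roughly $42 of net loss per hour of downtime

# Value of shaving a single outage from 8 hours down to 2 hours:
print(per_hour * (8 - 2))  # compare this against the added cost of the faster solution
```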

Your vendor will be happy when you give them specific RPO and RTO requirements.

Nothing derails a backup/recovery solution discussion quicker than a lack of requirements.  Your vendor of choice will most likely be happy to help you define them, but it will help immensely if you have some idea of your own before discussions start.  There are many different data protection solutions on the market and each has its own unique characteristics that can provide a range of RPOs and RTOs as well as fit different budgets.  Several vendors, including EMC, have multiple solutions of their own; one size definitely does not fit all.  Once you understand the value of your data, you can work with your vendor(s) to come up with a solution that meets your desired RPO and RTO while also keeping a close eye on the financial value of the solution.

Does EMC FASTCache work with Exchange?


Short Answer: Yes!

In my dealings with customers I’ve been requesting performance data from their storage systems whenever I can to see how different applications and environments react to new features. Today I’m going to give you some more real-world data, straight from a customer’s production EMC NS480.

I’ve pulled various stats out of Analyzer for this customer’s Exchange server, which has 3 mail databases totaling about 1TB of mail stored on the NS480 and connected via Fibre Channel. Since this customer is not extremely large (similar to most of our customers), they are using this NS480 for pretty much everything from VMware, SQL, and Exchange to NAS, web/app content, and Business Intelligence systems. There is about 30TB of block data and another 100TB of NAS data. FASTCache is enabled for all LUNs and Pools with just 183GB of usable FASTCache space (4 x 100GB SSDs). So in this environment, with a modest amount of FASTCache and a very mixed workload, how does Exchange fare?

Let’s first take a look at the Exchange workload itself for a 24 hour period: (Note: There were no reads from the Exchange log LUNs to speak of so I left that out of this analysis.)

Total Read IOPS for the 3 databases: (the largest peak is a result of database maintenance jobs and the smaller peaks are due to backup jobs) It’s tough to see here due to the maintenance and backup peaks, but production IO during the work day is about 200-400 IOPS. By the way, a source-deduplicating, incremental-forever backup technology, such as Avamar, could drastically reduce the IO load and duration of the nightly backup.

Total Write IOPS for the 3 databases: Obviously more changes to the database occurring during the work day.

Total Write IOPS for the 3 Log files: Log data is typically cached easily in the SP cache, so FASTCache isn’t really needed here, but I’m including it to show whether there is any value to using FASTCache with Exchange logs.

Now let’s look at the FASTCache hit ratios for this same set of data: (average of all 3 DBs)

First, the Read Activity: Here you can see that aside from the maintenance and backup jobs, FASTCache is servicing 70-90% of the Read IOPS. Keep in mind that a FASTCache miss could still be a Cache Hit if the data is in SP Cache. What’s interesting about this is that it looks like the nightly maintenance job is pushing the highest load.

And the Write Activity: Because EMC’s FASTCache implementation is a read/write cache, the benefit extends beyond just read IO. Here you see that FASTCache is servicing 60-80% of the writes for these Exchange Databases. That’s a huge load off the backend disks.

And the Log Writes: Since Log writes are usually not a performance problem, I would say that FASTCache is not necessary here, and the average 30% hit ratio shown here is not great. If you wanted to spend the time to tune FASTCache a bit, you might consider disabling FASTCache for Log LUNs to devote the FASTCache capacity to more cache friendly workloads.

All in all you can see that for the database data, FASTCache is servicing a significant portion of the user generated workload, reducing the backend disk load and improving overall performance.

Hopefully this gives you a sense of what FASTCache could do for your Exchange environment, reducing backend disk workload for reads AND writes. I must reiterate, since an SP Cache hit is shown as a FASTCache miss, an 80% FASTCache hit ratio does not mean that 20% of the IOs are hitting disk. To illustrate this, I’ve graphed the sum of SP Cache Hits and FAST Cache Hits for a single database. You can see that in many cases we’re hitting a total of 100% cache hits.

Most interesting is the backup window where SP Cache is really handling a huge amount of the load. This is actually due to the Prefetch algorithms kicking in for the sequential read profile of a backup, something CX/VNX is very good at.
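
If you want to reproduce that combined cache-hit view yourself, here's a rough sketch of the idea using pandas on data exported from Analyzer. The file name and column names are placeholders for whatever your own export contains, so treat this as a starting point rather than a recipe:

```python
import pandas as pd

# Hypothetical CSV exported from Analyzer for one database LUN;
# adjust the column names to match your own export.
df = pd.read_csv("exchange_db1_reads.csv")

total_reads = df["Read Throughput (IO/s)"].replace(0, float("nan"))
sp_hits = df["Read Cache Hits/s"]
fast_hits = df["FAST Cache Read Hits/s"]

# Fraction of read IOs served by either cache layer; a FASTCache "miss"
# that lands in SP Cache still never touches the backend disks.
df["total_cache_hit_pct"] = ((sp_hits + fast_hits) / total_reads).clip(upper=1.0) * 100
print(df["total_cache_hit_pct"].describe())
```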

2011 Memorial Day Camping – Trip and Camping Gear reviews…


For the past 13 years, I’ve organized a group camping trip on Memorial Day weekend.  For the most recent 5 years or so we’ve also made the camping trip into a celebration and fundraiser for the charity that my wife and I founded (www.ctyl.org).  Every year I look at the available Washington State Parks on the east side of the Cascades for a group site that can accommodate up to 40 people, has a body of water nearby, and generally looks nice.  We go back to sites we like periodically as well.  Prior to 2011, we had camped at the following locations, some of them several times:

  • Alta Lake State Park
  • Lake Wenatchee State Park
  • 25-Mile Creek Campground (on Lake Chelan)
  • Perrygin Lake State Park
  • Lincoln Rock State Park (on Lake Entiat)

This year I decided to try Sun Lakes/Dry Falls State Park near Coulee City and aside from some variable weather that is always a bit of a challenge on Memorial Day Weekend, this location delivered a lot.

Dry Falls

First, the scenery is amazing and the group site, which is on a little hill between the main campground and the RV sites, is situated perfectly to take in the scenery right from your tent or picnic table.  Within walking distance of the campsite, there are miles of trails, a swimming beach, a kids’ play area with climbing toys, a 9-hole golf course (Vic Meyers Golf Course), an 18-hole mini golf course, a water balloon battle facility, water skiing, fishing, and paddle boating.  With a short drive (5-30 minutes depending) you can visit Lake Lenore Caves, Dry Falls Visitor Center, Grand Coulee Dam, and several different lakes for more fishing and boating opportunities.  For the 2011 Memorial Day weekend, the weather held to around 65-75 degrees during the day, two short rain periods (30 minutes each) came through, and on Sunday morning from about midnight till 9am we experienced very strong winds (30-40+ MPH), which toppled a tent and a screened shelter and flattened several other tents.  The rest of the time it was sunny and nice.  At night it was pretty cold, so heavy blankets/sleeping bags are a must.  Weather at Sun Lakes is typically very nice during the main part of summer (July/Aug/Sep), with average temps of 85 degrees during the day.

Speaking of winds and tents, I took some pictures of various tents that we had this year and how they fared during the wind storm.  Most interesting were the two versions of the REI Hobitat 6 tent that were next to each other.

Old REI Hobitat 6 in front, new version of Hobitat 6 behind

The older version (closest to camera) could not handle the wind even with all of the guy wires staked out for support.  The newer version held up just fine without any support lines.  Our screened shelter started to fall apart because we hadn’t properly secured it but once we staked down the support wires it stood its ground.  Our new, huge, tent held up great in the wind, except for the ground stakes that were included.  We had to switch to different ground stakes which worked much better.

Coleman WeatherMaster 10 16x8 3-Room Tent (Model # 2000008678)

This Coleman WeatherMaster 10 is a special Costco-only version based on the WeatherMaster 6, I believe.  The 6 has a screened porch, while this Costco model has solid nylon panels to close the screens, making the porch into a 3rd room.  It’s 16×10 feet in size and has near-vertical walls on all 4 sides.  The hinged door is way more handy than you’d think it would be, and the tent barely moved in the strong winds.  It’s huge inside, with room for a porta-crib, dog bed, bags, and 2 queen air mattresses without trying very hard.  There were no rain leaks and it was easy to set up.  However, it’s quite heavy to pack (about 50 lbs) and it takes 20 minutes to assemble.  If you think you will see any wind, scrap the included tent stakes and buy the Coleman 9-inch ABS plastic stakes, which are about $3 for 6.  You’ll need 22 ground stakes for this tent with the rain fly.  Make sure to bring a hammer or mallet and sink the ground stakes as far down into the ground as possible and at a slight angle (top of stake pointing away from the tent).

Coleman Tent Stakes ABS 9" (Model # 2000003425)

One thing to note… this WeatherMaster 10 tent is not the same one that Coleman lists on their website or that you’d find if you Googled for it, and based on the reviews I’ve read of that tent, that’s a good thing.  The normal WM10 has angled walls on the ends that make it hard to stand up near the ends; the Costco version has much more standing room.  Costco sells this tent for $143 right now, which is a screamin’ deal.

Several years ago I picked up a Coleman Tent Light which mounts to the inside wall or ceiling of the tent using a magnet with a metal plate on the outside.  It has been really handy and works with pretty much any tent.

Coleman Tent Light (Model# 2000000032)

Last year, we also replaced our leaking air bed with the queen-sized Coleman Quickbed.  There are several versions of this with varying thicknesses, some with built-in speakers for MP3 players, and others with attached carrying bags.  Regardless of which one you choose, the primary thing you need to look for is the built-in battery-powered air pump.  You might think that you can use any pump to blow up your air mattress, and you’d be right, but having one built in to the mattress provides several benefits.

  1. You don’t need to remember where you put your pump when you want to use the air mattress.
  2. You have a valid excuse for not letting other people borrow your air pump.
  3. In the middle of the night, when the air temperature has dropped and the air mattress pressure has dropped as a result of the denser air, you can reach over your pillow, turn a knob on the mattress, and pump it right back up without leaving your sleeping bag.

This air bed is one of the best things we’ve ever purchased for camping in my opinion.

Coleman Quickbed with MP3 Speakers and Built-In 4D Pump Queen

This year my wife has been experimenting and blogging about make-ahead cooking and she decided to apply it to camping.  So instead of bringing raw ingredients and preparing everything at the campsite, all of our meals were prepared ahead of time in various ways, some cooked and frozen, others chopped and ready for cooking, etc.  This made meal time quicker, easier, and tastier and also made clean up easier.  To cook the food we brought the usual two-burner propane stove (mine is an Edmund Hillary brand I’ve had for many years) and a griddle that fits perfectly on the stove.  For cookware we brought our Magma Nestable Non-Stick Stainless Steel set.  We originally bought this set for our boat and realized its size makes it perfect for these types of camping trips.  The set is definitely not light enough to pack in a backpack, but otherwise it’s awesome.  It cleans up easy and has pretty much every type of stove top pot/pan you need.  You can get this set at many marine supply stores or online at Amazon which has it for just about $200.

Magma Nestable Non-Stick Stainless Steel Cookware (10 piece)

While picking up our tent at Costco, we also noticed the Coleman All-in-One Cooking system and decided to buy it.  It’s a stove that can also be made into a grill or griddle and it comes with a stock pot that acts like a slow cooker.  It’s actually a pretty nice setup and Costco’s price can’t be beat.  We used all the modes and it worked quite well with one exception.  The slow cooker was a tad too hot even when the burner was on low so our chili kept boiling a little when we wanted it to just stay warm.  Other than that, a pretty sweet kit.  The kit includes the Coleman Insta-Start stove plus accessories that are normally optional but the kit’s price is lower.  It appears that it is only available as a complete kit at Costco, Sams Club, and Camping World and Costco’s price was $99.

Coleman All-In-One Cooking System (Model # 2000003609)

Another Costco purchase was the pack of three LED Flashlights.  They are TechLite Lumen Master flashlights with 150 lumens of output and run on 3 AAA batteries.  Online the price seems to be about $30 for the set of 3 but Costco had them for $19.99.  These are the brightest flashlights I’ve used, LED or not, period.  Totally worth the money and they are rugged aluminum and fit in your pocket.

TechLite Lumen Master CREE LED

Well, that wraps up this post.  I hope this is helpful to anyone looking for some camping ideas.

Find your busiest LUNs Fast with Unisphere Analyzer Search


One of the features that has been added to Analyzer (Navisphere and Unisphere) in recent versions is the ability to search for specific LUNs based on criteria.  This feature is quite powerful because the criteria themselves are very flexible.  For example, you can search for all LUNs attached to a specific host, or with a specific set of characters in the LUN name.  In addition, you can search against performance metrics like Throughput, Response Time, or LUN Utilization.  This is where it gets interesting because you can look for poorly performing LUNs really quickly.  In the following example, I am going to build a search that looks for LUNs that have EX in the name (since all of my Exchange server LUNs have EX in the name) that ALSO have high LUN utilization for several polling intervals.

Once you’ve launched Analyzer and opened an Archive, click on the binocular icon in the tool bar to bring up the search dialog. 

You can choose a predefined search (a search you previously created and saved) or a new Object Based Query.  In this example we are going to build a new query, so select “Object Based Query” and choose All LUNs in the drop-down box.  If you wanted, you could narrow down the search to just Pool Based LUNs, just MetaLUNs, Component LUNs, etc.

Next we’ll define the LUN criteria by selecting the Name property, choosing Contains, and entering the “EX” value.  This will filter the search to only those LUNs that have EX in the name.  Finally we’ll set a threshold.  In this example, I’m looking for LUNs that have a LUN Utilization value greater than 90% for at least 10 polling samples.  I could add more LUN criteria and/or more thresholds to further narrow down the results with AND or OR combinations.

Optionally, you can save the query so that it will be listed in the “Predefined Query” list in the future.  Click Search and set or edit the name of the search.

After clicking OK, Analyzer will create a new tab and populate the results of the search.  Once the search is complete you can graph metrics for the LUNs like normal.  Here I’ve selected Utilization to show why this LUN matched the search criteria — note the high utilization between 2am and 7am.

You can get much more granular with your searches if you are looking for something specific, or use metrics like Response Time to look for poorly performing LUNs attached to a specific server.  It’s pretty flexible.  I started using the search feature recently and thought others might be interested in it.  Try it out and let me know what you think.

Performance Analysis for Clariion and VNX – Part 5 (FASTCache)


<< Back to Part 4 — Part 5 — Go to Part 6 >>

Sorry for the delay on this next post…  Between EMC World and my 9-month-old, it’s been a battle for time…

Okay, so you have an EMC Unified storage system (Clariion, Celerra, or VNX) with FASTCache and you’re wondering how FASTCache is helping you.  Today I’m going to walk you through how to tease FASTCache performance data out of Analyzer.

I’m assuming you have already launched Analyzer and opened a NAR archive.  One thing to understand about Analyzer stats as they relate to FASTCache is that stats are gathered at the LUN level for traditional RAID Group LUNs, but for Pool-based LUNs, the stats are gathered at the pool level.  As a result, graphing FASTCache data differs for the two scenarios.

First we’ll take a look at the overall array performance.  Here we’ll see how much of the write workload is being handled by FASTCache.  In the SP Tab of Analyzer, select both SPs (be sure no LUNs or other objects are selected).  Select Write Throughput (IO/s), and then click the clipboard icon (with I’s and O’s).

Launch Microsoft Excel and paste into the sheet, and then perform the text-to-column change discussed in the previous post if necessary.

Next create a formula in the D column, adding the values for both SPs into a single total.  We’re not going to graph it quite yet though.

Back in Analyzer, deselect the two SPs, switch to the Storage Pool Tab, right-click on the array and choose Select All -> LUNs, then Select All -> Pools.

Click on a RAID Group LUN or Pool in the tree (it doesn’t matter which one), deselect Write Throughput (IO/s), and select FAST Cache Write Hits/s.  In a moment, you’ll end up with a graph like this.

Click the clipboard icon again to copy this data and paste it into a new sheet of the same workbook in Excel.  Insert a blank column between columns A and B, then create a formula to add the values from column C through ZZ (i.e., =SUM(C2:ZZ2)).

Then copy that formula and paste it into every row of column B.  This column will be our Total FAST Cache Write Hits for the whole array.  Finally, click the header for Column B to select it, then copy (CTRL-C).  Back in the first sheet, paste the “Values” (123 icon) into Column E.

Now that we have the Total Write IOPS and Total FAST Cache Write Hits in adjacent columns of the same worksheet, we can graph them together.  Select both columns (D and E in my example), click Insert, and choose 2D Area Chart.  You’ll get a nice little graph that looks something like the following.

Since it’s a 2D Area Chart, and not a stacked graph, the FASTCache Write IOPS are layered over the Total Write IOPS such that visually it shows the portion of total IOPS handled by FASTCache.  Follow this same process again for Read Throughput and FASTCache Read Hits.  Further manipulation in Excel will allow you to look at total IOPS (read and write) or drill down to individual Pools or RAID Group LUNs.
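
If you’d rather script the copy/paste steps above than work in Excel, here’s a rough pandas equivalent. It assumes you’ve saved the two clipboard exports as CSV files with the timestamp in the first column; the file names are just placeholders:

```python
import pandas as pd
import matplotlib.pyplot as plt

# The two clipboard exports described above, saved as CSV (placeholder names).
sp = pd.read_csv("sp_write_throughput.csv", index_col=0)    # one column per SP
fc = pd.read_csv("fastcache_write_hits.csv", index_col=0)   # one column per LUN/Pool

total_write_iops = sp.sum(axis=1)       # SPA + SPB
total_fc_write_hits = fc.sum(axis=1)    # all LUNs/Pools combined

# Layer the FAST Cache hits over the total, like the 2D area chart in Excel.
ax = total_write_iops.plot(kind="area", alpha=0.4, label="Total Write IOPS")
total_fc_write_hits.plot(kind="area", alpha=0.4, ax=ax, label="FAST Cache Write Hits/s")
ax.legend()
plt.show()
```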

Another thing to note when looking at FASTCache stats…  FAST Cache Misses are IOPS that were not handled by FASTCache, but they may still have been handled by SP Cache.  So to get a feel for how many read IOs are actually hitting the disks, you’d want to subtract SP Read Cache Hits and Total FASTCache Read Hits (calculated similarly to the above example) from SP Read Throughput.  The same applies to Write Cache Misses.
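
In other words, the subtraction looks something like this; the numbers are placeholders purely to illustrate the math:

```python
# Placeholder values for a single polling interval:
sp_read_iops = 1500          # SP Read Throughput (IO/s)
sp_read_cache_hits = 800     # reads served from SP read cache
fastcache_read_hits = 500    # reads served from FAST Cache (summed across LUNs/Pools)

est_reads_hitting_disk = max(sp_read_iops - sp_read_cache_hits - fastcache_read_hits, 0)
print(est_reads_hitting_disk)  # roughly 200 IO/s actually reaching the drives
```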

I hope this helps you better understand your FASTCache workload.  I’ll be working on FASTVP next, which is quite a bit more involved.

<< Back to Part 4 — Part 5 — Go to Part 6 >>

EMCWorld 2011 Announcements


After flying into Las Vegas this morning, checking in to the Cosmopolitan Hotel with 2000+ other EMCers, and taking the bus to the Venetian, I’m finally settling in a bit.  I watched Pat Gelsinger’s keynote via recorded video on Facebook.com/EMCCorp to get caught up on what I’ve missed so far, and there was more in there than I expected.

Buried in the hour-long keynote about EMC’s portfolio and how it aligns with Big Data and Cloud Computing were several important announcements.

  1. Atmos 2.0 – Improved Performance, Compatibility with Amazon S3, and a Windows native client called GeoDrive.   Atmos already offered a client for Red Hat Linux that joins the host directly into the Atmos Cloud for access to objects/files; GeoDrive provides the same native file access to the Atmos cloud for Windows clients.  This is particularly useful for supporting legacy applications that have not been (or cannot be) modified to use the Atmos RESTful API.
  2. Isilon NL108 – The new NearLine nodes contain 108TB of disk space in each node, and this increase in node capacity has similarly increased the maximum filesystem size of an Isilon cluster from 10PB to 15PB (In a SINGLE filesystem..  And yes, that’s PetaBytes).
  3. Project Lightning – PCIe based FLASH caching adapters for Servers.  Intended to work in conjunction with FASTVP to cache data at the server and possibly distributed across servers.
  4. EMC Hadoop (aka Greenplum HD) – EMC supplied and supported Hadoop distribution which can be acquired as software or as an appliance, similar to Greenplum.     

Lots more to come.  Chuck Hollis just sat down here, so it’s time to socialize…

Performance Analysis for Clariion and VNX – Part 4


<< Back to Part 3 — Part 4 — Go to Part 5 >>

Making Lemonade from Lemons.

In the last post, we looked at the storage processor statistics to check for cache health, excessive queuing, and response time issues and found that SPA has some performance degradation which seems to be related to write IO.  Now we need to drill down on the individual LUNs to see where that IO is being directed.  This is done in the LUN tab of Analyzer.  First, right-click on the storage array itself in the left pane and choose Deselect All -> Items.  Then click the LUN tab, right-click on the top level of the tree (“LUNs”), and choose Select All -> LUNs.  Click on one of the LUNs to highlight it, then choose Write Throughput (IO/s) from the bottom pane.  It may take a second for Analyzer to render the graph, but you’ll end up with something like this…

You’ll quickly realize that this view doesn’t really help you figure out what’s going on.  With many LUNs, there is simply too much data to display it this way.  So click the clipboard button that has the I’s and O’s in it (next to the red arrow) to copy the graph data (in CSV format) into your desktop clipboard.  Now launch Microsoft Excel, select cell A1 and type Ctrl-V to paste the data.  It will look like the following image at first, with all LUNs statistics pasted into Column A.

Now we need to break out the various metrics into their own columns to make meaningful data, so go to the Data menu and click Text to Columns (see red arrow above).  Select Delimited and click Next.  Select ONLY comma as the delimiter, then Next, Next, Finish.  Excel will separate the data into many columns (one column per LUN).  Next we’ll create a graph that can actually tell us something.  First, click the triangle button at the upper left corner of the sheet to select all of the data in the sheet at once.  Then click the area chart icon, select Area, then the Stacked Area icon (see red arrows below).  Click OK.

You’ll get a nice little graph like this one below that is completely useless because the default chart has the X and Y axis reversed from what we need for Analyzer data.

To Fix this, right click on the graph, choose “Select Data”, click the Switch row/column button, and click OK.

Now you have a useful graph like the one below.  What we are seeing here is each band of color representing the Write IOPS for a particular LUN.  You’ll note that about 6 LUNs have very thick bands, and the rest of the over 100 LUNs have very small bands.  In this case, 6 LUNs are driving more than 50% of the total write IOPS on the array. Since the column header in the Excel sheet has the LUN data, you can mouse over the color band to see which LUN it represents.
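
As an alternative to eyeballing the stacked chart, a few lines of pandas against the same exported data will list the busiest LUNs directly. The file name, and the assumption that the first column is the poll timestamp, are mine:

```python
import pandas as pd

# The LUN Write Throughput clipboard data saved as CSV (placeholder file name).
luns = pd.read_csv("lun_write_throughput.csv", index_col=0)

avg_write_iops = luns.mean().sort_values(ascending=False)
top6 = avg_write_iops.head(6)

print(top6)
print("Top 6 LUNs' share of total write IOPS:",
      round(top6.sum() / avg_write_iops.sum() * 100, 1), "%")
```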

Now that you know where to look, you can go back to Analyzer, deselect all LUNs and drill down to the individual LUNs you need to look at.  You may also want to look at the hosts that are using the busy LUNs to see what they are doing.  In Analyzer, check the Write IO Size for the LUNs you are interested in and see if the size is in line with your expectations for the application involved. Very large IO sizes coupled with high IOPS (ie: high bandwidth) may cause write cache contention.  In the case of this particular array, these 6 LUNs are VMFS datastores, and based on the Thin LUN space utilization and write IO loads, I would recommend that the customer convert them from Thin LUNs to Thick LUNs in the same Virtual Pool.  Thick LUNs have better write performance and lower processor overhead compared with Thin LUNs and the amount of free space in these Thin LUNs is fairly small.  This conversion can be done online with no host impact using LUN Migration.

You can use this copy/paste technique with Excel to graph all sorts of complex datasets from Analyzer that are pretty much not viewable with the default Analyzer graph.  This process lets you select specific data or groups of metrics from a complete Analyzer archive and graph just the data you want, in the way you want to see it.  There is also a way to do this as a bulk export/import, which can be scheduled too, and I’ll discuss that in the next post.

<< Back to Part 3 — Part 4 — Go to Part 5 >>