Monthly Archives: May 2009

Capacity vs. Performance: Why do I have so much free space on my SAN and why can’t I use it?


In the past, in the days of 2GB, 4GB, 9GB, 18GB, and even 36GB drives, when you were tasked with purchasing and configuring hard drives for an application, you were given the amount of storage space required for the application and that was pretty much good enough. If you or your company were more organized, you'd also analyze the application's performance requirements (i.e., IOPS, read/write ratio, bandwidth, etc.) to make sure you had enough spindles behind it. More often than not, the capacity requirement called for more disks than the performance requirement did, so you'd build your RAID group and fill it all the way up.

Fast forward a few years: 72GB drives are no longer available, 146GB drives are getting close to end-of-sale, and 300GB, 400GB, and 600GB drives, plus terabyte SATA drives, are available for almost any storage system or server. The problem is that as these hard drives get bigger, they aren't getting any faster. In fact, SATA drives are relative newcomers to the enterprise space and are slower than traditional 10,000 and 15,000 RPM SCSI drives, yet they hold terabytes of data. Today, performance is the primary requirement and capacity is secondary, because in general you need more spindles to meet an application's performance requirement than to meet its capacity requirement.

As an example, let’s take a 100GB SQL database that requires 800 IOPS at 50% Read/50% Write.

Back in the day, with 18GB drives you'd need 12 disks to provide ~100GB of usable space in RAID10. Using 10K RPM SCSI-3 drives at about 140 IOPS per disk, that gives you 1680 IOPS of raw back-end capability. Accounting for the RAID10 write penalty (every host write costs two back-end writes), a 50/50 workload nets an effective ~1100 IOPS, more than enough for the 800 IOPS requirement.
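
To make that arithmetic explicit, here's a minimal sketch of the effective-IOPS calculation (Python, purely illustrative); the 140 IOPS per disk and the RAID10 write penalty of 2 are the same assumptions as in the example above.

```python
# Rough effective front-end IOPS for a RAID group (illustrative sketch).
def effective_iops(disks, iops_per_disk, write_penalty, read_ratio):
    raw = disks * iops_per_disk                       # total back-end IOPS
    # Each host read costs 1 back-end I/O; each host write costs write_penalty.
    cost_per_host_io = read_ratio + (1 - read_ratio) * write_penalty
    return raw / cost_per_host_io

# 12 x 18GB 10K drives in RAID10 (write penalty 2), 50/50 read/write:
print(effective_iops(disks=12, iops_per_disk=140, write_penalty=2, read_ratio=0.5))
# -> 1120.0, i.e. roughly the ~1100 effective IOPS quoted above
```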

Today, a single 146GB 10K disk can provide all the capacity required for this database, but you still need at least 10 disks to sustain the 800 IOPS workload with RAID10, or 15 disks with RAID5 (where every host write costs four back-end I/Os). The capacity of a RAID10 group with ten 146GB drives is approximately 680GB, leaving 580GB of free (or slack) space in the RAID group. The trouble is that you can't use that space for any of your other applications, because the SQL database needs all of the performance available in that RAID group. Change it to RAID5, or use the newer, larger disks, and it's even worse. Switching to 15K RPM drives helps, but it's only about a 30% increase in performance.
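
Running the math the other way shows where the 10- and 15-disk figures come from: work out the back-end IOPS (host IOPS weighted by the read/write mix and the RAID write penalty), then divide by the per-disk IOPS. The sketch below assumes the same ~140 IOPS per 10K disk and the usual write penalties of 2 for RAID10 and 4 for RAID5.

```python
import math

# Spindles needed to satisfy a host workload, given a RAID write penalty (sketch).
def disks_needed(host_iops, read_ratio, write_penalty, iops_per_disk, even=False):
    backend = host_iops * (read_ratio + (1 - read_ratio) * write_penalty)
    disks = math.ceil(backend / iops_per_disk)
    if even and disks % 2:            # RAID10 mirroring needs an even disk count
        disks += 1
    return disks

# 800 IOPS at 50/50 read/write on ~140 IOPS 10K drives:
print(disks_needed(800, 0.5, write_penalty=2, iops_per_disk=140, even=True))  # RAID10 -> 10
print(disks_needed(800, 0.5, write_penalty=4, iops_per_disk=140))             # RAID5  -> 15
```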

If you are managing SAN storage for a large company, your management probably wants to see high disk capacity utilization on the SAN to help justify the cost of storage consolidation. But as individual disk sizes get larger, it becomes increasingly difficult to keep capacity utilization high, and for many companies it ends up dropping. Thin Provisioning and De-Duplication technologies are all the rage right now as storage companies push their wares, and customers everywhere are hoping those buzzwords can somehow save them money on storage costs by increasing capacity utilization. Be aware, though: if your slack space exists because of performance requirements, those technologies won't do you any good and could even hurt you. They are useful for certain types of applications, something I'll discuss in a later post.

So what do you do? Well, there's not a lot you can do except educate your management on the difference between sizing for performance and sizing for capacity. They should understand that slack space is a byproduct of ever-increasing hard disk drive sizes. Some vendors are selling high-speed flash (SSD) disks for their SAN storage systems, which can be 30-50x faster than a 15K RPM drive with similar capacities. But flash carries a significant cost that only makes sense if you can use most of the IOPS available in each disk. In the next installment I'll discuss tiered data techniques and how they can overcome some of these problems, increasing performance in some cases while also increasing utilization rates.

Underwater Photography


Over the past few years I've started to travel a bit more, and some of that travel has been to tropical locales like Maui, Mexico, and most recently French Polynesia. Of course, any time you travel to a tropical place, one of the obligatory tourist things to do is snorkel.

In February 2007, I went to Maui for a friend's wedding and took my Nikon D70 and Canon PowerShot Elph SD100 along. I'd had the SD100 for a few years and it was pretty beat up, but it still worked. So I picked up a Canon underwater case for it prior to the trip, hoping I could get some underwater photos while snorkeling. The case worked very well, but I found that the photos lacked color, mostly looking blue. I experimented with a few things, did some research, and found that red is the first color to be filtered out in water. So I boosted the red on a bunch of the photos, and that helped a lot.

During our honeymoon in April 2009, I had my newer Nikon D90 and a Canon PowerShot Elph SD1100 (which replaced the SD100 after it finally stopped working reliably). I picked up a new Canon underwater case for the SD1100 prior to the trip (the cases are camera-model specific) and again took it snorkeling. The water in French Polynesia is a little clearer and the sun a little brighter than in Maui, so the colors were better. But I still ran into color problems: too much blue and not enough red. This time, however, boosting the red didn't seem to help; in fact, it hurt the images.

I finally figured out the trick, though: the seldom-used custom white balance option. Using Adobe Lightroom 2 (my current favorite), or most any RAW photo workflow tool (Bibble, Aperture, Nikon NX), you can adjust a photo's white balance manually or use a picker to choose an area that should be white and have the software determine the proper white balance automatically. I snorkeled in five different places during the trip, and each had unique lighting and differing depths, which changed the result.

Methodology:
First of all, I did not use the flash, since it highlighted floating bubbles and sand in the water. For each unique snorkel location, I tested the white balance picker tool on 3 different photos, using various points in the images that should be white. I noted the Temp and Tint that Lightroom determined from each point, picked values that seemed to be about the average of all the tests, and then applied those final Temp/Tint values to every photo in the set. I also increased the Clarity and Vibrance values on all of the photos, since that helped bring the fish and coral out from the background. I did not adjust saturation or curves in any way.
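
If you'd rather not average the picker's readings in your head, the same step looks like this in a quick Python sketch (the sample numbers below are made up for illustration; in practice I just noted the real readings and typed the averages back into Lightroom):

```python
# Average the (Temp, Tint) readings sampled from a few photos per snorkel spot.
# These sample numbers are placeholders, not the actual values from my photos.
samples = {
    "Spot A": [(30, 52), (34, 55), (32, 56)],
    "Spot B": [(10, 34), (12, 36), (11, 35)],
}

for spot, readings in samples.items():
    temps, tints = zip(*readings)
    print(f"{spot}: Temp {sum(temps) / len(temps):+.0f}, "
          f"Tint {sum(tints) / len(tints):+.0f}")
```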

Examples:
Rangiroa, Tuamotu (salt mixing cloudiness, sunny, lots of shadows, 20+ feet deep)
    Temp +32, Tint +54
Fakarava, Tuamotu (very clear water, lots of sun, 5-6 feet deep)
    Temp +11, Tint +35
Bora Bora, Society Islands #1 (sand mixing cloudiness, cloudy day, 3-4 feet deep)
    Temp +23, Tint +60
Bora Bora, Society Islands #2 (very clear water, cloudy day, 40+ feet deep)
    Temp +39, Tint +73
Taha’a, Society Islands (very clear water, lots of sun, 4-5 feet deep)
    Temp +11, Tint +35

The results turned out pretty well in my opinion, and you'll note that in similar conditions (Fakarava vs. Taha’a, for example) the white balance holds pretty constant. Keep in mind that the original photos were JPGs from a point-and-shoot camera, not RAW files from a DSLR like the D90, where making white balance changes is easier.

For examples of all the underwater photos I took with the old and new methods, check out my Flickr Underwater set: http://www.flickr.com/photos/techsavvy/sets/72157594554826402
