Tuesday, September 4, 2012

Array sizing in practice

As I mentioned in the previous post (here), array sizing has become more complex with the advent of tiering and deduplication. It's not just a function of new technology - it's also that storage is handled differently from an architectural standpoint, so even with good performance data there are still inferences that have to be made.
As a specific example, consider moving from an HDS array without HDP (one that does neither wide striping nor thin provisioning) to one that does both HDP and HDT (dynamic tiering). While performance statistics are collected per LUN, HDP and HDT both operate on 42 MB pages, and HDT promotes or demotes individual pages rather than whole LUNs. As a practical matter this means that if your busiest LUN is 200 GB in size but the majority of its activity is concentrated in several hundred MB, you can satisfy the I/O requirements with a relatively small amount of SSD and let the remaining capacity reside on nearline-SAS (or SATA) drives.
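
To put the page math in perspective, here's a quick back-of-the-envelope sketch (the 200 GB LUN matches the example above; the 400 MB of hot data is an assumption for illustration):

#!/usr/bin/perl
# Back-of-the-envelope page math. The 42 MB page size is HDP/HDT's
# real granularity; the 400 MB hot-data figure is an illustrative
# assumption, not measured customer data.
use strict;
use warnings;

my $page_mb = 42;     # HDP/HDT page size
my $lun_gb  = 200;    # the "busiest LUN" from the example above
my $hot_mb  = 400;    # assume ~400 MB of genuinely hot data

my $total_pages = int( $lun_gb * 1024 / $page_mb );    # ~4876 pages
my $hot_pages   = int( $hot_mb / $page_mb ) + 1;       # ~10 pages

printf "LUN spans %d pages; only ~%d (%.2f%%) of them need SSD\n",
    $total_pages, $hot_pages, 100 * $hot_pages / $total_pages;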

Recently we went through this exercise for a customer considering an array refresh in an environment with roughly 2,500 devices and 150 TB of usable storage, and I thought I'd share the approach and the results.

In this environment we were able to leverage Hitachi Tuning Manager to pull the last month's worth of performance data. We began by generating a report with IOPS and transfer rate on a per-LDEV basis (Logical Device Performance Details(7.1), if you're curious), then ran the data through a Perl script to total the month's IOPS and compute each LDEV's percentage of that total. We sorted the results by %IOPS and graphed %IOPS vs. %capacity using FusionCharts; the chart appears after the sketch below:
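
The script itself isn't doing anything clever. A minimal sketch of the aggregation step, assuming a CSV export of ldev_id,iops rows (the real Tuning Manager export has a different column layout, so the parsing would need adjusting):

#!/usr/bin/perl
# Minimal sketch of the aggregation step. Assumes a CSV export of
# ldev_id,iops rows, one per LDEV sample; the real Tuning Manager
# export has more columns, so the split would need adjusting.
use strict;
use warnings;

my ( %iops, $total );
while (<>) {
    chomp;
    my ( $ldev, $io ) = split /,/;
    next unless defined $io && $io =~ /^[\d.]+$/;    # skip headers/junk
    $iops{$ldev} += $io;    # sum the month's samples per LDEV
    $total       += $io;
}

# Busiest LDEVs first, with each one's share of the total IOPS.
for my $ldev ( sort { $iops{$b} <=> $iops{$a} } keys %iops ) {
    printf "%s,%.4f\n", $ldev, 100 * $iops{$ldev} / $total;
}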

[Chart: %IOPS vs. %capacity, LDEVs sorted by %IOPS]

You don't have to be a math major to see that something doesn't look right. The dogleg in the chart is there because we initially neglected to take the different LDEV sizes into account: the environment mixes a number of larger 200 GB LDEVs in with 36 GB ones, so a big LDEV can rank as busy simply by being big. Going back to the data, we divided each LDEV's %IOPS by its capacity to normalize for I/O density and generated the chart again, shown after the sketch below:
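
The normalization pass is just as simple. A minimal sketch, assuming rows of ldev_id,pct_iops,capacity_gb (the capacity column is hypothetical; it isn't part of the IOPS report):

#!/usr/bin/perl
# Sketch of the normalization pass. Assumes ldev_id,pct_iops,capacity_gb
# rows; the capacity column and its source are assumptions for this
# sketch.
use strict;
use warnings;

my @ldevs;
while (<>) {
    chomp;
    my ( $ldev, $pct, $cap_gb ) = split /,/;
    next unless defined $cap_gb && $cap_gb =~ /^[\d.]+$/ && $cap_gb > 0;
    # %IOPS per GB: a 36 GB LDEV doing 1% of the IOPS is far hotter
    # than a 200 GB LDEV doing the same 1%.
    push @ldevs, [ $ldev, $pct / $cap_gb, $cap_gb ];
}

# Re-sort by I/O density rather than raw %IOPS.
for my $row ( sort { $b->[1] <=> $a->[1] } @ldevs ) {
    printf "%s,%.6f,%.1f\n", @$row;
}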

[Chart: %IOPS vs. %capacity after normalizing for LDEV capacity]

This gives a smoother curve, and it's what we used to recommend the percentages of SSD vs. SAS vs. NL-SAS. Interestingly, 20% of the capacity accounts for almost 90% of the IOPS, making the key takeaway that a little bit of SSD makes a lot of difference.
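
For completeness, here's roughly how a density-sorted list like that can be turned into tier percentages. This is a sketch reading the previous script's output, and the 90%/99% IOPS coverage targets are illustrative assumptions rather than vendor guidance:

#!/usr/bin/perl
# Sketch of turning the density-sorted output into tier percentages.
# Input: the ldev_id,density,capacity_gb rows from the previous sketch,
# hottest first. The 90%/99% coverage targets are illustrative.
use strict;
use warnings;

my ( @rows, $total_cap );
while (<>) {
    chomp;
    my ( $ldev, $density, $cap ) = split /,/;
    next unless defined $cap && $cap =~ /^[\d.]+$/;
    push @rows, [ $density, $cap ];
    $total_cap += $cap;
}

my $cum_iops = 0;
my %tier_cap = ( 'SSD' => 0, 'SAS' => 0, 'NL-SAS' => 0 );
for my $r (@rows) {
    my ( $density, $cap ) = @$r;
    $cum_iops += $density * $cap;    # recover this LDEV's %IOPS share
    my $tier =
        $cum_iops <= 90 ? 'SSD'      # hottest LDEVs covering 90% of IOPS
      : $cum_iops <= 99 ? 'SAS'      # the next 9%
      :                   'NL-SAS';  # the long cold tail
    $tier_cap{$tier} += $cap;
}

printf "%-6s %5.1f%% of capacity\n", $_, 100 * $tier_cap{$_} / $total_cap
    for qw(SSD SAS NL-SAS);

Since HDT migrates 42 MB pages rather than whole LDEVs, sizing at LDEV granularity like this overstates the SSD tier whenever the hot pages are concentrated within LDEVs, so it errs on the safe side.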
