CacheCade Pro 2.0

Figure 1: CacheCade Pro 2.0 software intelligently copies hot data to a low-latency, redundant SSD cache pool. Figure 2: A typical database application I/O profile shows read and write activity concentrated in specific disk volume regions.
Today we are evaluating the newest version of CacheCade, a storage caching system developed by LSI for use with their MegaRAID controllers. This system aims to fuse HDD capacity with SSD speed, accelerating both new and existing storage subsystems.
A previous CacheCade v1.0 review of the software illustrated some of the extreme performance gains that are to be had with the selective intelligent caching of data in storage subsystems.
That version, while extremely successful and a boon for users on a global scale, had one major limitation: it cached only read data.
From the very onset, LSI intended to add write caching to CacheCade as the product matured. Today, that goal has been realized, and both read and write data can now be cached. This is a notable achievement, as competing products (e.g. MaxIQ from Adaptec) are limited to caching read data only, putting LSI in the enviable position of offering something that other companies simply cannot.
SSDs have exploded into the data center realm as an extremely disruptive technology that is redefining storage performance and possibilities, not only for the users, but also for the manufacturers of RAID controllers and storage products.
The ever-increasing demands for data in today's burgeoning data centers leaves them with few options, either continuing the safe and beaten path of ‘inexpensive' exponential HDD array growth, or delving into the super accelerated and ‘expensive' SSD realm. This can leave many with large databases and enterprise users ‘caught in the middle' with large HDD arrays, yet needing more performance from their existing infrastructure. Cost and space constraints are always a concern, and that is the void that CacheCade aims to fill.
The inclusion of both read and write caching takes this technology further into must-have territory for users looking to actually reduce their costs while boosting performance. Many still consider SSDs far too expensive for integration into the data center, but that is one of the inherent issues that CacheCade Pro is looking to address.
CACHECADE PRO
CacheCade Pro is a new technology in the 'Tiered Storage' category that allows another tier of cache to be built on top of a current HDD subsystem. In a typical large enterprise server today, RAID controllers connect to large arrays of HDDs that serve data out to 'customers'. There is a level of cache built into the controller, but it is very small: usually around 1GB, and rarely above 4GB of RAM, used mainly to absorb high write loads. On server arrays where the amount of data can run into hundreds of terabytes, 1-4GB of cache is rather inconsequential.
Enter the CacheCade approach, which creates a supplemental layer of cache between the HDDs and the controller. This cache can be much larger, built from SSDs that enjoy far lower access times and far higher random I/O speeds than HDDs can ever hope to attain. One of the main weaknesses of an HDD base array is small random writes, and the addition of write caching in this version off-loads the most strenuous type of activity the drives are asked to perform. In the typical server usage model, mixed read/write access is by far the most demanding, so any alleviation of the write burden yields outsized results.
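The caching layer described above can be sketched as a toy write-back cache sitting in front of a slow backing store. This is a simplified illustration only, not LSI's actual algorithm; the LRU eviction policy, block granularity, and class names are all assumptions for the sake of the sketch:

```python
from collections import OrderedDict

class HotDataCache:
    """Toy write-back SSD cache in front of a slow HDD store (here just a dict)."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = OrderedDict()   # block_id -> (data, dirty_flag), LRU order
        self.hdd = {}                # backing store, only touched on miss/eviction

    def read(self, block_id):
        if block_id in self.cache:               # hit: served at SSD speed
            self.cache.move_to_end(block_id)
            return self.cache[block_id][0]
        data = self.hdd.get(block_id)            # miss: fetch from HDD, promote
        self._insert(block_id, data, dirty=False)
        return data

    def write(self, block_id, data):
        # Write-back: the SSD absorbs the small random write; the HDD only
        # sees it later, when the block is evicted.
        self._insert(block_id, data, dirty=True)

    def _insert(self, block_id, data, dirty):
        if block_id in self.cache:
            dirty = dirty or self.cache[block_id][1]
        self.cache[block_id] = (data, dirty)
        self.cache.move_to_end(block_id)
        while len(self.cache) > self.capacity:   # evict least-recently-used
            old_id, (old_data, old_dirty) = self.cache.popitem(last=False)
            if old_dirty:
                self.hdd[old_id] = old_data      # deferred flush to HDD
```

The point of the write-back design is visible in the eviction step: many small random writes to the same hot blocks collapse into one eventual HDD write.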
Let's start with an understanding of the advantages and disadvantages of each solution:
HDD ARRAYS
Pros
- Low cost: when it comes to cost per GB, HDD reigns
- Usually involves existing infrastructure
- High storage capacity
Cons
- High power consumption
- High Latency (Slow)
- Large footprint
- Produce heat/need cooling
SSD ARRAYS
Pros
- Extreme speed compared to HDD
- Low latency
- Low power requirements
- Generate low amount of heat (negligible)
- Small footprint
Cons
- Price per GB is very high, especially for SLC drives
- Low capacity
- Endurance concerns
In the end, we have a complex mixture of factors to consider. If the customer already has a large fleet of HDD-equipped servers, the infrastructure is already in place. Power, cooling, and space are all major concerns in these scenarios; however, if you are looking to speed up your systems to handle larger loads, you will need more of all three.
Switching to an all-SSD or SAS HDD setup can be cost prohibitive and very complex, especially in scenarios where downtime is absolutely not tolerated. SSDs are much easier in terms of power, cooling, and space, especially considering that one SSD can do the work of several HDDs.
The key is to accelerate the performance of your new or existing HDD arrays without incurring the massive costs and investments that a total switch to SSD would require.
That is exactly the concept behind CacheCade Pro.
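The cost trade-off above can be sketched with back-of-envelope arithmetic. All prices and capacities here are hypothetical placeholders, not quotes; the point is only the shape of the comparison:

```python
def array_cost(capacity_tb, price_per_gb):
    """Raw media cost for an array of the given capacity."""
    return capacity_tb * 1024 * price_per_gb

# Hypothetical price points: $0.05/GB HDD, $1.50/GB SSD, $2.00/GB SLC cache SSD
hdd_only  = array_cost(100, 0.05)              # 100 TB of HDD
all_ssd   = array_cost(100, 1.50)              # same capacity, all SSD
cachecade = hdd_only + array_cost(0.5, 2.00)   # HDD array + 512GB SSD cache tier

print(f"HDD only:        ${hdd_only:>10,.0f}")
print(f"All SSD:         ${all_ssd:>10,.0f}")
print(f"HDD + SSD cache: ${cachecade:>10,.0f}")
```

Even with generous assumptions for SSD pricing, a small cache tier adds a sliver to the HDD array's cost, while a wholesale SSD migration multiplies it.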
NEXT: Basic Concepts and Application
~ Introduction ~ Basic Concepts and Application ~
~ Enter Write Caching ~ Exploring TCO ~ Test Bench and Protocol ~
~ Single Zone Results ~ Overlapped Region Results ~
~ Real World Results and Conclusion ~
I'm the guy that likes to abuse my RAID arrays, described here and here. So this success with CacheCade Pro 2.0 is that much sweeter, even if it took about a year and a half to fully bake.
This vZilla project, which inspired me to start this web site on June 1, 2011 in the first place, was in jeopardy of having a failed storage strategy. Without CCPro2 providing SSD caching for my 5.6TB RAID5 array, I could have obtained similar speeds at a much lower price than my LSI 9265-8i cost. But victory was at last achieved last night, with initial tests indicating a favorable outcome.
Stay with me here to the end of this saga, where you'll see some initial benchmark tests, and rather impressive results!
Timeline, June 2011 to January 2013:
I've had the LSI 9265-8i RAID controller since the summer of 2011, with the promise of having excellent read and write speeds for my RAID5 array, explained here:
Good RAID Controllers with SSD caching and ESX support
TinkerTry.com/goodraidcontrollerswithssdcachingandesxsupport
Aug 13, 2011 07:14 pm
I soon decided to return my 9260-8i, going instead with the CacheCade Pro 2.0 capable 9265-8i. Why? Only that model supported RAID5 read+write caching:
Z68 Sandybridge Motherboard VT-d Test Matrix: Which Mobo/CPU combo works with VMware ESXi 4.1U1 VMDirectPath feature?
TinkerTry.com/vmdirectpath
Jul 14, 2011 06:56 pm
By December, I had fully baked my storage strategy, and published it at:
The reasoning behind vZilla's storage configuration
TinkerTry.com/vzillastoragereasoning
Dec 05, 2011 09:27 pm
Basically, I'd format the entire internal RAID5 array as one big 5.6TB VMFS storage device in ESXi 5. I'd then put my VMs on that array, which would hopefully perform quite well, especially upon performing procedures the 2nd and 3rd time (for caching effects to start helping).
By May, the prerequisite hardware key and firmware to support CacheCade Pro 2.0 had finally arrived:
LSI CacheCade Pro 2.0 / FastPath FAQ has arrived, and so have the 30 day trial keys!
TinkerTry.com/lsi-cachecade-pro-2-0-faq
May 02, 2012 10:59 pm
But then I had a nasty scrape, losing my array with an early firmware and an entirely unsupported OCZ Vertex 4 (so this 'loss' was entirely my own fault):
Playing with LSI CacheCade Pro 2.0: The OCZ Vertex 4 256GB SSD (VTX4-25SAT3-256G) (Caution!)
TinkerTry.com/oczvertex4
May 03, 2012 01:33 am
The actual RAID array drop can be seen in this newly published video, which is just a first-stab attempt at establishing whether my read and write speeds are now in the right ball-park:
Only the non-critical VMs that were left running at the time of the failure were affected; all other data on the array was untouched by this incident. I also need to emphasize that the OCZ Vertex is still not on the CC Pro SSD compatibility list from Dec 2012 found on the 9265-8i Resources tab, with the details on pages 26 to 29:
Interoperability Report for MegaRAID Value and Feature 6Gb/s SAS Controllers, Dec 13, 2012
Worse yet, with later firmwares I had a problem in tests with my Samsung 830 256GB SSD: I couldn't enable CacheCade Pro 2.0 for reads and writes, which defeated the whole purpose of having this card. The exact issue can be seen in this video:
So, working with LSI Technical Support for months (and sharing the above videos with LSI), we decided to try to get to the bottom of why I couldn't enable read and write caching on my particular configuration by simply replacing my LSI00290 CacheCade Pro 2.0 hardware key:
I removed the old key, which made my array Foreign, and not import-able. In hindsight, I should have done the wise thing and disabled CacheCade Pro 2.0 in the MegaRAID UI before shipping the key in for exchange, oops!
A week later, LSI had mercy on me, and kindly overnighted the new LSI00290. Thank you Sean and Jason! It arrived on January 4th, 2013. I then let it warm up to room temp, installed it, and tada, complete success, seen in the video below:
Here's the before (no CacheCade) and after (CacheCade Pro 2.0 enabled) results, using ATTO Disk Benchmark with 2GB setting, and a thick provisioned 750GB C: drive residing on this ESXi 5 VMFS formatted RAID5 array:
Drum roll please, here's the results on the 1st run (seen in the video) after enabling CacheCade Pro 2.0 in my MegaRAID VM:
'Real' benchmark tips here:
LSI MegaRAID Controller Benchmark Tips
www.lsi.com/downloads/Public/Direct%20Assets/LSI/Benchmark_Tips.pdf
November 6, 2012
which I'm frankly not as interested in as the real-world use and home-brew tests I'll be doing, which are meaningful to me in my configuration. But I do take from it that I should look at the effects of raising the queue depth, with results seen below.
Time will tell if the Samsung 256GB PM830 SSD works out. I'm using the latest firmware, which is still CXM03B1Q Release Date: 2012-01-1 from here:
www.samsung.com/us/support/owners/product/MZ-7PC256N/AM
but it isn't an exact match for the model listed on page 28 of the Interoperability Report:
Samsung 256GB PM830, MZ7PC256HAFU-000DA with firmware 1W1Q
The Intel and other well-known-brand models they do list tend to be enterprise centric (pricey), naturally. It's not 'normal' for home-build virtualization enthusiasts to use these for single-user storage labs.
But the interoperability list has grown considerably since last summer, back when I met an LSI engineer at VMworld 2012 in San Francisco and discussed this whole matter at length.
Let's hope gems like the beloved Samsung 840 Pro show up soon! Meanwhile, I may just wind up leaving my array as is and moving on to more important projects, like rebuilding my Windows Server 2012 Essentials system for daily PC backups, now that I know my array and storage strategy seem to be performing well, and are stable, so far.
Lessons learned:
1) Thin Provisioning
Thin provisioning on the RAID5 array versus thick provisioning seemed to have no effect at all on ATTO Disk Benchmark performance.
Thin provisioned results look the same as thick provisioned results (an NTFS VM on the same RAID5 array looks largely like the thick provisioned first test seen above), and multiple runs produced the same numbers each time, so results were consistent, as seen here:
2) 2GB dataset required for ATTO Disk Benchmark on caching RAID controllers
The default 256MB fits in the 1GB controller cache, greatly/artificially inflating benchmark results:
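Lesson 2 can be shown with a toy model: if the test dataset fits entirely in the controller's DRAM cache, the benchmark never touches the disks at all. The function name and the speed figures below are illustrative assumptions, not measured values:

```python
def apparent_speed(dataset_mb, cache_mb=1024, cache_mbps=6000, disk_mbps=300):
    """Effective MB/s for a test where the first cache_mb of the dataset is
    served from controller DRAM and the remainder must hit the disks."""
    cached = min(dataset_mb, cache_mb)
    uncached = dataset_mb - cached
    total_time = cached / cache_mbps + uncached / disk_mbps
    return dataset_mb / total_time

print(apparent_speed(256))    # default 256MB test fits in cache: pure DRAM speed
print(apparent_speed(2048))   # 2GB test: the disks dominate the result
```

With a 256MB dataset the model reports pure cache speed, which is exactly the artificial inflation the lesson warns about; at 2GB, the slow uncached portion dominates and the number reflects the real array.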
3) Moving from the default Queue Depth of 4 to the recommended 10 gives a considerable boost to the 16K-and-under results
This hints that a heavy, mixed workload of many VMs doing small IOs at once is where this controller will really shine, given it's the speed at 4K that really matters:
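Why a deeper queue helps small IOs can be sketched with Little's law: throughput equals in-flight commands divided by per-command latency, up to whatever concurrency the controller can exploit. The latency and concurrency figures below are hypothetical, chosen only to illustrate the shape of the effect:

```python
def iops(queue_depth, service_time_ms=0.2, max_concurrency=8):
    """Little's law sketch: IOPS = concurrency / latency, capped by how many
    commands the device can usefully keep in flight at once."""
    in_flight = min(queue_depth, max_concurrency)
    return in_flight / (service_time_ms / 1000.0)

print(iops(4))    # default queue depth of 4
print(iops(10))   # recommended queue depth of 10 (capped at 8 in this model)
```

In this toy model, going from a queue depth of 4 to 10 doubles small-IO throughput, because the extra outstanding commands hide latency until the device's internal parallelism is saturated.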
Additional references:
LSI MegaRAID CacheCade Pro 2.0 Software Evaluation, By Boston Limited
LSI 9265-8i & VMware ESXi (health monitoring, MegaRAID UI in a VM, & CacheCade 2.0)
Update Jan. 06, 2013:
Looking back, if I were buying a caching RAID controller today, things would likely only change a little.
Adaptec MaxCache 3.0 has come along to compete with LSI CacheCade Pro 2.0, for example.
One model, the Adaptec 7805Q at around $1100:
www.adaptec.com/en-us/products/series/7q
has this read and write caching, plus supercapacitor-based Zero-Maintenance Cache Protection.
Such supercapacitor protection is on the LSI 9286CV-8eCC as well, called MegaRAID CacheVault Flash Cache Protection. But that unit has the same performance as my 9265-8i, using the same LSISAS2208 dual-core 6Gb/s ROC (two 800MHz PowerPC processors). The bus speed is boosted, but with my modest needs and overall throughput, I really doubt I'd notice any difference in my speeds.
In the end, it appears I'm not missing out on some amazing speed boost at a lower price. So no regrets. And if I were starting all over today, I'd likely evaluate the Adaptec 7805Q against the 9286CV-8eCC first hand, using a bunch of benchmarks and real-world usage to determine my actual speeds seen during day-to-day home lab usage.
I'll be watching:
Anyone have any thoughts on the newer Adaptec RAID adapters?
forums.servethehome.com/showthread.php?1131-Anyone-have-any-thoughts-on-the-newer-Adaptec-RAID-adapters
I don't have many spindles, but I'm burning more watts for my five 1.5TB drives than I'd like, so I find this interesting:
blog.insanegenius.com/category/power