According to our friends at IDC, the number of datacenters worldwide (well over 8 million now!) is expected to peak this year and then start to decline. That part about declining might seem surprising at first, because the amount of compute and storage the world consumes shows no signs of slowing, but the key trend is that datacenters are consolidating from many smaller configurations into fewer, larger, scale-out installations.

As we all know, the move to cloud services is what’s behind this change. We all want IT services that are increasingly nimble and ever-scalable, and the easiest way to get them is to let someone else build out monster datacenters and pay for just what we need. All well and good, but as we pile more into fewer datacenters, deployment at that scale demands substantial strides in efficiency.

Today’s scale-out datacenter architecture, pioneered by the likes of Amazon and Facebook, was built around HDDs, which are dirt cheap. It worked fine to load several HDDs into each of your servers without worrying much about whether they were all being used. But here’s the challenge: for a number of well-documented reasons, HDDs are out and SSDs are in, and when you deploy this same architecture with SSDs, you are wasting a lot of money.

Here’s why: when you combine compute and storage within a single chassis, you need to pick your ratio of compute (how many processors and cores) to storage (how many SSDs, and their capacity and performance), and you are then effectively stuck with that ratio for the lifetime of your datacenter. (Yes, you could pull every chassis out and add more storage, but this is rarely done.) And because you want to be sure you don’t run out of storage for that yet-to-be-written application somewhere down the road, you put an extra SSD (or two) in there, just to be safe.

The result: poor storage utilization. If you size every compute/storage node to handle your most storage-hungry application (and you really need to), then on average, a lot of your expensive SSDs are doing nothing. We’ve seen utilization rates as low as 5%, and about 40% on average.
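To make that concrete, here’s a minimal sketch in Python; the node size and per-application demand figures are made up purely to illustrate the math:

```python
# Illustrative only: a hypothetical fleet where every node carries enough
# flash for the hungriest application, even though most nodes need far less.

NODE_SSD_TB = 16                       # flash installed per node, sized for the worst case
demand_tb = [16, 2, 1, 8, 3, 1, 4, 1]  # storage each node's application actually uses

provisioned = NODE_SSD_TB * len(demand_tb)
used = sum(demand_tb)

print(f"Provisioned: {provisioned} TB, used: {used} TB")
print(f"Utilization: {used / provisioned:.0%}")  # ~28% for these made-up numbers
```

One worst-case application is enough to drag the whole fleet down, because every node has to be sized for it.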

That’s just the way it is, you say?  No, there must be a better way to do this, and there is.  It’s referred to as compute-storage disaggregation, and it’s the datacenter architecture of the not-so-distant future.

Disaggregation simply means physically deploying compute and storage resources as two independent pools, then dynamically combining them on an as-needed basis. The breakthrough is that you are always right-sizing the storage allocated to each application, never over-provisioning again. And when you achieve that, your storage utilization approximately doubles.

Now you might be thinking “so what, my storage utilization metric improves… who cares?” But think about it this way: if your storage utilization doubles, you never needed half of those expensive SSDs in the first place! Take the millions you were going to spend there and do something better with that money. This is how we make substantial strides in datacenter costs and efficiency.
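Here’s the back-of-the-envelope version of that argument; the capacity, utilization rates, and flash price below are our own hypothetical numbers, not anyone’s actual fleet:

```python
# Back-of-the-envelope savings; all numbers are hypothetical.
used_tb = 36_000       # storage the applications actually consume
util_siloed = 0.40     # utilization with SSDs locked inside compute nodes
util_pooled = 0.80     # roughly double, once storage is disaggregated
cost_per_tb = 200      # assumed $/TB for datacenter-class flash

buy_siloed = used_tb / util_siloed   # 90,000 TB purchased
buy_pooled = used_tb / util_pooled   # 45,000 TB purchased
savings = (buy_siloed - buy_pooled) * cost_per_tb

print(f"Flash you never had to buy: {buy_siloed - buy_pooled:,.0f} TB")
print(f"Savings: ${savings:,.0f}")   # $9,000,000 under these assumptions
```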

Back to disaggregation… you might also be thinking “didn’t we already do that with SANs?” Yes, that is exactly what the industry invented back in the 90s with Fibre Channel (FC). It was a good idea then, and it’s still a good way to pool and share a bunch of HDDs. But it was built for HDDs, not SSDs, and while the industry is moving over to all-flash FC and iSCSI arrays now, it’s far from optimal: SANs are built around SCSI protocols, meant to talk to HDDs, and attaching SSDs via SCSI is like driving a Ferrari through your neighborhood at 25 mph, a huge waste of performance.

SSDs work best when you speak the language invented just for them, and that’s called NVMe. There is no good reason to use an inefficient protocol like SAS or SATA to talk to fast storage. We need the modern-day equivalent of a SAN, and the good news is that it has been invented (NVMe over Fabrics, or NVMe-oF) and is in the process of being deployed.
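One concrete way to see the mismatch is command queuing. The per-protocol limits below come from the respective specifications (SAS queue depth varies by device; 254 is a commonly cited figure); the comparison itself is just an illustration:

```python
# Maximum outstanding commands per device, per the public specifications.
protocols = {
    "SATA (NCQ)": (1, 32),           # one queue, 32 commands deep
    "SAS":        (1, 254),          # one queue, typical device depth
    "NVMe":       (65_535, 65_536),  # up to 64K I/O queues, 64K commands each
}

for name, (queues, depth) in protocols.items():
    print(f"{name:>10}: {queues * depth:>13,} outstanding commands")
```

SSDs thrive on massive parallelism; protocols designed around a single spinning head simply can’t feed them.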

What does this new fabric look like? Fibre Channel is out, and Ethernet is in. But this is not your father’s Ethernet… we’re talking a high-speed, supercharged version of Ethernet that uses a technique called RDMA (Remote Direct Memory Access) to move data across a fabric in a handful of microseconds. This technique, deployed in protocols like RoCE and iWARP, is what’s needed to connect pools of low-latency SSDs back to your compute nodes without sacrificing the performance you expect from SSDs.

Hardware available in prototype form today demonstrates well under a 10% penalty for remotely accessing those SSDs in a disaggregated architecture. That’s a very small price to pay for all the millions you saved by disaggregating in the first place.
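As a sanity check on that figure, here’s the rough latency math with ballpark numbers of our own choosing, not measured results:

```python
# Ballpark latencies, illustrative only.
local_read_us = 90.0   # typical NVMe flash read, microseconds
fabric_hop_us = 6.0    # added RDMA round trip (RoCE/iWARP), "a handful of microseconds"

remote_read_us = local_read_us + fabric_hop_us
penalty = fabric_hop_us / local_read_us

print(f"Local read: {local_read_us:.0f} us, remote read: {remote_read_us:.0f} us")
print(f"Remote-access penalty: {penalty:.1%}")  # ~6.7%, under the 10% figure
```

Because the fabric hop is microseconds while the flash access is tens of microseconds, the overhead stays in the single digits. Try that math with a millisecond-class SCSI stack in the middle and the penalty swamps the drive.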

And the list of benefits continues: you can depreciate all that expensive hardware on schedules matched to each resource, say, 24 months for the compute side and 5 years for the storage side, which makes the accountants happy. You can deploy resources on an as-needed basis and adjust that deployment over time as your business needs change. Storage requirements outpacing your expectations? Just bring in a few more racks or pods of SSDs, plug them in, and put them to use. Simple and efficient.
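One way to read the lifecycle benefit in numbers, using the refresh cycles above and a hypothetical flash budget of our own invention:

```python
# Hypothetical numbers: in a converged design, SSDs are retired along with
# the server they live in, even if the flash has useful life left.
horizon_years = 10
server_cycle_years = 2      # the 24-month compute refresh mentioned above
ssd_life_years = 5          # the 5-year storage schedule mentioned above
ssd_fleet_cost = 8_000_000  # assumed cost of one full set of SSDs

converged_buys = horizon_years // server_cycle_years  # 5 sets over 10 years
pooled_buys = horizon_years // ssd_life_years         # 2 sets over 10 years

print(f"Converged: {converged_buys} SSD purchases, ${converged_buys * ssd_fleet_cost:,}")
print(f"Pooled:    {pooled_buys} SSD purchases, ${pooled_buys * ssd_fleet_cost:,}")
```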

Disaggregated configurations are the future of datacenter architecture, and it’s a matter of when, not if, they will be deployed. If you’re planning a big datacenter build-out, it’s time to think differently. And save a pile of money while you’re at it.