The Distributed Data Center


Travis L Brown, Director of Technology,
Hutto ISD

The data center as we know it, with its massive, warehouse-style space, is evolving. Many corporations and educational institutions are finding that the precious space they use to house their critical data is changing, both physically and in logical design; for many, real estate is no longer the scarce resource it once was. Compute and storage density has reached a point where established IT departments are no longer clamoring to build out their infrastructure across endless rows of servers. Instead, for many of these organizations, the data center is shrinking. Big-data companies are, no doubt, exempt from this shift, but it is not uncommon to find the average technology support department reducing the number of racks it had previously consumed.
With storage density at its current levels, the arrival of solid-state disks, and the widespread use of virtual servers, it's fair to ask what the data center may evolve into, and quickly. Software-defined networking (SDN) products like Cisco's ACI platform and VMware's NSX have the potential to push the data center further still, being policy- and object-driven rather than built on the rigid command sets we are all so used to. CPU and RAM are being further maximized and optimized through virtualization, and products like VMware's vSAN have the potential to affect the storage market in just as profound a way.
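
To make the policy-driven idea concrete, here is a minimal, hypothetical sketch in Python. It is not the ACI or NSX API; the tier names, the contracts, and the render_rules function are invented for illustration. The point is that intent is declared once and the low-level rules are derived from it, rather than typed device by device.

# Illustrative sketch only -- not the Cisco ACI or VMware NSX API.
# It models the idea of policy/intent-driven networking: declare which
# application tiers may talk to each other, and let a controller derive
# the low-level rules instead of hand-typing ACLs on every device.

from dataclasses import dataclass

@dataclass
class Contract:
    consumer: str   # tier that initiates traffic
    provider: str   # tier that receives traffic
    port: int       # TCP port allowed

# Declarative intent: the "what", stated once.
policy = [
    Contract(consumer="web", provider="app", port=8443),
    Contract(consumer="app", provider="db", port=5432),
]

def render_rules(contracts):
    """Translate intent into the device-level rules a controller would push."""
    rules = []
    for c in contracts:
        rules.append(f"permit tcp {c.consumer}-epg {c.provider}-epg eq {c.port}")
    rules.append("deny ip any any")  # anything not declared is blocked
    return rules

if __name__ == "__main__":
    for rule in render_rules(policy):
        print(rule)

Change the declared intent and the derived rules change everywhere at once; that, in miniature, is the shift from command sets to policy.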
With incredible backbone speeds and through wise use of SDNs, virtual storage file systems, and virtual compute, the data center as we know it is sure to cease to exist in its current form for many organizations. It is not unreasonable to imagine the massive warehouse data centers, and their accompanying disaster recovery sites, simply dying off as more aggressive, cutting-edge technology departments implement designs that treat the traditional data center as an array of appliances, spreading compute and storage throughout multiple closets along the organization's hallways. This design will likely be driven by the goal of moving critical systems closer to disaster resilience rather than simple disaster recovery; for some, the use of cloud hosting as a disaster recovery site already marks the infancy of this shift.

IT departments that make this leap will see a reduction in cooling and electricity costs, no longer needing specialized air filtration or precise cooling and humidity control systems. The companies they serve will be able to reclaim the valuable real estate previously dedicated to server farms, and they will be less likely to find themselves with a single physical point of failure or a single physical attack location. The flexibility and agility of a distributed data center design allow for growth while potentially increasing the resiliency of data center services, much like any other N+1 design. And a design like this, built with equipment already on the market, simply does not require specialized power infrastructure.
It's true that this model does not fit every company right now; every organization has its own use cases and needs, and quality support staff will quickly be able to see where and how the concept applies. I imagine the first adopters will most likely be educational institutions, both higher-ed and K-12. Educational organizations generally face less scrutiny over what is considered critical than profit-driven organizations do, and consequently are best positioned to absorb the risk. They are often composed of multiple buildings in close proximity across a campus, and they are usually well connected with plenty of fiber, close enough to meet the low-latency requirements such a design would demand given current storage constraints. There are products on the market now that take up no more than 4U of rack space and are modular in design, able to house networking components, storage, and servers. This could lead to a brand new implementation: one of these blades serving as the router, one as the switch and security component, another as storage, and another as compute. All of it can be managed through virtualization, so the entire set of services within a pod can be swiftly migrated to another pod, as sketched below.
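
The following is a rough sketch of that pod idea, again in Python rather than any vendor's actual migration API. The pod names, capacities, and latency figures are hypothetical; the point is simply that when one closet pod fails, its virtual services are re-placed onto surviving pods that have spare capacity and sit within an acceptable latency of the failed location.

# Illustrative sketch only -- not a vSphere/vMotion or vSAN API. It models
# the distributed "pod" design: each closet pod hosts a bundle of virtual
# services, and a failed pod's services are moved to nearby pods with room.

from dataclasses import dataclass, field

@dataclass
class Pod:
    name: str
    capacity_vms: int
    latency_ms: dict                      # round-trip latency to other pods (hypothetical)
    services: list = field(default_factory=list)

    def free_slots(self):
        return self.capacity_vms - len(self.services)

def evacuate(failed, pods, max_latency_ms=1.0):
    """Move a failed pod's services onto the closest eligible surviving pods."""
    candidates = [p for p in pods
                  if p is not failed and failed.latency_ms.get(p.name, 99) <= max_latency_ms]
    candidates.sort(key=lambda p: failed.latency_ms[p.name])
    for svc in list(failed.services):
        target = next((p for p in candidates if p.free_slots() > 0), None)
        if target is None:
            raise RuntimeError(f"no capacity left for {svc}")
        target.services.append(svc)
        failed.services.remove(svc)
        print(f"migrated {svc}: {failed.name} -> {target.name}")

if __name__ == "__main__":
    a = Pod("closet-A", 8, {"closet-B": 0.4, "closet-C": 0.7},
            ["router-vm", "firewall-vm", "file-server-vm"])
    b = Pod("closet-B", 8, {"closet-A": 0.4, "closet-C": 0.5})
    c = Pod("closet-C", 8, {"closet-A": 0.7, "closet-B": 0.5})
    evacuate(a, [a, b, c])

In practice a hypervisor cluster feature such as live migration or high availability would perform the actual placement; the sketch only captures the capacity-and-latency decision the distributed design depends on.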
I look forward to the creative and unique ways organizations will implement optimized infrastructure designs in our hyper-connected world. The distributed data center is coming, in one form or another, and we should all be excited for the opportunities this new design will bring.
