Infrastructure bricks are the future of enterprise data centers


I’ve always felt the current flavors of hyper-converged bricks are incomplete.  The focus is on storage and compute, with no solid consideration for networking.  In my opinion, to be considered converged infrastructure, you have to offer storage, compute, and networking.  There hasn’t been a lot of press around vendors looking to break down the walls between all three areas of service in a single device.  Of the hyper-converged vendors, Arista Networks has impressed me with its openness to being more than a network switch vendor.  And just in case you had to re-read the previous sentence: yes, I’m calling Arista a hyper-converged vendor, not a network vendor.  I had a brief stint with one of the largest managed hosting providers in the world.  It was there, in 2010, that I got my first introduction to Arista’s interesting product suite, which allowed administrators root access to the Linux-based management layer.

One of the engineers I worked with was really geeked and surprised that Arista allowed such low-level access to its hardware.  I wasn’t too excited; I was focused on getting a data center up and running, and Arista’s 10Gbps product was exciting simply because it met the requirements at a very nice price point.

Arista is hyper-converged

Since those early days, Arista has teamed with both VMware and Coho Data to enable next-generation infrastructure services.  I’m close to labeling Arista an infrastructure company.  They seem to get the future vision of today’s infrastructure.

Arista has embraced the basic concept of software-defined networking (SDN).  A great example is its partnership with Coho Data.  Coho Data’s solution leverages the ability to run network services directly on an Arista switch.  The result is the ability to aggregate storage data flows across multiple physical network ports using OpenFlow.  Coho Data’s product in essence becomes a distributed network file system, and it’s the storage code living in the network that enables this solution.  Now imagine adding storage and major compute capability to these devices.  You get what I call infrastructure bricks.
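To make the aggregation idea concrete, here is a minimal sketch of the kind of per-flow steering an OpenFlow controller could install to spread storage traffic across a group of physical ports. This is a toy model, not Coho Data’s or Arista’s actual implementation; the port numbers, the NFS match, and the hashing scheme are all assumptions for illustration.

```python
# Toy model of OpenFlow-style flow steering for storage traffic.
# Assumptions (not from the article): NFS on TCP 2049 marks "storage"
# flows, and four physical ports form one aggregated storage link.

import hashlib

STORAGE_PORT_GROUP = [1, 2, 3, 4]   # physical ports bundled together
NFS_TCP_PORT = 2049                 # storage protocol we want to steer
DEFAULT_PORT = 1                    # uplink for everything else

def select_output_port(src_ip: str, dst_ip: str, tcp_dst: int) -> int:
    """Pick a physical output port for a flow.

    Storage flows are hashed across the whole port group, so aggregate
    bandwidth scales with the number of ports, while any single flow
    always maps to the same port (avoiding packet reordering).
    """
    if tcp_dst == NFS_TCP_PORT:
        digest = hashlib.md5(f"{src_ip}-{dst_ip}".encode()).hexdigest()
        return STORAGE_PORT_GROUP[int(digest, 16) % len(STORAGE_PORT_GROUP)]
    return DEFAULT_PORT

# The same storage flow deterministically lands on the same port:
port_a = select_output_port("10.0.0.5", "10.0.1.9", NFS_TCP_PORT)
port_b = select_output_port("10.0.0.5", "10.0.1.9", NFS_TCP_PORT)
assert port_a == port_b
```

In a real deployment the controller would push equivalent match/action rules into the switch’s flow tables rather than compute this in Python, but the logic — match on the storage protocol, hash the flow across a port group — is the essence of what aggregating storage data flows with OpenFlow means.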

Infrastructure Bricks

There will come a time when there is no longer a difference between network hardware and server hardware.  In my opinion, the future data center will be composed of infrastructure bricks.  These bricks will allow you to add modules that may include storage or physical network ports.  The key enabler will be a management layer that provides converged services, perhaps a hypervisor platform such as vSphere with plug-in modules such as NSX and Coho Data.

These infrastructure bricks will enable the software-defined data center (SDDC).  From a software-defined networking perspective, we will no longer debate what to do with legacy hardware, as legacy components will connect directly to infrastructure bricks.  Granular capabilities such as layer-2 firewalling, already easy to implement in virtualized networks, could be implemented just as easily for ports connected to legacy nodes.  The ports at the edge of the infrastructure brick will just be another resource managed by the SDDC controller.
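The per-port firewalling idea above can be sketched as a tiny policy model. This is purely illustrative: the rule fields, the first-match semantics, and the `PortFirewall` API are my assumptions, not how NSX or any SDDC controller actually exposes policy.

```python
# Toy model of layer-2 firewall rules applied to physical brick ports,
# as an SDDC controller might manage them for attached legacy nodes.
# All names and rule semantics here are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class L2Rule:
    port_id: int    # physical port on the infrastructure brick
    src_mac: str    # source MAC to match ("*" matches any)
    allow: bool

class PortFirewall:
    """Evaluates L2 rules for legacy nodes plugged into brick ports."""

    def __init__(self, default_allow: bool = False):
        self.rules: list[L2Rule] = []
        self.default_allow = default_allow

    def add_rule(self, rule: L2Rule) -> None:
        self.rules.append(rule)

    def permits(self, port_id: int, src_mac: str) -> bool:
        # First matching rule wins; otherwise fall back to default policy.
        for rule in self.rules:
            if rule.port_id == port_id and rule.src_mac in ("*", src_mac):
                return rule.allow
        return self.default_allow

fw = PortFirewall()
fw.add_rule(L2Rule(port_id=7, src_mac="aa:bb:cc:00:00:01", allow=True))
assert fw.permits(7, "aa:bb:cc:00:00:01")       # whitelisted legacy node
assert not fw.permits(7, "de:ad:be:ef:00:02")   # unknown MAC dropped
```

The point of the sketch is that once the physical port is just another object in the controller’s inventory, the same rule model that protects virtual NICs can protect a legacy server’s cable.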

State of the technology

Intel has already provided the foundation for this type of technology: its Data Plane Development Kit (DPDK) allows for the creation of open hardware switches.  Facebook recently made a big splash with its open network switch.  And Arista has embraced the idea of running x86 code directly on its switches.  I believe it’s just a matter of time before a startup or established vendor introduces this disruptive platform.

Published by Keith Townsend

Now I'm @CTOAdvisor

2 thoughts on “Infrastructure bricks are the future of enterprise data centers”

  1. You are right from a theoretical perspective. However … the difference between compute & storage versus networking is that the former is about sessions and the latter includes physical connectivity. What I mean by this is that compute and storage can be switched from one brick to another. Endpoint connectivity cannot. Unless everything is going to be wireless, we will always have dedicated switch hardware, even if only for endpoints where redundancy is impossible.

    1. The static nature of physical networking does limit what you can do from a physical connectivity perspective. I’m more interested in the natural integration between all three stacks when they are in the same platform.

      I’m excited at the possibility of what can be done with overlays, silicon and management when the entire stack can be managed on the same platform.

      You are correct that there are limits to what you can do with data and I/O locality, but I think there are enough additional benefits, including the simplification of hardware infrastructure by leveraging modular bricks that become network connectivity devices just by adding plug-ins.
