Network Virtualization: ACI vs. NSX is about ASIC vs. Software

Network Laydown

I’ve been thinking a bit more about the Cisco ACI vs. VMware NSX debate.  It raises a larger question.  It’s not just ACI vs. NSX but software vs. Application-Specific Integrated Circuits (ASICs).  When I started programming back in high school, I had a theory about x86 hardware: there was never such a thing as underpowered hardware, only poor or uncreative programming.

As a teenage programmer, I had a state-of-the-art Tandy 1000 with a 7 MHz processor.  I got into writing animation software on the system.  Before I understood what dedicated graphics cards were all about, I would chalk the underperformance of my programs up to my bad programming.  I always found ways to increase the performance of my applications.  If the GW-BASIC interpreter created too much overhead, then I learned to write straight to memory.  I never blamed the hardware for application performance.  I didn’t understand why you couldn’t overcome the limitations of the hardware with better software. 

I also never understood why console games were better at any stage than PC games.  If you had blazing-fast x86 hardware, you just needed to be a bit creative in your programming.  Then I grew up and became a network engineer.  I learned quickly that an ASIC could perform a dedicated operation much faster than a general-purpose piece of compute hardware.  ASICs are efficient at a designed set of tasks.  So that’s why I couldn’t just use my Novell NetWare 3.11 server as a router for an entire network segment in 1998.

The main disadvantage of ASICs is that they can’t be easily upgraded to add capability.  I couldn’t take my dual-port Cisco 2600 router and add deep packet inspection.  At least, it wouldn’t be as good at that specific task as my NetWare 3.11 server.  I’d have to buy a new router to gain the additional functionality at an acceptable performance level. 

This is basically the argument of Cisco ACI vs. VMware NSX.  Cisco’s consistent argument is that networking, even when virtualized, is a very specific set of tasks; therefore, the ASIC approach is always better than the software approach.  VMware’s argument is that software is much more flexible and allows you to do more.  Additionally, VMware argues that, just like x86 virtualization, x86 virtual networking is good enough for almost any use case.  In other words, fast is fast enough.  VMware’s argument is partially that ASICs aren’t that much faster, or at least not so much faster that you’d sacrifice the flexibility.  VMware’s other argument is that they can partner with dedicated hardware providers to achieve the performance needed for most use cases.

Well, now you’ve seen my bias.  I’m a big believer in software, but that doesn’t mean I’m convinced that VMware’s NSX is the way to go for network virtualization.  Once Cisco gets their solution off the whiteboard and into the hands of real customers, we can begin the real debate. 

Published by Keith Townsend

Now I'm @CTOAdvisor

17 thoughts on “Network Virtualization: ACI vs. NSX is about ASIC vs. Software”

  1. Cisco want to sell hardware, ergo the ASIC argument makes sense to them. They are weak at software development and software user experience. Their IOS is solid, no doubt, but it’s so low-level that it’s more analogous to machine code.

    Hardware speed isn’t a differentiator for 80% of businesses (I invented that figure). It is absolutely key to service providers, like Telcos. The exceptions like Google, AWS and Facebook don’t count in this space, because Cisco and VMware don’t sell to them since those guys invented their own stuff.

    So Cisco’s future addressable market is shrinking slowly to Telcos, who care about the way Cisco does things, and VMware have just started a 5 to 10 year assault on the enterprise (which is Cisco’s current bread and butter). AWS are assaulting it too.

    I don’t know what Cisco will look like in 10 years, but it won’t be a switching and routing company that competes on ASICs.

    1. I think you should look up NFV (Network Function Virtualization) if you think nothing is going to change in the telco space.

      I’m not 100% sure, but my guess is the largest part of the Cisco hardware business is switches and routers. Switches and routers will remain, but it might become more of a commodity market when part of the smarts live at a controller or are handled by the hypervisor host. Especially if we end up with overlays, which wouldn’t be unrealistic, because they can be deployed on existing hardware and are virtual, so effectively free.

      But no vendor has perfected that yet with proper congestion control, and most of them delegate multipathing to the switches and routers, so you’ll need to have enough spare capacity. We’ll have to see how many get there.

      It is especially the middle-box vendors (load balancers, firewalls) that should be worried.

      If you think about how people will be rolling out networking for projects in the future, it makes the most sense to make a small virtual network per project per tier: the usual 3-tier application network. Because it’s virtual, it’s cheap to place a virtual firewall in between.

      But that also means something interesting will happen. This will lead to lots of small firewalls. Small firewalls are much, much easier to understand and manage, because the rule sets are small. I think application developers will just end up including a file with their application declaring which ports need to be open, so the firewall can be configured automatically.

      So while a lot of the middle-box vendors were already running on x86 servers, making them virtual would be easy. But fewer people will really need them; the virtualization/cloud platform will include tooling to set up the simple firewalls, and that might be enough for a very, very large segment of their previous customer base.

      Recap: for Cisco, I think it’s switching/routing against overlays, smart software and commodity hardware (you can dedicate a CPU core to a single virtual networking task; it’s cheaper than buying from a hardware vendor).
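The ports-file idea mentioned above can be sketched in a few lines. Everything here is hypothetical: the manifest format, field names, and rule structure are invented for illustration, not taken from any real product.

```python
# Hypothetical sketch: derive a small per-application firewall rule set
# from a port manifest the developers ship alongside their application.
# The manifest schema and rule dictionaries are invented for illustration.

def rules_from_manifest(app, manifest):
    """Turn a declared port list into allow rules for a small
    per-application virtual firewall, ending with a default deny."""
    rules = []
    for entry in manifest["ports"]:
        rules.append({
            "app": app,
            "action": "allow",
            "proto": entry.get("proto", "tcp"),
            "port": entry["port"],
            "from_tier": entry.get("from", "any"),
        })
    # A default deny keeps the rule set small and easy to audit.
    rules.append({"app": app, "action": "deny", "proto": "any",
                  "port": "any", "from_tier": "any"})
    return rules

# The app declares its needs; the platform generates the firewall.
manifest = {"ports": [{"port": 443, "from": "web"},
                      {"port": 5432, "from": "app"}]}
rules = rules_from_manifest("billing", manifest)
```

The point of the sketch is the small surface area: two declared ports yield three rules, which is the kind of rule set a human can actually audit.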

      1. I should add: commodity hardware (x86 machines) gets more ASIC-like by the year. And Intel is a network chip vendor these days as well. Maybe Intel can even dictate what the new architecture will be like.

      2. Individually, small firewalls are easy to manage. In the aggregate, they are not. It doesn’t have to be that way, but it is with current management products – Cisco Security Manager, Provider-1, FortiManager, Panorama, etc., aren’t ready to scale like VMs do. SOAP API, what’s that? It’s what’s going to keep 3-tier app structures around instead of replacing them with per-VM firewalls on a flatter network. Puppet integration with Juniper has my interest piqued, but I’m going to have to wait on a hardware refresh before I can see how it works in the real world.

        I think to some extent it’s going to be the same for routers and switches – 1000 tiny ones are more difficult to manage than 100 large ones, and both are probably easier to manage than 50 that span DCs – and the management tools are only slightly better than the firewall tools, if that (CiscoWorks, JunOS Space, etc. are all clunky, but in different ways). They’ve got a lot of the same API issues as well, and heaven help you if you support multiple vendors in the same DC.

        IMO, where network virtualization and vendors end up in 5 years greatly depends on the effort put into management tools. ASICs vs. commodity isn’t very important unless ASICs regain a significant performance edge over generalized hardware in that time period. Though it might be a good pitch to shareholders who still think hardware matters and their exorbitant profits can last forever.

    2. So, a follow-on question to you guys. The one true disadvantage of VMware’s approach vs. Cisco’s approach is non-hypervisor-based workloads. Cisco’s solution, while dependent upon a single hardware vendor, allows for independence from the hypervisor stack and allows the programmatic features of their SDN solution to be applied to physical workloads.

      For NSX to “virtualize” a physical network port that doesn’t have a hypervisor on the other side of it, they would need to team with a physical network hardware provider. Once you have the requirement for this integration, don’t you lose the advantage of software abstraction?

      1. If you had watched the videos I sent you on Twitter yesterday, you would know NSX has support for multiple L3 and L2 gateways. So you can connect it to a VLAN, for example. I don’t see a big problem.

      2. Not the same thing. The gateways allow for communication, not virtualization of the port. It’s a big difference. I can’t turn port 3 of a top-of-rack switch into a firewall port versus having it on the same VLAN as a specific virtual network.

      3. OK, let’s be clear here.

        What do you mean by “firewall port” in this case?

        Do you mean connecting a firewall appliance to the physical port?

      4. No, I mean that the port itself can become a firewall port of a virtual FW, because the thing connected to it isn’t trusted or is part of a DMZ attached to a virtual network. A firewall is just one example. I should be able to make a physical port any layer 3 device I need it to be in full virtualization.

      5. I think you answered your own question with “the port itself can become a firewall port of a virtual FW”: there’s a virtualization layer, so it’s taking advantage of it, whether it’s provided by NSX, ACI, OpenStack, or provisioned by vCenter, whatever Cisco provides, Puppet, etc. I can’t really envision a situation where you virtualize one side of the network but not the other (even when talking to “legacy” networks you still have two sides of the network, one of which is the virtual/legacy gateway). Can you provide an example where you cut out the middle man?

      6. There is nothing preventing that in theory, other than VLAN exhaustion on your switches.

        In my case, I’m pretty far along with creating a system for controlling the physical switches by pushing configuration changes to them. So if I write a driver for OpenStack Neutron, then it can control the physical switches we have in our network.

        So if you have that, you just create a new vlan and attach it to the virtual firewall port.

        It’s not that complicated. 😉
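As a rough illustration of the workflow described in this thread (create a VLAN, attach the firewall-facing port), here is a hedged sketch. The SwitchClient class and its methods are invented stand-ins, not real Neutron code; an actual driver would speak NETCONF, SNMP, or the switch CLI to real hardware.

```python
# Hypothetical sketch of the workflow above: a Neutron-style driver
# creates a VLAN on a physical switch and binds the port that faces
# the virtual firewall.  SwitchClient is an invented stand-in for a
# real switch management session.

class SwitchClient:
    """Stand-in for a session to a physical top-of-rack switch."""
    def __init__(self, name):
        self.name = name
        self.vlans = {}          # vlan_id -> set of member ports

    def create_vlan(self, vlan_id):
        # Idempotent: creating an existing VLAN is a no-op.
        self.vlans.setdefault(vlan_id, set())

    def add_port_to_vlan(self, vlan_id, port):
        self.vlans[vlan_id].add(port)

def attach_firewall_port(switch, vlan_id, port):
    """Create the VLAN and attach the physical port to it, as in
    'you just create a new vlan and attach it to the virtual
    firewall port.'"""
    switch.create_vlan(vlan_id)
    switch.add_port_to_vlan(vlan_id, port)

tor = SwitchClient("tor-1")
attach_firewall_port(tor, 210, "Ethernet3")
```

As the commenter says, the control flow really is that simple; the hard part in practice is the per-vendor plumbing hidden behind the stand-in class.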

      7. In VXLAN parlance, what you need is a hardware VTEP that can “virtualize” the physical FW port for you. Arista boxes, for example, already support this and partner with NSX. That doesn’t mean NSX loses its advantages; rather, you get the best of both worlds. You do the virtualization at the edge: for servers this means you do it in the hypervisor, and for physical devices you do it at the access switch.
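The hardware-VTEP arrangement boils down to a binding table that maps a VXLAN segment (VNI) to a local VLAN on the access switch, so a physical port placed in that VLAN is bridged into the overlay segment. This toy model (all names invented) illustrates the mapping, not any vendor’s actual API.

```python
# Toy model of a hardware VTEP binding table: VXLAN VNI <-> local VLAN.
# A physical device plugged into a port on the bound VLAN is bridged
# into the overlay segment, which is how an access switch can
# "virtualize" a port for a physical firewall.  Purely illustrative.

class HardwareVtep:
    def __init__(self):
        self.bindings = {}       # vni -> local vlan_id

    def bind(self, vni, vlan_id):
        """Bind an overlay segment to a local VLAN on this switch."""
        self.bindings[vni] = vlan_id

    def vlan_for_segment(self, vni):
        """Which local VLAN carries this overlay segment, if any?"""
        return self.bindings.get(vni)

vtep = HardwareVtep()
vtep.bind(5001, 210)   # overlay segment 5001 maps to local VLAN 210
```

The controller (NSX, in the Arista partnership described above) would program these bindings; the switch ASIC then does the VXLAN encap/decap in hardware.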

    1. I’m not getting your point on the Nexus 1000v (which, btw, no one really uses; a ton were sold but never put into production).

      Cisco’s approach to SDN is ACI. ACI is the tool they use to programmatically control the network. It’s their stated alternative to network virtualization, which is something they don’t believe in as a holistic approach. Their CTO did a video and blog post calling it SDN.

      They of course believe that NFV has its place. Their position is that it’s not the primary construct for the programmable network.
