
I finally had the chance to finish watching all of the Arista videos from Networking Field Day 10.  They did quite a few presentations, and if you haven’t watched them yet, I would recommend you do…

EOS Evolution and Quality


CloudVision Overview

7500 Series Architecture

Leaf SSU Demo

While the bulk of the videos talked about Arista platforms or features, Ken Duda’s presentation (EOS Evolution and Quality) really struck a chord with me.  Early in the presentation, Ken summarizes the problem by saying “when the network ain’t working, ain’t nothing working”.  The software powering your network just has to work, and it has to work consistently.  Ken then goes into where he thinks quality should come from, breaking it into three pieces.

Culture – It’s obvious from Ken’s talk that Arista legitimately cares about software quality.  While I don’t think this is unique to Arista, I think it’s something they’ve fully embraced because they can.  Arista was born out of the commodity network era.  When you are working with commodity chips, the real differentiator becomes software.  This puts them in a unique position compared to other traditional vendors who have long focused on custom ASICs and other proprietary components.  So while Arista is certainly a networking hardware vendor, they have their roots strongly planted in software.  Ken makes it clear that the company culture makes software a focal point.  And to that end, they don’t allow business pressures to influence the quality of their products.

Architecture – Arista believes, and I agree, that having a pure Linux foundation to their product is a major win.  Rather than modify the kernel to their specific needs, they’ve embraced the Linux philosophy and community.  What does this mean?  That their switches can run native Linux applications.  I’ll be careful to note here that this doesn’t imply that you can run whatever you want.  Like any other Linux platform, certain applications depend on certain kernel functions.  However, I have found that the Linux implementation on EOS has been VERY close to what I expect from a standard Linux distribution.  Ken goes on to explain that they don’t mess with the kernel and that their code lives in user space processes.  He also believes that the Arista approach to SysDB is a significant advantage to EOS.
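That closeness to a standard Linux distribution is easy to demonstrate: the same tooling you’d run on any Linux box reads the same pseudo-files on the switch.  As a minimal sketch (runnable on any Linux system, and, assuming a standard Python install, on EOS as well), here’s a parser for `/proc/net/dev` applied to sample output; on a live box you’d feed it `open("/proc/net/dev").read()` instead:

```python
# EOS exposes the same pseudo-files as any Linux distribution, so
# standard tooling works unchanged.  Parses /proc/net/dev-style text
# into per-interface byte counters.

def parse_net_dev(text):
    """Return {interface: (rx_bytes, tx_bytes)} from /proc/net/dev contents."""
    stats = {}
    for line in text.splitlines()[2:]:          # first two lines are headers
        name, _, counters = line.partition(":")
        fields = counters.split()
        # Field 0 is rx_bytes, field 8 is tx_bytes in the kernel's layout.
        stats[name.strip()] = (int(fields[0]), int(fields[8]))
    return stats

sample = """Inter-|   Receive                  |  Transmit
 face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
    lo: 4200 10 0 0 0 0 0 0 4200 10 0 0 0 0 0 0
  eth0: 9000 42 0 0 0 0 0 0 7000 35 0 0 0 0 0 0"""

for iface, (rx, tx) in parse_net_dev(sample).items():
    print(f"{iface}: rx={rx} bytes, tx={tx} bytes")
```

Nothing switch-specific is needed, which is exactly the point Ken is making about keeping their code in user space on an unmodified kernel.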

Testing – It makes sense that Arista would have a large focus on software testing given their emphasis on software.  However, I learned some interesting things about how Arista does testing.  Ken claims that Arista doesn’t have a software QA team.  While at first that seems crazy, Ken explains that this makes total sense.  Arista pushes testing back onto the software developers, requiring them to provide automated tests that prove any new code works as it should.  All of their testing is 100% automated.  The testing runs every test, every case, for every feature, on every platform, for every release.  Ken claims that this means their software only gets better as you upgrade in a given software train.
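The developer-owns-testing model is easy to picture.  As a hedged illustration (this is not Arista’s actual framework; the function and test names are invented), a developer shipping, say, a small VLAN-range parsing helper would be expected to land the automated tests right alongside it:

```python
# Hypothetical illustration of the "developers write their own automated
# tests" model: a feature function shipped together with the tests that
# prove it works.  Names are invented, not Arista's framework.

def parse_vlan_range(spec):
    """Expand a VLAN range spec like '10-12,20' into a sorted list of IDs."""
    vlans = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = (int(x) for x in part.split("-"))
            vlans.update(range(lo, hi + 1))
        else:
            vlans.add(int(part))
    return sorted(vlans)

# The automated tests the developer submits with the code; in the model
# Ken describes, these then run for every release, on every platform.
def test_parse_vlan_range():
    assert parse_vlan_range("10-12,20") == [10, 11, 12, 20]
    assert parse_vlan_range("5") == [5]
    assert parse_vlan_range("1-3,2") == [1, 2, 3]   # overlaps collapse

test_parse_vlan_range()
print("all tests passed")
```

Because the test suite accumulates with the code, every regression ever fixed stays fixed, which is what lets Ken claim a train only gets better as you upgrade.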

Needless to say, I was impressed with their presentation.  It seems that Arista has well defined values that they aren’t willing to compromise.  With Arista’s focus on software, they will certainly be an interesting company to keep an eye on going forward.


Let me start this out by saying that I was thrilled to see Intel present at an NFD event!  While Intel is well known in the network space for their NICs, they are best known for their powerful line of processors, boards, and controllers.  Most would argue that this doesn’t make them a ‘traditional’ network vendor but, as we all know, things are rapidly changing in the network space.  As more and more network processing moves from hardware to software, the Intels of the world will have an increasingly large role to play in the network market.

Check out the following presentations they gave at the recent NFD10 event…

Intel Intro and Strategy

Intel Open Network Platform Solutions for NFV

Intel Software Defined Infrastructure: Tips, Tricks and Tools for Network Design and Optimization

Here are some of my thoughts on the presentations that I thought were worth highlighting…

The impact of software and NFV
Intel made some interesting observations comparing telco companies using big hardware to Google using SDN and NFV.  Most telco companies are still heavily reliant on big, high-performance, hardware-driven switches that can cost into the tens of millions of dollars.  On the flip side, companies like Google have spent the better part of the last decade figuring out how to deliver similar services and functions in software running on generic hardware.  The cost savings are clear.  The claim was made that the cost to move bits from point A to point B can be up to 10 times more for a normal telco than for Google.  The performance gap between hardware and software forwarding is closing quickly.  As this gap closes, we’ll also likely see the price of higher end routing and switching platforms drop significantly.  The point was also made that performing these functions in software gives you significantly greater network agility.  If you need a new feature, you don’t necessarily need to wait for new hardware that supports the new software functions.

Software Optimizations
I’ve known for some time that Intel was doing work in the SDN and NFV space.  What I didn’t know, or realize, was that they’re leaders in this space.  If you watch all the NFD10 Intel videos, it becomes obvious that Intel is making a serious investment in the network space.  On top of that, they’re not doing any of this behind closed doors.  They’re working with the open projects and contributing a lot of their advances directly to the community.  Some of the statistics from the presentation are just staggering.  The newer Xeon chips have 16 cores in them (when did that happen?  Wow).  The advancements that Intel introduced with DPDK enable packet processing at rates 25 times faster than a typical Linux distribution.  Intel’s goal (which it sounds like they just hit) was to do 40 Gb/s soft packet switching with a 256 byte packet.
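That 40 Gb/s target is easier to appreciate once you translate it into packets per second.  A quick back-of-the-envelope calculation (deliberately ignoring Ethernet preamble and inter-frame gap overhead to keep it simple):

```python
# Back-of-the-envelope: how many 256-byte packets per second is 40 Gb/s?
# (Ignores Ethernet preamble/IFG overhead for simplicity.)

line_rate_bps = 40e9             # 40 Gb/s
packet_bits = 256 * 8            # 256-byte packets

pps = line_rate_bps / packet_bits
ns_per_packet = 1e9 / pps

print(f"{pps:,.0f} packets per second")      # 19,531,250
print(f"{ns_per_packet:.1f} ns per packet")  # 51.2
```

Roughly 19.5 million packets per second, or a budget of about 51 nanoseconds per packet.  Framed that way, it’s clear why DPDK’s kernel-bypass and batching techniques matter: a naive per-packet system call simply can’t keep up.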

I’ll repeat my earlier sentiment.  Intel presenting at an NFD was huge!  I think this is truly a sign of the times and a direction we’ll continue seeing networking as a whole moving towards.  The benefits of moving functions from hardware to software (or hardware assisted software) are something we can’t ignore.  I truly hope that Intel comes back to another NFD event!


I just got done watching all the Nuage Networks videos from Networking Field Day 10 (NFD10) and I’m quite impressed with the presentation they gave.  If you haven’t watched them yet, I would recommend you do…

Nuage Networks Intro

Nuage Networks Evolution of Wide Area Networking

Nuage Networks Onboarding the Branch Demo

Nuage Networks Application Flexibility Demo

Nuage Networks Boundary-less Wide Area Networking

Here are some things I thought were worth highlighting…

A Consistent Model
What I find interesting about Nuage is their approach.  Most startup networking companies these days limit their focus to one area of the network.  The data center is certainly a popular area but others are focusing on the WAN as well.  Nuage is tackling both. 

I heard a couple of times in the presentation statements like “users are stuck in the past” or “the network model has to be consistent”.  The problem with any overlay based network solution is that, at some point, you need to connect it back to the ‘normal’ network.  Whether that entails bridging a physical appliance into the overlay, or actually peering the physical into the overlay, the story usually starts to get messy.  What if everything from the DC to the branch was on one network model?  Managed with the same set of policies?  Nuage is suggesting that’s possible today with their solution.

Cost and Complexity
Another item they talked about was WAN technology as a whole.  Having an MPLS WAN is not a cheap or easy proposition.  The upfront cost of hardware is often tremendous even without considering the monthly costs of the circuits.  The unintended byproduct of all of this is that your WAN technology roadmap is now in the control of the carrier.  You get new features and functions as the carrier implements them in their network.  Trust me, this isn’t something that happens frequently.  If you need new network functionality, your only option, in most cases, is to deploy new endpoints and use the carrier as transport.  In some cases, there’s just no way around this.  If you need the SLAs and bandwidth offered by a dedicated carrier circuit, then you might be stuck.  If not, you could start looking at using the internet as transport.  In either case, Nuage can build that overlay network for you.  And from the looks of it, Nuage has spent quite a bit of time making sure that their product is easy to deploy by providing you with multiple provisioning options.

Without a doubt, the best part of the Nuage presentation was their discussion on using Docker as part of their platform.  I mean, I almost jumped out of my chair when this slide came up…

This makes me happy on so many levels.  And their use cases?  Fantastic!  The example of using a container called ‘Clive the user’ for testing was awesome.  How many times have you deployed a network change late at night but not been able to test it?  What if you could work with a developer to build application test containers that would simulate live user traffic?  The model now changes from “make the change and hope no one screams tomorrow” to “make the change, deploy the container, and know for sure it worked as expected”.  Did I mention that the docker pull/run is all done through the Nuage management console?
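To make the ‘Clive the user’ idea concrete: the test container just needs to generate the same kind of traffic a real user would and report whether it got through.  Here’s a minimal sketch of what such a container’s payload might look like (the endpoint names are invented for illustration, and Nuage’s actual test containers weren’t detailed in the session):

```python
# Hypothetical payload for a "simulated user" test container: attempt TCP
# connections to the services a real user would hit and report the results.
# Hostnames below are invented for illustration.

import socket

def check_endpoints(endpoints, timeout=2.0):
    """Return {(host, port): True/False} for TCP reachability."""
    results = {}
    for host, port in endpoints:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[(host, port)] = True
        except OSError:
            results[(host, port)] = False
    return results

if __name__ == "__main__":
    # After a change window, run this from the test container and compare
    # against the expected baseline instead of waiting for user complaints.
    user_traffic = [("intranet.example.com", 443), ("voip.example.com", 5060)]
    for endpoint, ok in check_endpoints(user_traffic).items():
        print(endpoint, "reachable" if ok else "UNREACHABLE")
```

Package something like this in a container image, have the management console deploy it to the branch after the change, and “hope no one screams tomorrow” becomes a pass/fail report tonight.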

The bottom line is that we want (need) this and Nuage is the first vendor I’ve heard of that has done anything remotely close to it. We want the ability to deploy applications to network gear and Docker is the best, and safest, way to do that in my opinion.  I’m not the only one thinking this though, check out these posts from my friends Matt and Brent…

Matt Oswalt – Docker for NetOps

Brent Salisbury – Building network tools using Docker

Overall, I’m impressed with what I’ve seen.  Nuage certainly has some new ideas here and I was thrilled to see them integrate a platform like Docker into their solution.  I think there’s a lot to be said for a vendor that’s willing to let you run your own app (container) on their dedicated network appliance.  I do have some reservations/questions around scale and complexity, but only because they weren’t explicitly covered in the NFD10 session.  More reading will need to be done on that and hopefully I can find some time to play around with it.
