MPLS 101 – The Basics

In this series of posts, I want to spend some time reviewing MPLS fundamentals.  This has been covered in many places many times before – but I’ve noticed lately that the basics are often missed or skipped when looking at MPLS.  How many “Introduction to MPLS” articles have you read where the first step is “Enable LDP and MPLS on the interface” without actually explaining what’s happening?  I disagree with that being a valid starting point, so in this post I’d like to start with the basics.  Subsequent posts will build from here as we get more and more advanced with the configuration.

Warning: In order to get up and running with even a basic configuration we’ll need to introduce ourselves to some MPLS terminology and concepts in a very brief fashion.  The descriptions of these terms and concepts are intentionally kept brief in this post and will be covered in much greater depth in future posts.

Enough rambling from me, let’s get right into it…

So what is MPLS?  MPLS stands for Multi-Protocol Label Switching and it provides a means to forward multiple different protocols across a network.  To see what it’s capable of, let’s dive into a real working example of MPLS.

Note: I encourage you to follow along with the examples by using virtual routing instances.  I’ll be using the vMX from Juniper which is free to try and full featured.  You can check it out here.

Above we have a very simple network topology.  Four routers connected together in a chain with two clients (apparently people pointing their fingers in the air) at either end of the chain.  At this point, the routers have a simple configuration that includes their interface IP addressing and that’s about it.  The configurations look like this…
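As a stand-in for the full configs, here’s a rough sketch of router 1 (the interface names, the /31 link addressing, and the 1.1.1.1 loopback are assumptions based on the addresses referenced later in the post; the other routers follow the same pattern)…

    interfaces {
        ge-0/0/0 {
            description "To top client";
            unit 0 {
                family inet {
                    address 10.2.2.0/31;
                }
            }
        }
        ge-0/0/1 {
            description "To router 2";
            unit 0 {
                family inet {
                    address 10.1.1.0/31;
                }
            }
        }
        lo0 {
            unit 0 {
                family inet {
                    address 1.1.1.1/32;
                }
            }
        }
    }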

So nothing exciting here – what we currently have is a fairly broken IP network. None of the routers or clients can communicate with anything beyond their directly connected interfaces (perhaps the reason the clients are raising their hands). To be fair, the clients can at this point talk to their directly connected router’s interfaces since they are using them as their default gateway. But they can’t talk past there.

So how do we fix this? Well typically we’d just configure something like OSPF on all of the routers, let them advertise their connected networks to each other, and things would magically work. That’s all well and good, but that also means we need to share a lot of state with all of the routers. Each router has to learn about all of the prefixes in the network. Certainly in this case, that’s not a concern, but what if this network was 100 million times bigger? If it was – it might be worthwhile to look at other options to keep the amount of state being shared between the devices as small as possible, especially in the case of routers 2 and 3 which aren’t even directly connected to a client. However – without knowledge of all possible networks, we’d need to find another means to forward traffic instead of relying on standard IP forwarding. Enter MPLS.

MPLS offers a new means of forwarding traffic that relies on labels rather than IP addresses to perform forwarding actions.  So rather than relying on doing IP lookups at each hop, the router will perform a label lookup.  MPLS tags are inserted in between the layer 2 header and the IP packet…
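To picture it, the shim sits between the L2 header and the IP packet, and each entry in the label stack is 32 bits (the field layout comes from RFC 3032)…

    +-----------+--------------------------+-----------+
    | L2 header | MPLS label stack (shim)  | IP packet |
    +-----------+--------------------------+-----------+

    Each label stack entry: Label (20 bits) | Traffic Class / EXP (3 bits) | Bottom of Stack (1 bit) | TTL (8 bits)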

Given the tag placement, MPLS is often said to use a ‘shim’ header.  While the MPLS header includes more than just a label (tag) let’s just focus on the label for now.  We’ll revisit this later on, but the fact that this is a shim header is important to understand.  Many people incorrectly believe that MPLS represents a totally different means to transport data across a network.  That’s not the case at all.  Since the L2 header is still the outer header that means that we’re still using L2 forwarding semantics to get the packet from point A to point B.  The MPLS header is just additional information that can be acted on once a device discards the L2 header.  This is the beginning of a longer theme about how tightly coupled MPLS is to the underlay network it rides on top of.

So now that we have a label we need to do something with it.  When an MPLS enabled router receives a packet – it can perform three basic actions.  It can push a label, swap a label, or pop a label.  Pushing a label would occur when you wish to have traffic enter the MPLS network.  Swapping happens inside of the MPLS network and represents the basic forwarding action MPLS relies on.  Popping a tag occurs as traffic leaves the MPLS network and egresses back onto a normal IP network.

Note: These are certainly not the only times these actions are used but for the sake of keeping things simple in this intro post let’s assume they are.  We’ll cover more advanced use cases in later posts.

So now that we know some of the basics – lets get some of the configuration done and then come back and explain it.

The first thing we have to do is configure the routers’ interfaces for MPLS. To do so on a Juniper router you configure both the physical interface for MPLS as well as specify the interface under the MPLS protocol configuration. Our router 1 configuration would now look like this…
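As a sketch (interface names carried over from the earlier example; only the transit interface toward router 2 needs this)…

    interfaces {
        ge-0/0/1 {
            unit 0 {
                family inet {
                    address 10.1.1.0/31;
                }
                family mpls;
            }
        }
    }
    protocols {
        mpls {
            interface ge-0/0/1.0;
        }
    }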

Notice the family mpls statement under the interface unit and the matching entry under protocols mpls. These enable the interface for MPLS transport. Let’s make the same changes to the rest of the routers to enable their transit interfaces for MPLS…
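In set format, the equivalent on router 2 might look like this (router 3 uses the same pattern on its two transit interfaces, and router 4 only needs the interface facing router 3; interface names are examples)…

    set interfaces ge-0/0/0 unit 0 family mpls
    set interfaces ge-0/0/1 unit 0 family mpls
    set protocols mpls interface ge-0/0/0.0
    set protocols mpls interface ge-0/0/1.0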

Once again – nothing exciting about this. All we’ve done at this point is enable the interfaces for MPLS transport.

Note: This is where things diverge drastically between vendors, specifically Cisco and Juniper. Since I’ll be using the vMX for this series of posts we’ll be talking about how this is handled with Juniper. While many things are similar, and they certainly interoperate, just keep in mind that the default behavior for many things is different between the two vendors.

So let’s take a step back here and talk about how this works with IP. If the top client is using router 1 as its default gateway, when it sends traffic destined to the bottom client, router 1 has to sort out what to do with it. In the case of normal IP forwarding the router would look at the incoming packet’s destination IP, consult its routing table, hopefully find a prefix that matches the destination IP of the packet, and forward the packet on to the next hop the route specifies. If we wish to forward the top client’s packet using MPLS transport, we need a means to tell the router to change its normal forwarding behavior. In other words – the router needs some mechanism to tell it what traffic should be sent using MPLS tags. That mechanism comes in the form of an LSP – or a label switched path. An LSP defines a path through an MPLS network. For the sake of keeping things simple as we begin our MPLS labs we’ll rely on statically defined LSPs. Static LSPs are defined under the MPLS protocol section of the configuration as follows…
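A sketch using the static-label-switched-path stanza (I’m writing the LSP name as router1-to-router4 to keep it CLI-friendly; the pushed label of 1001001 is the one we’ll reference on router 2 in a moment)…

    protocols {
        mpls {
            static-label-switched-path router1-to-router4 {
                ingress {
                    to 4.4.4.4;
                    push 1001001;
                    next-hop 10.1.1.1;
                }
            }
        }
    }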

Above we have a new LSP called router1->router4. The LSP defines…

  • A method of ingress meaning that this will be the entry point of traffic entering the LSP.
  • A destination using the to definition. In this case the LSP is headed to or terminates at 4.4.4.4 which is the loopback IP address of router 4.
  • An action of push, meaning this router will push a label onto the packet.
  • A next-hop of 10.1.1.1.

Most of these items should make at least some sense to you.  We know we want to use MPLS to forward the traffic so the ingress method makes sense.  We know that we want to get the traffic all the way to router 4 since that’s what the bottom client is directly connected to so saying that we’re heading to 4.4.4.4 also seems to add up.  We also talked about how when we enter an MPLS network we need to push a label so I’m also OK with making that conclusion.  What’s curious is the next-hop of 10.1.1.1.  Why do we need a next hop IP if we aren’t relying on IP forwarding to move the traffic?  The next-hop is required so we know what interface to send the traffic out of.  While slightly misleading, the IP address you specify for the next hop is used to resolve what interface should be used to send the labeled traffic out of.  The router will consult the IP routing table, determine which interface is used to reach that IP, and then use that as the egress interface when sending traffic down that LSP.  Remember, MPLS isn’t using the IP header to make any forwarding decisions.  The fact that you specify an IP for the next-hop is simply a matter of convenience and is then resolved to a next-hop interface for the local router.

So let’s install this static LSP on router 1.  If you want to copy and paste the configuration, load merge terminal is your friend here.  So now that we have defined the ingress to our LSP, we need to define the rest of the path.  We just told router 1 to push a label of 1001001 onto the packet as it enters the LSP.  We also told it to send the newly labeled packet over to router 2.  When router 2 receives the labeled packet it needs to perform an MPLS forwarding operation.  In this case, it will be a swap.  Let’s tell router 2 to swap the label 1001001 with 1001002…
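On router 2 that might look like this (same LSP name as before, defined locally)…

    protocols {
        mpls {
            static-label-switched-path router1-to-router4 {
                transit 1001001 {
                    swap 1001002;
                    next-hop 10.1.1.3;
                }
            }
        }
    }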

The above label operation will be part of the same LSP we started on router 1 and defines…

  • A method of transit, meaning that this router will not be starting or terminating an LSP. Rather we’ll be transiting traffic across the router.
  • An action of swap, meaning this router will examine the incoming label and, if it matches the label defined in the method (1001001), swap it for 1001002.
  • A next-hop of 10.1.1.3. As we saw above this means that it will send the relabeled packet out of its interface towards router 3.

Now our MPLS packet is making its way to router 3.  Router 3 will need to deal with the MPLS packet as well, however, it’s not going to perform a swap operation.  Rather, it’s going to perform a pop operation.  This might seem strange at first since there is another router (router 4) in the path to reach the bottom client, however we want to try and minimize the operations each router has to do.  If we sent a packet with an MPLS header to router 4 it would not only need to pop the MPLS tag, it would also need to do an IP lookup.  So rather than making router 4 do all that work, we simply tell router 3 to pop the label off and send the packet on its way to router 4, which can then just act on the IP packet alone.  This function is called penultimate hop popping (PHP) and is very common in MPLS networks.  So let’s configure this on router 3…
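A sketch of the pop operation on router 3…

    protocols {
        mpls {
            static-label-switched-path router1-to-router4 {
                transit 1001002 {
                    pop;
                    next-hop 10.1.1.5;
                }
            }
        }
    }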

Once again – the above label operation will be part of the same LSP we started on router 1 and 2 and defines…

  • A method of transit, meaning that this router will not be starting or terminating an LSP. Rather we’ll be transiting traffic across the router.
  • An action of pop, meaning this router will examine the incoming label and, if it matches the label defined in the method (1001002), pop the label off before forwarding.
  • A next-hop of 10.1.1.5 to send the packet toward router 4.

Now that we’ve defined the LSP we need to tell the router to use it to get to the bottom client.  Let’s look at our routing table now on router 1 and see what it looks like…

Notice that in addition to the normal inet.0 routing table we now also have an inet.3 routing table.  Where did this come from?  When we defined the ingress LSP ending at 4.4.4.4 the router created an entry for 4.4.4.4 in the inet.3 table.  The table is often called by other names, but it’s a special table in Junos that is used to look up BGP next hops.  Recall that BGP is unique in the fact that it can put a next hop in the routing table that is not directly connected.  To understand how this is used with MPLS we need to talk briefly about some more MPLS terminology that will be discussed in more detail in later posts.  The inet.3 table is sometimes also called the FEC mapping table.  So what is a FEC?  FEC stands for Forwarding Equivalence Class and defines a group of packets that are sent to the same next-hop, out the same interface, using the same behavior (think QOS etc).  FECs aren’t unique to MPLS.  In standard IP routing FECs are defined the same way; the difference is that they are defined on a hop-by-hop basis.  In MPLS, since we build LSPs across an MPLS cloud (virtual circuits) the FEC is the same end to end.  So how is this different from an LSP?  An LSP is a generic path through an MPLS cloud.  Many FECs could use the same LSP.  That is, a FEC with a lower priority could use the same LSP as one with a higher priority.  So LSPs define the path of the virtual circuit while FECs define more granular policy on a more specific set of classified flows.

So why is the inet.3 table also called the FEC mapping table?  Because it’s used to determine the FEC for a given packet flow.  If we look at the routing table above we can see that the inet.3 table shows an entry for 4.4.4.4/32.  That’s the loopback of router 4 and the endpoint for the static LSP we defined.  It lists the other information we need to get to that endpoint.  Namely, the next hop (which resolves to the egress interface), and the MPLS label operation that we need to use to get into that FEC or LSP.  The problem we have now is that while we know how to get there, we don’t know how to get traffic into the LSP.  Note that the inet.0 table still lacks an entry for the subnet of the bottom client (10.2.2.2/31).  So how do we fix that?  Well typically BGP would advertise the remote prefix for us (we’ll see this in a later post) and BGP would look up the next-hop in the inet.3 table which would then put our entry into the inet.0 table.  Since we are doing everything manually, we can still make this happen with a static route.  For instance, we can put this config on router 1…
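Something like this (the resolve knob is what tells the router it’s allowed to recurse the next-hop)…

    routing-options {
        static {
            route 10.2.2.2/31 {
                next-hop 4.4.4.4;
                resolve;
            }
        }
    }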

Much like BGP would do for us, we’re telling router 1 here that the route to reach 10.2.2.2/31 is reachable through 4.4.4.4. Since 4.4.4.4 is not directly connected we need to tell the router to resolve 4.4.4.4 into a usable next hop.

Note: Another difference between IOS and Junos is that in Junos you need to tell the router to recurse (resolve) a route.  IOS does that automatically.

When the router resolves the route, it consults the inet.3 table just like BGP would. When it does that, it finds an entry for 4.4.4.4 because of our static LSP. Now if we look at the inet.0 table we’ll see a usable entry to reach the bottom client…

Success! At this point, we should be able to see some of this in action. Let’s start a ping on the top client toward the bottom client and take a couple of packet captures along the way.

Here is the ICMP request on the wire before it gets to router 1. Looks like a normal frame and packet…

Now below is the same frame and packet as it traverses the link between router 1 and router 2. Notice the addition of the MPLS header with its label of 1001001.  Based on the ingress LSP we configured on router 1 this makes total sense.  Also notice that in the Ethernet (L2) header the type has changed from 0x0800 (IPv4) to 0x8847 (MPLS).  This is how a receiving router knows to process this frame as an MPLS datagram…

If we capture the frame again on the link between routers 2 and 3 we’ll see the MPLS header has changed to reflect the new label.  Also notice that the MPLS TTL has been decremented.  Since we’re dealing with a new forwarding paradigm here (not IP) we still need a means to keep track of TTL and MPLS acts in much the same way that IP does for TTL – decrementing it at each hop…

The capture between routers 3 and 4 is more interesting because, as you can see, we no longer have an MPLS header.  This is the PHP action I mentioned earlier.  Router 3 pops the MPLS header off before sending it to router 4 so that all router 4 has to do is perform an IP lookup…

And lastly we can see the normal frame and packet making it all the way to the bottom client as we’d expect using a normal L3 forwarding mechanism…

The capture on the link between the bottom client and router 4 will also show router 4 generating ICMP unreachable packets back to the bottom client (not pictured).  When the bottom client attempts to return the traffic to the top client it sends its traffic to router 4.  Router 4 has no means to reach the top client at 10.2.2.1 since its forwarding table has no entry for it…

So this is strange. Why can’t the return traffic from the bottom client use the same LSP that the top client used? It’s because LSPs are unidirectional. At this point our MPLS LSP looks like this…

When we defined the LSP on router 1 we defined an endpoint for it to use as router 4. To get return traffic to work we’ll need to define an LSP in the other direction as well.  So our LSPs will look like this…

Above we now show bidirectional LSPs.  With this configuration, the top and bottom client should be able to communicate normally.  The additional configuration for the second LSP on each router is shown below (again, load merge terminal is your friend here)…
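A sketch of the return direction in set format. The labels, the next-hop addresses, and router 1’s loopback of 1.1.1.1 aren’t spelled out elsewhere in this post, so treat them as assumptions that simply mirror the forward path…

    # Router 4 - ingress for the return LSP, plus the static route into it
    set protocols mpls static-label-switched-path router4-to-router1 ingress to 1.1.1.1
    set protocols mpls static-label-switched-path router4-to-router1 ingress push 1001003
    set protocols mpls static-label-switched-path router4-to-router1 ingress next-hop 10.1.1.4
    set routing-options static route 10.2.2.0/31 next-hop 1.1.1.1 resolve

    # Router 3 - swap the assumed label 1001003 for 1001004 and forward toward router 2
    set protocols mpls static-label-switched-path router4-to-router1 transit 1001003 swap 1001004
    set protocols mpls static-label-switched-path router4-to-router1 transit 1001003 next-hop 10.1.1.2

    # Router 2 - penultimate hop pop toward router 1
    set protocols mpls static-label-switched-path router4-to-router1 transit 1001004 pop
    set protocols mpls static-label-switched-path router4-to-router1 transit 1001004 next-hop 10.1.1.0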

Notice how the router 4 configuration also includes the static route to get the traffic into the LSP.  At this point, the top and bottom client should be able to communicate with one another successfully.  And if we look at the routing table of router 2 or 3, we’ll see that they still have no clue about either client’s subnet…

Now we’ve just scratched the surface of MPLS and if you’re new to MPLS I’m sure you’ll still have many, many questions. Hang in there – in the next post we’ll talk about LDP and BGP and how they work in conjunction with MPLS. Comments and questions welcome!

Over the last several years I’ve made a couple of efforts to become better at Python. As it stands now – I’d consider myself comfortable with Python, but more in terms of scripting rather than actual Python development. What does Python development mean? To me – at this point – it means writing more than a single Python file that does one thing. So why have my previous efforts failed? I think there have been a few reasons I never progressed much past scripting…

  • I didn’t have a good understanding of Python tooling and the best way to build a project. This meant that I spent a large chunk of time messing around with Python versions, packages, and not actually writing any code.  I was missing the fundamentals in terms of how to work in Python.
  • I didn’t have a real goal in mind.  Trying to learn something through ‘Hello World’ type examples doesn’t work for me.  I need to see real world examples of something I’d actually use in order to understand how it works.  I think this is likely why some of the higher level concepts in Python didn’t fully take hold on the first few attempts.
  • I got stuck in the ‘make it work’ mindset which led me to the copying code snippets kind of mentality.  Again – not a great learning tool for me.
  • As a follow up to the previous point – I also got stuck on the other end of the spectrum.  The ‘Import this and just use it without knowing how it works’ idea didn’t sit well with me, causing me to try and comprehend all of the code in other packages and modules (I’m a detail person.  I need the details).  In some cases this is beneficial, but I’m now also of the opinion that some packages can just be used without reading all of their code.  Trying to understand code written by Python experts instead of just using it was a great way for me to spend hours spinning.  This boils down to the ‘walk before run’ concept in my opinion.
  • I completely ignored testing.  There is no excuse for this.

So this time I’m going to try something different.  I’m going to start from scratch on a new project and see if I can follow what I perceive to be proper Python development (and I’m hoping people call me out on the things that aren’t).  The first few posts will cover what I perceive to be Python fundamentals. Things you need to know about setting up Python even before you start writing any serious code.

Enough talk – let’s start…

Virtual Environments

One of the first things that really clicked with me about Python development was the use of Python virtual environments.  If you’ve ever struggled with having multiple versions of Python on your system, and figuring out which one you were using, or which version had which package, you need to look at virtual environments. Virtual environments create a sort of isolated sandbox for you to run your code in.  They can contain a specific version of Python and specific Python packages installed through a tool like PIP.  There’s lots of options and possible configurations but let’s just look at some of the basics of using virtual environments.  To create a virtual environment, we need the virtualenv Python package installed.  If you don’t have it already, you can easily install it through PIP with pip install virtualenv.  Once installed, you can create a virtual environment rather simply by using the following syntax…
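Something along these lines (the project path is just an example)…

    mkdir -p ~/projects/my_project
    cd ~/projects/my_project
    virtualenv my_venv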

In this case – we created a couple of base folders to store our project in and then created a virtual environment called my_venv. If we look inside of our project directory we’ll now see that we have a folder called my_venv…

If we dig in a little deeper we see a bin folder that contains all of the Python binaries we need. Also notice that there’s a file called activate in the bin directory.  If we source this file, our terminal will activate this virtual environment.  A common way to do this is from the base project directory as shown below…
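For example, from the project directory we created above…

    cd ~/projects/my_project
    source my_venv/bin/activate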

Notice how the command line is now prefaced with (my_venv) to indicate that we are running in a given virtual environment. Now if we run a command like pip list we should see a limited number of packages…

Note that we don’t see the package virtualenv which we know exists on our base system (since we’re using it right now). We don’t see any of the base system packages because we’re running in a sandbox. In order to use certain packages in this sandbox (virtual environment) we’d need to install them in the virtual environment. Another less common option is to allow the virtual environment to inherit the packages from the base system. This would be done by passing the flag --system-site-packages when you create the virtual environment. We can demonstrate that here by creating a second virtual environment…
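For example (my_venv2 is just a name I’m picking for this second environment)…

    deactivate
    virtualenv --system-site-packages my_venv2
    source my_venv2/bin/activate
    pip list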

Note: To deactivate a virtual environment simply type deactivate into the terminal.

Notice that we have a ton of packages that the virtual environment perceives to be installed since it’s inheriting them from the base system. So what about the actual Python version we’re using in the virtual environment? If not specified when created, the virtual environment will use the Python version used by the base system. That is – the one that resolves first in your PATH environment variable. In my case, that’s Python 2.7, which we can see just by starting the Python interpreter from the CLI outside of the virtual environment…

And if we once again activate our virtual environment we’ll see that the versions align perfectly…

To use a different version, we need to tell the virtualenv command which version to use. To do that we can pass the --python flag. The trick to this is passing the path to the exact Python executable you wish to use. So let’s dig around a little bit. I’m on a Mac so most of my Python versions were installed through Homebrew. So first let’s see which executable I’m using when I type python outside of my virtual environment…
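Something along these lines is the kind of digging I mean (the exact paths will differ on your system)…

    which python
    ls -l /usr/local/bin/ | grep python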

Whoa. There’s a lot of versions there. The interesting thing is that these are mostly all symlinks pointing to other places. For instance…
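Checking one of them directly (on a Homebrew setup like this the target will be somewhere under the Cellar directory)…

    ls -l /usr/local/bin/python3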

Ok so we can see that these are really pointing to Homebrew install locations. So if we wanted to – we could tell the virtualenv command to point directly to these exact destinations. Or, since the symlinks are there and work, we can just reference the version we want to use so long as it’s resolvable through the PATH variable. For instance…
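Both of these would work (the environment names and the 3.6 path are just examples)…

    # pointing directly at the executable
    virtualenv --python=/usr/local/bin/python3.6 my_new_venv
    # or just referencing a name that PATH can resolve
    virtualenv --python=python3.6 my_other_new_venv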

So as you can see – it just needs some way to find the Python executable you wish to use. In both cases, we ended up with the same version of Python (3.6) in the virtual environment. Nice! So now let’s talk a little bit about package management.

Package Management with PIP

Installing packages for Python is drop dead easy. You can just pip install whatever and it works awesome. The problem is – how do you keep track of all of this? Better yet – how do you keep track of what version of what package you are using? If you’ve poked around Python projects for any length of time, you’ve likely seen a file called requirements.txt. This file keeps track of the Python packages and their versions that you need to make the code you downloaded run. Pretty easy right? But tracking these manually can be a sincere pain and something often forgotten. Luckily, PIP (not sure if this should be capitalized or not all the time (I’ll just keep flipping between caps and no caps)) has a feature that can create this for you called freeze…
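A sketch of that workflow, using pyyaml as the example package…

    pip install pyyaml
    pip list
    pip freeze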

In the above output we installed a package in our virtual environment called pyyaml. We then used the traditional pip list command to show that it was installed. However – we also got info about the default Python packages which we don’t really care about since, well, they’re default. When we use the pip freeze command we get only installed packages and in a slightly different format (notice the use of ==). To store it in a file we can simply > it into a file as shown below…
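For example…

    pip freeze > requirements.txt
    cat requirements.txt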

Ok – so now we have two packages in our requirements.txt file. This is great and all, but what do we do with this? Using PIP we can install packages based on this file like so…
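A sketch of that sequence (the second project path and environment name here are hypothetical)…

    deactivate
    source ~/projects/other_project/other_venv/bin/activate
    pip list
    pip install -r ~/projects/my_project/requirements.txt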

Notice here that we first deactivated the virtual environment we were in, then we activated another, showed that only the base packages were there, and then we used the requirements.txt file to install all of the packages we needed through PIP. Pretty slick right?

So now we know how to handle Python versions and packages inside of virtual environments. This is powerful by itself, but there’s even more ways we can optimize this. We’ll build on this in our next post. Stay tuned!

I’ve been spending more time on the MX recently and I thought it would be worthwhile to document some of the basics around interface configuration.  If you’re like me, and come from more of a Cisco background, some of the configuration options when working with the MX weren’t as intuitive.  In this post, I want to walk through the bare-bones basics of configuring interfaces on an MX router.

Basic L3 interface
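All of the snippets in this post use made-up addresses, VLAN IDs, and names, so treat them as sketches rather than exact configs. A plain routed interface looks something like this…

    interfaces {
        ge-0/0/0 {
            unit 0 {
                family inet {
                    address 192.168.10.1/24;
                }
            }
        }
    }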

The most basic interface configuration possible is a simple routed interface. You’ll note that the interface address is configured under a unit. To understand what a unit is you need to understand some basic terminology that Juniper uses. Juniper describes a physical interface as an IFD (Interface Device). In our example above the IFD would be the physical interface ge-0/0/0. We can then layer one or more IFL (Interface Logical) on top of the IFD. In our example the IFL would be the unit configuration, in this case ge-0/0/0.0. Depending on the configuration of the IFD you may be able to provision additional units. These additional units (Logical interfaces (IFLs)) can be thought of as sub-interfaces in Cisco parlance and would be identified by VLANs just as sub-interfaces are. However, in our current configuration, the MX thinks this is a standard L3 interface so it will not allow us to configure additional units…

Basic L3 interface with VLAN tags

As we mentioned above, a default L3 interface will only allow you to define a single unit, unit 0. If you wish to define more units you have to enable the IFD to do this by providing the vlan-tagging configuration. This will allow the interface to handle single dot1q tags. Once the IFD is configured you simply provide the vlan-id to use for each IFL under the unit configuration. I’ll point out here that it is not a requirement for the unit numbers to correlate to the vlan-id, but it is good practice to match these up. A sketch of what that looks like is below…
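    interfaces {
        ge-0/0/0 {
            vlan-tagging;
            unit 15 {
                vlan-id 15;
                family inet {
                    address 192.168.15.1/24;
                }
            }
            unit 16 {
                vlan-id 16;
                family inet {
                    address 192.168.16.1/24;
                }
            }
        }
    }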

Basic L3 interface with QinQ VLAN tags

In order to support QinQ vlan tagging we need to change the configuration on the IFD to stacked-vlan-tagging. However, in doing so, we break any IFL configuration that used the vlan-id parameter. The fix for this is to instead use the flexible-vlan-tagging option at the IFD which will allow both configurations to coexist…
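A sketch showing a single-tagged unit and a QinQ unit coexisting under flexible-vlan-tagging (tag values and addresses are examples)…

    interfaces {
        ge-0/0/0 {
            flexible-vlan-tagging;
            unit 15 {
                vlan-id 15;
                family inet {
                    address 192.168.15.1/24;
                }
            }
            unit 100 {
                vlan-tags outer 100 inner 200;
                family inet {
                    address 192.168.100.1/24;
                }
            }
        }
    }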

Basic bridged interfaces
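A sketch of the idea (the bridge-domain name is made up, and the exact knobs required can vary a bit by Junos version)…

    interfaces {
        ge-0/0/0 {
            encapsulation ethernet-bridge;
            unit 0;
        }
        ge-0/0/1 {
            encapsulation ethernet-bridge;
            unit 0;
        }
    }
    bridge-domains {
        USER-BD {
            domain-type bridge;
            interface ge-0/0/0.0;
            interface ge-0/0/1.0;
        }
    }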

In this example we are simply bridging two interfaces together. In this case, the MX will treat these interfaces as access ports and simply switch frames between them. The interface configuration is straightforward as we define each interface to have an encapsulation of ethernet-bridge. In addition, it is required that each interface has a unit 0 definition. Notice that in addition to the interface configuration we must also define a bridge-domain and specifically add each interface that we want to participate in the domain.

Basic bridged interfaces with VLAN tags
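A sketch of that arrangement (I’m assuming the service-provider style here, with flexible-ethernet-services on the tagged interface so its unit can use the vlan-bridge encapsulation; the VLAN ID of 15 carries through the rest of the examples)…

    interfaces {
        ge-0/0/0 {
            encapsulation ethernet-bridge;
            unit 0;
        }
        ge-0/0/1 {
            flexible-vlan-tagging;
            encapsulation flexible-ethernet-services;
            unit 15 {
                encapsulation vlan-bridge;
                vlan-id 15;
            }
        }
    }
    bridge-domains {
        VLAN-15 {
            vlan-id 15;
            interface ge-0/0/0.0;
            interface ge-0/0/1.15;
        }
    }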

In this example the ge-0/0/1 interface is VLAN aware and the ge-0/0/0 interface is still acting like an access port. The bridge domain configuration ties these two ports together meaning that a device connected to ge-0/0/1 passing a VLAN tag of 15 will be able to talk to the device connected to the access port.

IRB interfaces with VLAN tags
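A sketch (the /24 mask is an assumption; the 10.20.20.254 address comes from the example described below)…

    interfaces {
        irb {
            unit 15 {
                family inet {
                    address 10.20.20.254/24;
                }
            }
        }
    }
    bridge-domains {
        VLAN-15 {
            vlan-id 15;
            interface ge-0/0/1.15;
            routing-interface irb.15;
        }
    }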

Here we provide a VLAN interface (known as an SVI in Cisco land) by utilizing an IRB (integrated routing and bridging) interface. The IRB interface is assigned to the VLAN by mapping it into the bridge domain as a routing-interface. Traffic that comes into interface ge-0/0/1 with a VLAN tag of 15 will be able to reach the IRB interface of 10.20.20.254.

In the next post, we’ll dig further into this by discussing the specifics of bridge domains, learning domains, and VLAN mapping. Stay tuned!
