Routing


So here’s another common question I see from other engineers.  Goes something like this…

"I ran a traceroute from the switch and I’m getting all sorts of goofy responses.  Lines that show multiple hops and hops that definitely shouldn’t appear where they do.  What’s going on?"

What they are referring to is the dreaded multiple response per hop output…

MLS# traceroute 192.168.5.1
Type escape sequence to abort.
Tracing the route to 192.168.5.1
  1 172.172.172.26 464 msec
    172.172.172.22 360 msec
    172.172.172.26 80 msec
  2 172.172.172.2 960 msec
    172.172.172.6 1004 msec
    172.172.172.2 1216 msec
  3 172.172.172.14 1216 msec 1680 msec 1852 msec

So you might look at this and be entirely confused.  Or you might look at it and know that you probably should have run the trace from a workstation rather than an MLS, but didn't know why you should have (the boat I was in a few years ago).  Let's dig in and see why we get output like this.

First off, you need to know how traceroute works.  If you don't (you should), I'll explain it quickly.  The machine generating the traceroute sends packets towards the destination, incrementing the packet's TTL with each new round of packets.  Basically, it starts with a TTL of 1.  The next hop router (the next layer 3 hop) gets the packet and decrements the TTL by one.  If the resulting TTL is 0, it can't send the packet on its way, so it instead returns a 'TTL Exceeded' message back to the host that sent the packet.  When the machine generating the trace gets the response, it sends another packet towards the destination with a TTL one larger than the last packet.  By default, Cisco devices send three probes at each TTL value, which is why you see three responses per hop in the output above.  This way, the host generating the traceroute gets a response back from each layer 3 hop on its way to the destination.  Make sense?
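
You can actually see most of these knobs by typing traceroute on an IOS device with no arguments, which drops you into the extended traceroute dialog.  The exact prompts vary a bit by platform and software version, but it looks something like this, and it shows both the starting TTL of 1 and the default of three probes per hop:

MLS# traceroute
Protocol [ip]:
Target IP address: 192.168.5.1
Source address:
Numeric display [n]:
Timeout in seconds [3]:
Probe count [3]:
Minimum Time to Live [1]:
Maximum Time to Live [30]:
Port Number [33434]:
Loose, Strict, Record, Timestamp, Verbose[none]:

Hitting enter at each prompt just accepts the defaults, which is what a plain traceroute command uses anyway.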

So now that we know that, what's going on with the output we are getting?  Basically, the MLS has two equal cost paths to the destination we are trying to reach.  At this point, a smart network engineer would say "Ok, but doesn't the MLS use the CEF default of per-destination load balancing?".  Ah ha!  You've hit the nail on the head here!  CEF defaults to per-destination load balancing.  This ensures that traffic from the same source, headed to the same destination, always takes the same path (saving us from out of order packets).  That being said, this still doesn't make sense.  If each probe takes the same path, then why are we seeing two different paths at the first two hops?
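
If you want to prove the per-destination behavior to yourself, most IOS based platforms support the show ip cef exact-route command, which tells you which path the CEF hash picks for a given source and destination pair.  The source addresses below are just hypothetical hosts sitting behind the MLS:

MLS# show ip cef exact-route 10.1.1.50 192.168.5.1
MLS# show ip cef exact-route 10.1.1.51 192.168.5.1

Different source addresses may well hash to different next hops, but any single source and destination pair always gets the same answer.  That's per-destination load balancing at work for transit traffic, which makes the trace output above look even stranger.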

The answer is so simple it initially baffled me.  I can't help myself here, so let me give you one more clue before I give you the answer.  If the same traceroute was run from a standalone device plugged into this same MLS, we would get a traceroute with the same hop for each probe per TTL (in other words, a normal looking traceroute).  Did that help?

Ok, ok.  So the difference is that in this case, we are running the trace from the MLS itself.  That means we aren't using the CEF table to do the destination lookups.  Since the MLS originates the packets, it process switches them.  When the traceroute probes are process switched, they fall back to the process switching load balancing method, which is per packet.  Since the MLS generating the packets has two equal cost paths, per packet load balancing sends the first probe down one path, the second down the other, and the third down the first path again.  The same goes for each subsequent hop.  If we generated the trace from a PC plugged into this MLS, we wouldn't have two equal cost paths from the PC's perspective.  The MLS does, but it's just going to do per-destination load balancing on the trace probes the PC sends, so we won't see different results for each probe.
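
If you want to see the two equal cost paths that are causing all of this, just look the destination up in the routing table on the MLS.  The output below is a trimmed down sketch; the next hop addresses are lifted from the traceroute above, and the prefix length and metrics are my own assumptions:

MLS# show ip route 192.168.5.1
Routing entry for 192.168.5.0/24
  Routing Descriptor Blocks:
  * 172.172.172.22
      Route metric is 20, traffic share count is 1
    172.172.172.26
      Route metric is 20, traffic share count is 1

The asterisk marks the path the router will use for the next process switched packet, and it rotates as packets go out, which is exactly the per packet round robin the trace probes are running into.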

Does that make sense?  The bottom line is that there is a significant difference between a switch forwarding a packet and a switch generating a packet.


Most of the time when we deploy an ASA 5505 to a client site, there is a single subnet behind the ASA that holds all of the client's devices.  The ASA's inside interface is defined as the default gateway for the hosts on the subnet and life is good.  Once you do enough of these types of deployments it becomes the norm and you forget that there are other network configurations available.  The most typical large scale deployment (enterprise level) involves creating what I like to call a 'com' or communication network.  This is almost a necessity when you have more than one edge device terminating connections.  For instance, take a look at the configuration below.  Pretty standard, right?  One ASA with a single subnet behind it.
[Diagram: a single ASA with one subnet behind it]

This works great in some deployments, but it has its flaws.  For one, what happens when you add more external network links?  Say the company outgrows its site to site VPN solution and needs to put a dedicated point to point circuit between two offices.  Since the ASA isn't capable of terminating a circuit like this, it needs to terminate somewhere else.  In most cases the company will purchase a router and terminate the circuit on a WIC card installed in that router.  Now, where does the inside interface of that router go?  Let's take a look at the diagram below, which shows a significantly larger infrastructure.

[Diagram: ASA and point to point router as edge devices, connected to a layer 3 switch that sits in front of the user subnet]

Now we have something that resembles a full network infrastructure.  In the diagram above we have two edge devices.  One is the firewall, which terminates the internet connection, and the second is the router, which terminates the point to point circuit between the offices.  Additionally, we added a layer 3 switch which aggregates traffic between the users and the edge devices.  For those of you who don't know, a layer 3 switch is essentially a switch that can also route IP.  This means that the switch functions both at the data link layer (layer 2) for switching frames, and at the network layer (layer 3) for routing packets.  In addition to defining VLANs like any other layer 2 switch, a layer 3 switch can define interfaces on those VLANs.  Cisco calls these interfaces SVIs, or switched virtual interfaces.  The switch sees the networks on all of the defined interfaces as directly connected routes.  What this means is that the layer 3 switch can route between any of the VLANs that have SVIs defined on them.  So in the above example let's say we have two VLANs defined on the layer 3 switch.

VLAN 2 – Communication VLAN – SVI : 172.10.17.254
VLAN 3 – User data VLAN – SVI: 192.168.127.254
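
Just to make that concrete, here's roughly what defining those SVIs looks like on a Cisco layer 3 switch.  The hostname, the descriptions, and the /24 masks are my own assumptions, and I'm assuming the VLANs themselves already exist:

L3Switch# config t
L3Switch(config)# ip routing
L3Switch(config)# interface vlan 2
L3Switch(config-if)# description Com VLAN
L3Switch(config-if)# ip address 172.10.17.254 255.255.255.0
L3Switch(config-if)# interface vlan 3
L3Switch(config-if)# description User data VLAN
L3Switch(config-if)# ip address 192.168.127.254 255.255.255.0

The ip routing command is what actually turns on layer 3 forwarding between the SVIs on most Catalyst platforms.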

We then tell the users to use the 192.168.127.254 IP address as their default gateway.  Additionally, we configure routes on the layer 3 switch as follows.

192.168.137.0 255.255.255.0 172.10.17.2
0.0.0.0 0.0.0.0 172.10.17.1

192.168.137.0 is the remote network at the other end of the point to point link.  So we tell the layer 3 switch to route traffic destined for that subnet to the router's inside interface, which can then route the traffic over the link to the remote office.  The second route is a default route, which tells the layer 3 switch to throw any other requests out to the internet through the ASA.
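
In IOS terms, those two routes end up on the layer 3 switch looking something like this (same hypothetical hostname as before, addresses straight from the list above):

L3Switch(config)# ip route 192.168.137.0 255.255.255.0 172.10.17.2
L3Switch(config)# ip route 0.0.0.0 0.0.0.0 172.10.17.1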

Return Routing
At this point, some people might stop here, thinking they have a complete network configured.  However, they've forgotten one crucial piece.  Return routing…  If you don't understand what I mean by that, let's walk through a quick example.  Let's say that a user on the LAN tries to ping google.com.  The following is a list of what occurs…

-Client tries pinging google.com
-A local DNS server resolves the domain google.com to an IP
-The client machine realizes that the IP returned from the DNS lookup is off network, so it sends its ping to its default gateway
-The default gateway does a route lookup, realizes it doesn’t have a specific route for the IP and sends the traffic out its default route to the ASA
-The ASA receives the traffic and allows the traffic out to the internet
-When a reply comes back, the ASA looks at the destination network and drops the packet

The only networks the ASA knows about are the ones defined on its interfaces.  In this case, the reply came back to the ASA, and since the ASA didn't have a route to the 192.168.127.0 network, it dropped the packet.  The same result would happen with traffic coming and going through the point to point circuit router.  Both edge devices need to know how to get traffic back to internal subnets that they aren't directly connected to.  The problem is easily remedied by adding a route to each edge device.  Adding the route on the ASA would look something like this.

ASA# config t
ASA(config)# route inside 192.168.127.0 255.255.255.0 172.10.17.254

A look at the routing table now shows…

ASA# show route

Codes: C - connected, S - static, I - IGRP, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2, E - EGP
       i - IS-IS, L1 - IS-IS level-1, L2 - IS-IS level-2, ia - IS-IS inter area
       * - candidate default, U - per-user static route, o - ODR
       P - periodic downloaded static route

Gateway of last resort is 173.160.107.242 to network 0.0.0.0

C    172.10.17.0 255.255.255.0 is directly connected, inside
C    <Outside Network for internet connection> 255.255.255.252 is directly connected, outside
S    192.168.127.0 255.255.255.0 [1/0] via 172.10.17.254, inside
S*   0.0.0.0 0.0.0.0 [1/0] via <Outside Gateway for internet connection>, outside
ASA#

As you can see, the ASA now has a route to the 192.168.127.0 network through the SVI on the layer 3 switch.  The point to point router needs the same treatment, which I'll sketch out below.  The point of this post was to make sure that you understand that the return route is just as important as the forward route.  Hopefully you've also realized that the ASA isn't always a standalone device and is often just one piece of a much larger networking fabric.
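
As promised, here's what the matching return route would look like on the point to point router.  I'm assuming its inside interface is the 172.10.17.2 address referenced in the routes earlier, and that it's a standard IOS router:

Router# config t
Router(config)# ip route 192.168.127.0 255.255.255.0 172.10.17.254

With that route in place, replies coming back over the point to point circuit get handed to the SVI on the layer 3 switch, which then routes them back into the user VLAN.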

