MPLS-TE Basics, Pt.3 (CSPF)

In the previous section we set up a plain MPLS-TE tunnel that by default took the IGP shortest path. (More precisely, the tunnel took the shortest path based on the TE metric, and the TE metric is by default the IGP metric.) In this section we will explore a few options for steering the tunnel along the path R1-R2-R4-XR3-XR5.

Recall that we set the path-option to dynamic on the tunnel interface on R1.

R1#show run int tun1
Building configuration...

Current configuration : 155 bytes
!
interface Tunnel1
 ip unnumbered Loopback0
 tunnel mode mpls traffic-eng
 tunnel destination 5.5.5.5
 tunnel mpls traffic-eng path-option 1 dynamic

By default the TE weight of every interface is 10, because the IS-IS default interface metric is 10 and the TE weight defaults to the IGP metric.

Let’s artificially increase the TE weight for Gi3 on R2:

#R2
int Gi3
 mpls traffic-eng administrative-weight 100

On R1 we can see that R2 immediately originated a new R2.00-00 LSP to reflect the change in the TE weight:

R1#show isis database R2.00-00 verbose


IS-IS Level-1 LSP R2.00-00
LSPID                 LSP Seq Num  LSP Checksum  LSP Holdtime/Rcvd      ATT/P/OL
R2.00-00              0x0000007A   0x1476                1192/1199      0/0/0
  Area Address: 49.0001
  NLPID:        0xCC 0x8E 
  Router ID:    2.2.2.2
  Hostname: R2

<snip>

  Metric: 10         IS-Extended XR3.00
    Interface IP Address: 10.2.3.2
    Neighbor IP Address: 10.2.3.3
    Affinity: 0x00000000
    
    Physical BW: 1000000 kbits/sec
    Reservable Global Pool BW: 750000 kbits/sec
    Global Pool BW Unreserved:
      [0]:   750000 kbits/sec, [1]:   750000 kbits/sec
      [2]:   750000 kbits/sec, [3]:   750000 kbits/sec
      [4]:   750000 kbits/sec, [5]:   750000 kbits/sec
      [6]:   750000 kbits/sec, [7]:   750000 kbits/sec
    Affinity: 0x00000000
    Admin. Weight: 100
    Physical LINK BW: 1000000 kbits/sec

  <snip>

Let’s tell R1 to recalculate the path for the tunnel interface. By default this happens every hour, but we can force the recalculation with the command below in enable mode.

R1#mpls traffic-eng reoptimize tunnel1
!
R1#show mpls traffic-eng tunnels | in Explicit      
      Explicit Route: 10.1.2.2 10.2.4.4 10.3.4.3 10.3.5.5

The path above is now R1-R2-R4-XR3-XR5. "Explicit Route" refers to the fact that this list of next-hop addresses is placed in the ERO (Explicit Route Object). CSPF calculated this as the shortest path using the TE metric, and the resulting path was placed in the ERO and signaled via RSVP.

Let’s put the TE weight on Gi3 of R2 back to the default and explicitly define the path on R1 this time.

#R2
int Gi3
 no mpls traffic-eng administrative-weight

#R1
mpls traffic-eng reoptimize tunnel1
!
show mpls traffic-eng tunnels | in Explicit
      Explicit Route: 10.1.2.2 10.2.3.3 10.3.5.5 5.5.5.5

To create our own ERO instead of letting CSPF calculate it, we create an explicit path. Let’s first list the loopback of each router we want to traverse in an explicit-path. Then we will use the explicit-path as a path-option under the tunnel interface.

#R1
ip explicit-path name R1-R2-R4-XR3-XR5
 index 1 next-address 2.2.2.2
 index 2 next-address 4.4.4.4
 index 3 next-address 3.3.3.3
 index 4 next-address 5.5.5.5
!
int tun1
 tunnel mpls traffic-eng path-option 1 explicit name R1-R2-R4-XR3-XR5
 tunnel mpls traffic-eng path-option 2 dynamic

By setting option 2 as dynamic, if the first option is invalid for some reason, the tunnel will simply calculate the path dynamically (as it does now). This prevents a node failure, such as a failure of R4, from completely breaking the tunnel. If R4 were to fail and we didn't have option 2 configured, the explicit-path would be invalid and the tunnel would go down.
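The fallback behavior can be sketched in a few lines of Python. This is a hypothetical illustration, not the IOS implementation: `select_path`, `explicit`, and `dynamic` are names invented here, and each option is modeled as a function that returns a hop list or None when it cannot produce a path.

```python
# Hypothetical sketch of path-option fallback: options are tried in
# ascending order and the first one that yields a usable path wins.
def select_path(options):
    """options: list of (option_number, compute_fn); compute_fn returns a
    hop list, or None when the option is invalid (e.g. an explicit path
    through a failed node)."""
    for number, compute in sorted(options, key=lambda o: o[0]):
        path = compute()
        if path is not None:
            return number, path
    return None, None  # no option is valid: the tunnel stays down

explicit = lambda: None                         # R4 failed: option 1 invalid
dynamic = lambda: ["R1", "R2", "XR3", "XR5"]    # CSPF still finds a path

print(select_path([(1, explicit), (2, dynamic)]))  # option 2 is used
```

With only option 1 configured, `select_path` returns `(None, None)`, mirroring a down tunnel.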

We’ll reoptimize the tunnel again and verify that our manual ERO worked.

#R1
mpls traffic-eng reoptimize tunnel1
!
show mpls traffic-eng tunnels | in path
    path option 1, type explicit R1-R2-R4-XR3-XR5 (Basis for Setup, path weight 40)
!
show mpls traffic-eng tunnels | in Explicit
    Hop Limit: disabled [ignore: Explicit Path Option with all Strict Hops]
      Explicit Route: 10.1.2.2 10.2.4.4 10.3.4.3 10.3.5.5

An alternative is to define an explicit-path that excludes a link. Let’s try that.

#R1
int tun1
 no tunnel mpls traffic-eng path-option 1
end
!
mpls traffic-eng reoptimize tunnel1
!
show mpls traffic-eng tunnels | in Explicit
      Explicit Route: 10.1.2.2 10.2.3.3 10.3.5.5 5.5.5.5
!
conf t
ip explicit-path name AVOID_R2_GI3
 exclude-address 10.2.3.2
!
int tun1 
 tunnel mpls traffic-eng path-option 1 explicit name AVOID_R2_GI3
end
!
mpls traffic-eng reoptimize tunnel1
!
show mpls traffic-eng tunnels | in Explicit
      Explicit Route: 10.1.2.2 10.2.4.4 10.3.4.3 10.3.5.5

This works because the CSPF algorithm prunes the R2 Gi3 interface from the topology. When CSPF then runs SPF, the best (and only) remaining path is R1-R2-R4-XR3-XR5. If there were another path that avoided R2 Gi3 and had a lower total TE weight, that path would be chosen instead.
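The prune-then-SPF idea can be sketched with ordinary Dijkstra. This is a minimal illustration, not the IOS code: the topology, link addresses, and the `cspf` helper are assumptions loosely modeled on the lab, with every link at the default TE metric of 10.

```python
import heapq

# Hypothetical topology: (node_a, node_b, te_metric, a_side_link_address).
LINKS = [
    ("R1", "R2", 10, "10.1.2.1"),
    ("R2", "XR3", 10, "10.2.3.2"),   # R2 Gi3 -- the link we exclude
    ("R2", "R4", 10, "10.2.4.2"),
    ("R4", "XR3", 10, "10.3.4.4"),
    ("XR3", "XR5", 10, "10.3.5.3"),
]

def cspf(links, src, dst, exclude_addr=None):
    """Prune any excluded link, then run plain Dijkstra on what is left."""
    graph = {}
    for a, b, metric, addr in links:
        if addr == exclude_addr:
            continue  # the 'exclude-address' step: drop the link before SPF
        graph.setdefault(a, []).append((b, metric))
        graph.setdefault(b, []).append((a, metric))
    queue, seen = [(0, src, [src])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return path, cost
        if node in seen:
            continue
        seen.add(node)
        for nbr, metric in graph.get(node, []):
            heapq.heappush(queue, (cost + metric, nbr, path + [nbr]))
    return None, None

print(cspf(LINKS, "R1", "XR5"))
# -> (['R1', 'R2', 'XR3', 'XR5'], 30): the unconstrained shortest path
print(cspf(LINKS, "R1", "XR5", exclude_addr="10.2.3.2"))
# -> (['R1', 'R2', 'R4', 'XR3', 'XR5'], 40): R2 Gi3 pruned
```

Note the second result's cost of 40 matches the "path weight 40" shown earlier in the show mpls traffic-eng tunnels output.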

Let’s explore one final way that we can steer traffic along the R1-R2-R4-XR3-XR5 path, which is by using available bandwidth as a constraint. First we’ll reset the tunnel back to the regular dynamic path.

#R1
int tun1
 no tunnel mpls traffic-eng path-option 1
!
mpls traffic-eng reoptimize tunnel1
!
show mpls traffic-eng tunnels | in Explicit
      Explicit Route: 10.1.2.2 10.2.3.3 10.3.5.5 5.5.5.5

Right now every interface has the default 750M of available bandwidth: all interfaces are 1G, and by default RSVP advertises 75% of the interface bandwidth as reservable. Let’s artificially lower the available bandwidth of Gi3 on R2 and then give tunnel1 a high bandwidth requirement. CSPF should prune Gi3 on R2 from the topology, as it will not have sufficient available bandwidth to support the tunnel.
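The bandwidth constraint is just another pruning step before SPF runs. The sketch below is a hypothetical illustration: the link names and the `usable_links` helper are invented for this example, with R2 Gi3 advertising 10% of 1G as in the config that follows.

```python
# Hypothetical per-link unreserved bandwidth in kbps, loosely modeled on
# the lab after 'ip rsvp bandwidth percent 10' is applied to R2 Gi3.
UNRESERVED_KBPS = {
    "R2:Gi2": 750000,   # toward R4: default 75% of 1G
    "R2:Gi3": 100000,   # toward XR3: only 10% of 1G is reservable
    "R4:Gi3": 750000,
}

def usable_links(unreserved, demand_kbps):
    """Return the links CSPF keeps for a tunnel requesting demand_kbps.

    Links whose unreserved bandwidth is below the request are pruned
    from the topology before SPF runs, just like the exclude-address
    case earlier."""
    return {link for link, bw in unreserved.items() if bw >= demand_kbps}

print(usable_links(UNRESERVED_KBPS, 200000))  # R2 Gi3 is pruned
```

A 200M request prunes R2 Gi3 (100M unreserved), while a small request (say 50M) would leave the full topology intact.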

#R2
int Gi3
 ip rsvp bandwidth percent 10

#R1
! Specify tun1 to require 200M of bandwidth
int tun1
 tunnel mpls traffic-eng bandwidth 200000
end
!
mpls traffic-eng reoptimize tunnel1
!
R1#show mpls traffic-eng tunnels | in Explicit
      Explicit Route: 10.1.2.2 10.2.4.4 10.3.4.3 10.3.5.5

We can see the reserved bandwidth on each interface involved in this tunnel from any router in the topology. This is because the bandwidth reservation information is flooded by the IGP at regular intervals or when there is a change in the amount of available bandwidth on an interface. For example, from R1 we can see that R2 now has 200M less available bandwidth on Gi2.

#R1
show mpls traffic-eng topology 2.2.2.2

<snip>

      link[1]: Point-to-Point, Nbr IGP Id: 0000.0000.0004.00, nbr_node_id:13, gen:21, nbr_p:80007F15A0A35B18
      frag_id: 0, Intf Address: 10.2.4.2, Nbr Intf Address: 10.2.4.4
      TE metric: 10, IGP metric: 10, attribute flags: 0x0
      SRLGs: None 
      physical_bw: 1000000 (kbps), max_reservable_bw_global: 750000 (kbps)
      max_reservable_bw_sub: 0 (kbps)

                             Global Pool       Sub Pool
           Total Allocated   Reservable        Reservable
           BW (kbps)         BW (kbps)         BW (kbps)
           ---------------   -----------       ----------
    bw[0]:            0           750000                0
    bw[1]:            0           750000                0
    bw[2]:            0           750000                0
    bw[3]:            0           750000                0
    bw[4]:            0           750000                0
    bw[5]:            0           750000                0
    bw[6]:            0           750000                0
    bw[7]:       200000           550000                0

<snip>

Priority values

Since we did not specify a bandwidth priority, tun1 defaulted to 7 for both the setup and hold priority. 7 is the worst priority and 0 is the best. We can see this reflected in the running config for interface Tunnel1, and it is why the 200M is allocated only at bw[7] in the output above.

#R1
show run int tun1

interface Tunnel1
 ip unnumbered Loopback0
 mpls traffic-eng tunnels
 tunnel mode mpls traffic-eng
 tunnel destination 5.5.5.5
 tunnel mpls traffic-eng priority 7 7
 tunnel mpls traffic-eng bandwidth 200000
 tunnel mpls traffic-eng path-option 2 dynamic

  • tunnel mpls traffic-eng priority 7 7 was not configured explicitly. It was “auto-configured” when the bandwidth requirement was added with the bandwidth 200000 command

The setup priority is the first number and the hold priority is the second. A new tunnel can preempt an established tunnel only if the new tunnel's setup priority is stronger (numerically lower) than the established tunnel's hold priority. For both values, a lower number is a stronger priority.

The hold priority cannot be weaker (a higher number) than the setup priority, otherwise you could encounter a situation in which two tunnels continually churn and replace each other.

R1(config-if)#tunnel mpls traffic-eng priority 1 2
% Setup priority (1) may not be higher than hold priority (2)

This can be a little confusing, so I’ll state it another way with an example. Say a tunnel has a setup priority of 3 and a hold priority of 1. It can only preempt tunnels whose hold priority is numerically higher than 3 (i.e. weaker). Once this tunnel is established, only tunnels with a setup priority of 0 can preempt it, because its hold priority is 1.

For me, what makes this so confusing is that a higher number is a lower priority. In the error message above, the setup priority was a lower number but a higher priority; "may not be higher" refers to the priority, not the number. Just remember that 0 is the best priority and 7 is the worst. The hold priority cannot be weaker than the setup priority, and a weaker hold priority has a numerically higher value.
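The two rules can be pinned down in a few lines of Python. This is a hypothetical sketch: `config_valid` and `can_preempt` are names invented here to encode the rules described above, not IOS functions.

```python
def config_valid(setup, hold):
    """IOS rejects a hold priority that is weaker (numerically higher)
    than the setup priority, to avoid two tunnels endlessly preempting
    each other."""
    return hold <= setup

def can_preempt(new_setup, existing_hold):
    """A new tunnel preempts an established one only if its setup
    priority is stronger (numerically lower) than the victim's hold
    priority."""
    return new_setup < existing_hold

# The worked example: a tunnel with setup 3 / hold 1.
assert config_valid(3, 1)       # hold may be stronger than setup
assert not config_valid(1, 2)   # rejected, as in the error message above
assert can_preempt(3, 7)        # it can bump weaker (hold 4-7) tunnels
assert not can_preempt(3, 3)    # ...but not hold-3 or stronger
assert can_preempt(0, 1)        # only setup 0 can bump its hold of 1
assert not can_preempt(1, 1)
```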

Let’s go back to the lab and create a second tunnel interface that reserves 300M with a setup priority of 5 and hold priority of 5. We’ll examine the bandwidth allocations again on Gi2 of R2.

#R1
int tun2
 ip unnumbered lo0
 tunnel mode mpls traffic-eng
 tunnel destination 5.5.5.5
 tunnel mpls traffic-eng priority 5 5
 tunnel mpls traffic-eng bandwidth 300000
 tunnel mpls traffic-eng path-option 1 dynamic
 end
!

show mpls traffic-eng topology 2.2.2.2     

<snip>

      link[1]: Point-to-Point, Nbr IGP Id: 0000.0000.0004.00, nbr_node_id:13, gen:26, nbr_p:80007F15A0A1A5E0
      frag_id: 0, Intf Address: 10.2.4.2, Nbr Intf Address: 10.2.4.4
      TE metric: 10, IGP metric: 10, attribute flags: 0x0
      SRLGs: None 
      physical_bw: 1000000 (kbps), max_reservable_bw_global: 750000 (kbps)
      max_reservable_bw_sub: 0 (kbps)

                             Global Pool       Sub Pool
           Total Allocated   Reservable        Reservable
           BW (kbps)         BW (kbps)         BW (kbps)
           ---------------   -----------       ----------
    bw[0]:            0           750000                0
    bw[1]:            0           750000                0
    bw[2]:            0           750000                0
    bw[3]:            0           750000                0
    bw[4]:            0           750000                0
    bw[5]:       300000           450000                0
    bw[6]:            0           450000                0
    bw[7]:       200000           250000                0

<snip>

Notice that the available bandwidth at bw[7] dropped by an additional 300M. A reservation made at a given priority reduces the available bandwidth at that priority and at every weaker (numerically higher) priority. Priority values 0-4 still have the full 750M available for tunnels.
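The per-priority accounting above can be reproduced with a short Python sketch. This is an illustration of the arithmetic, assuming the lab's 750M maximum reservable bandwidth; `unreserved` is a name invented here.

```python
# A reservation at priority p consumes bandwidth at level p and at every
# weaker (numerically higher) level; stronger levels are unaffected.
MAX_RESERVABLE = 750000  # kbps: 75% of the 1G link

def unreserved(reservations):
    """reservations: list of (priority, kbps). Returns bw[0..7]."""
    bw = []
    for level in range(8):
        used = sum(kbps for prio, kbps in reservations if prio <= level)
        bw.append(MAX_RESERVABLE - used)
    return bw

print(unreserved([(5, 300000), (7, 200000)]))
# -> [750000, 750000, 750000, 750000, 750000, 450000, 450000, 250000]
```

The result matches the Reservable BW column in the show mpls traffic-eng topology output above: 450M at levels 5 and 6, and 250M at level 7.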

A handy command for viewing the current RSVP bandwidth reservations on an interface is show ip rsvp installed:

R2#show ip rsvp installed
RSVP: GigabitEthernet1 has no installed reservations
RSVP: GigabitEthernet2
BPS    To              From            Protoc DPort  Sport         VRF            
200M   5.5.5.5         1.1.1.1         0      1      22     
300M   5.5.5.5         1.1.1.1         0      2      1      
RSVP: GigabitEthernet3 has no installed reservations

Now that we have our tunnel setup, is traffic from R1 to XR5 automatically using this tunnel?

#R1
show mpls traffic-eng tunnels tunnel 1 | sec RSVP_Path_Info
    RSVP Path Info:
      My Address: 10.1.2.1   
      Explicit Route: 10.1.2.2 10.2.4.4 10.3.4.3 10.3.5.5 
                      5.5.5.5 
      Record   Route:   NONE
      Tspec: ave rate=200000 kbits, burst=1000 bytes, peak rate=200000 kbits
!
traceroute 5.5.5.5 source 1.1.1.1 probe 1
Type escape sequence to abort.
Tracing the route to 5.5.5.5
VRF info: (vrf in name/id, vrf out name/id)
  1 10.1.2.2 2 msec
  2 10.2.3.3 4 msec
  3 10.3.5.5 6 msec

Traffic still follows the IGP best path. We still need to do one last thing, which is to steer traffic into the tunnel. We’ll cover this in part 4.
