mVPN Profile 1

mVPN profile 0 requires PIM in the provider network. This is called P-PIM (Provider PIM). mVPN profile 1 removes the need for both P-PIM and GRE: the tunneling mechanism is MPLS instead of GRE, and mLDP provides multicast forwarding in the MPLS data plane.

Profile 1 is sometimes referred to as “Rosen mLDP” and profile 0 is called “Rosen GRE.” Eric Rosen also contributed to the specification for mLDP. In Rosen mLDP, we still have C-PIM, and we still need a way to tunnel C-PIM across the provider core; MPLS is used instead of GRE to do so. A new mechanism was needed in order to achieve multicast forwarding in the MPLS data plane.

mLDP (multicast LDP) is an enhancement to LDP, not a separate protocol. mLDP uses regular LDP, but neighbors negotiate support for mLDP through a capabilities exchange. mLDP uses mechanisms similar to PIM, such as replication of traffic and RPF interfaces, but without PIM signaling and the periodic state refreshes that PIM requires. mLDP uses a special FEC which includes extra information (an opaque value) to identify each multipoint tree. More on this later.
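Conceptually, the multipoint FEC that mLDP adds carries three pieces of information. This is a simplified view; the exact encoding is defined in RFC 6388:

Tree type : P2MP, MP2MP-downstream, or MP2MP-upstream
Root      : the IP address of the root of the tree
Opaque    : a value that uniquely identifies this tree at the root
            (for the default MDT, this carries the customer VPN ID)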

mLDP supports two types of trees: P2MP (point-to-multipoint) and MP2MP (multipoint-to-multipoint).

P2MP

In P2MP, there is a single root (the sending PE) and traffic is replicated to all other participating PEs. P2MP is used for the data MDTs.

In the above diagram, PE2 is the root, and PE1 and PE3 are the leaves. P2 performs replication of the data.

P2MP signaling starts at the leaves. PE1 and PE3 advertise an mLDP P2MP FEC rooted at PE2 with an associated label. P1 advertises its own label out its RPF interface for PE2 towards P2. (P2 is P1’s upstream LDP neighbor for 2.2.2.2.) P2 advertises a label towards PE2 as well. This process is similar to the PIM Join process in PIM-SM, where PIM Joins are forwarded up the tree towards the root (sender). In the PIM-SM comparison, PE1 and PE3 are receivers, PE2 is the source, and P1 and P2 are PIM routers.

When PE2 sends traffic towards P2, it uses the label it learned from P2. P2 replicates this out its two interfaces with the labels learned from P1 and PE3. P1 forwards to PE1 with the label advertised by PE1.
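Putting the two halves together, the exchange looks roughly like this. The labels L1 through L4 are made up for illustration; each router allocates its own values:

Signaling (leaves toward root PE2):
  PE1 --(P2MP FEC root=PE2, label L1)--> P1
  P1  --(P2MP FEC root=PE2, label L2)--> P2
  PE3 --(P2MP FEC root=PE2, label L3)--> P2
  P2  --(P2MP FEC root=PE2, label L4)--> PE2

Forwarding (root PE2 toward leaves):
  PE2 sends traffic with label L4
  P2 replicates: swaps L4 -> L2 toward P1, and L4 -> L3 toward PE3
  P1 swaps L2 -> L1 toward PE1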

MP2MP

In MP2MP, there is a single root, which can be any router in the provider network. All participating PEs can send and receive traffic. MP2MP is used for the default MDT.

Every PE is a leaf that can both send and receive, which is similar to PIM-BiDir. One router must be the root of the MP2MP tree; P2 is chosen here. The arbitrary root in MP2MP plays a role similar to the RP in PIM-BiDir.

All PEs begin the signaling process just as in P2MP. PE1 advertises an mLDP MP2MP FEC rooted at P2 with a special opaque value consisting of a VPN ID for the customer VRF. P1 advertises its own label towards P2, the root of the MP2MP FEC. P2, because it is the MP2MP root, advertises its own label towards PE1. This is the label that PE1 will use to send traffic to the MP2MP tree. The label PE1 advertised is the label PE1 will use to receive traffic on this tree. Each other PE repeats this process. P2 “ties” all these branches together based on the matching VPN ID in the opaque value.
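Hop by hop, PE1’s branch of this exchange looks roughly like this. The labels D1/D2 (downstream) and U1/U2 (upstream) are made up for illustration:

Downstream signaling (leaf PE1 toward root P2):
  PE1 --(MP2MP-down FEC root=P2, opaque=VPN ID, label D1)--> P1
  P1  --(MP2MP-down FEC root=P2, opaque=VPN ID, label D2)--> P2

Upstream signaling (root P2 back toward leaf PE1):
  P2  --(MP2MP-up FEC, label U1)--> P1
  P1  --(MP2MP-up FEC, label U2)--> PE1

PE1 sends into the tree using U2; traffic from the tree arrives at PE1 with D1.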

Let’s examine this in the lab to fully understand how mLDP works.

Lab

We’ll re-use the lab from the previous article.
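For reference, here is a rough sketch of the topology, reconstructed from the addressing in the outputs below (C1 sits behind PE1, C2 behind PE2, and a third CE sits behind PE3):

C1 --- PE1 ----- P1 ----- P2 ----- PE2 --- C2
                           |
                          PE3 --- CE3

Loopbacks: PE1 = 1.1.1.1, PE2 = 2.2.2.2, PE3 = 3.3.3.3,
           P1 = 10.10.10.10, P2 = 20.20.20.20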

First, we must remove P-PIM and the GRE-based MDT configuration.

#PE1, PE2
no ip multicast-routing distributed
!
no ip pim ssm default
!
int Gi1
 no ip pim sparse-mode
!
int Lo0
 no ip pim sparse-mode
!
vrf definition CUSTOMER
 address-family ipv4
  no mdt default 232.1.1.1
  no mdt data 232.100.100.0 0.0.0.255 threshold 1
  no mdt data threshold 1

#P1, P2
no multicast-routing
no router pim

#PE3
multicast-routing
 no address-family ipv4
 vrf CUSTOMER address-family ipv4
  no mdt source lo0
  no mdt default ipv4 232.1.1.1
  no mdt data 232.100.100.0/24
!
router pim
 no address-family ipv4

As a reminder, we already have C-PIM configured on the PEs. We now need to configure mLDP. On IOS-XE, mLDP is enabled by default. On IOS-XR, you must enable it explicitly. Interestingly, IOS-XR negotiates the capability either way, but without the configuration below, mLDP will not work on IOS-XR.

RP/0/0/CPU0:P1#show mpls ldp neighbor 1.1.1.1 detail
Peer LDP Identifier: 1.1.1.1:0
  TCP connection: 1.1.1.1:646 - 10.10.10.10:30215

  <snip>

  Capabilities:
    Sent: 
      0x508  (MP: Point-to-Multipoint (P2MP))
      0x509  (MP: Multipoint-to-Multipoint (MP2MP))
      0x50b  (Typed Wildcard FEC)
    Received: 
      0x508  (MP: Point-to-Multipoint (P2MP))
      0x509  (MP: Multipoint-to-Multipoint (MP2MP))
      0x50b  (Typed Wildcard FEC)
  • IOS-XR will advertise P2MP and MP2MP capability by default

#P1, P2, PE3
mpls ldp 
 mldp 
  address-family ipv4
  • This is necessary to enable mLDP on IOS-XR

Next we select an mLDP root for the default MDT and a VPN ID. We’ll use P2 for the MP2MP root.

  • root = 20.20.20.20 (P2)

  • VPN ID = 100:1

First we’ll configure PE1 and then examine the mLDP messages that are exchanged:

#PE1
vrf definition CUSTOMER
 vpn id 100:1
 address-family ipv4 unicast
  mdt default mpls mldp 20.20.20.20

PE1 advertises an MP2MP-downstream FEC binding to its LDP neighbor on the upstream path towards P2. The opaque value contains the VPN ID, which the root uses to associate each separate “branch” with the same MP2MP tree. This FEC is downstream because it carries the label that the root should use to send traffic down the tree towards PE1 (which acts as a leaf).

P1 sees that the root is P2 and advertises its own MP2MP-downstream FEC to P2. P2 then creates an MP2MP-upstream FEC in the reverse direction and advertises it towards PE1. We can see the FEC advertised by P1 below. This FEC is upstream because it carries the label that PE1 should use to forward traffic up the tree towards P2 when PE1 is a sender.

Also notice that a new interface was created on PE1:

*Sep 28 21:27:22.260: %LINK-3-UPDOWN: Interface Lspvif0, changed state to up
*Sep 28 21:27:23.261: %LINEPROTO-5-UPDOWN: Line protocol on Interface Lspvif0, changed state to up

This is an LSP (label switched path) virtual interface (vif) for the MP2MP tree and is used in C-PIM as an incoming/outgoing interface. It is similar to the tunnel interface created automatically in the Rosen GRE lab in the previous article.
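If you want to confirm this yourself, Lspvif0 should appear like any regular PIM-enabled interface in the VRF. Output is omitted here; these are just suggested commands:

PE1#show ip pim vrf CUSTOMER interface
PE1#show interfaces Lspvif0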

We can see more information about the mLDP MDT using the following command:

PE1#show mpls mldp database 
  * For interface indicates MLDP recursive forwarding is enabled
  * For RPF-ID indicates wildcard value
  > Indicates it is a Primary MLDP MDT Branch

LSM ID : 3 (RNR LSM ID: 4)   Type: MP2MP   Uptime : 15:32:49
  FEC Root           : 20.20.20.20 
  Opaque decoded     :  
  Opaque length      : 11 bytes
  Opaque value       : 02 000B 0001000000000100000000
  RNR active LSP     : (this entry)
  Upstream client(s) :
    10.10.10.10:0    [Active]
      Expires        : Never         Path Set ID  : 3
      Out Label (U)  :          Interface    : GigabitEthernet1*
      Local Label (D):             Next Hop     : 10.1.10.10
  Replication client(s): 
>   MDT  (VRF CUSTOMER)
      Uptime         : 15:32:49      Path Set ID  : 4
      Interface      : Lspvif0       RPF-ID       : *
  • Notice that the Opaque value contains the MDT VPN-ID. IOS decodes the raw Opaque byte value.

Enable the default MDT tree using mLDP on PE2:

#PE2
vrf definition CUSTOMER
 vpn id 100:1
 address-family ipv4 unicast
  mdt default mpls mldp 20.20.20.20

Enable the default MDT tree on PE3. The configuration is a bit more complex on IOS-XR. Read the comments to understand the need for each command.

#PE3
vrf CUSTOMER
 vpn id 100:1
!
multicast-routing
 address-family ipv4
  ! This is needed because lo0 is the source interface for the default mdt
  int lo0 enable  
 vrf CUSTOMER
  address-family ipv4
   mdt default mldp ipv4 20.20.20.20
   mdt source lo0

! The IOS-XR router will not accept RP mappings by default due to an RPF failure on the BSR address
! To fix this, we must tell the router to do an RPF lookup using the mLDP topology.
! By default it uses PIM, which explains why profile 0 works without this step.

router pim vrf CUSTOMER 
 address-family ipv4
  rpf topology route-policy USE_MLDP
!
route-policy USE_MLDP
 set core-tree mldp-default

P2 now has three branches for the MP2MP tree.

RP/0/0/CPU0:P2#show mpls mldp bindings
Wed Sep 28 21:52:16.219 UTC
mLDP MPLS Bindings database

LSP-ID: 0x00001 Paths: 3 Flags:
 0x00001 MP2MP  20.20.20.20 [mdt 100:1 0]
   Local Label: 24005 Remote: 24007 NH: 10.10.20.10 Inft: GigabitEthernet0/0/0/0
   Local Label: 24006 Remote: 24 NH: 10.2.20.2 Inft: GigabitEthernet0/0/0/1
   Local Label: 24008 Remote: 24011 NH: 10.3.20.3 Inft: GigabitEthernet0/0/0/2

The local label is the label advertised to the LDP neighbor on that interface. The remote label is the label P2 will swap to when delivering traffic out the interface. The remote label is learned from the directly connected neighbor.

Traffic received from PE2 should arrive with an incoming label of 24006 and be replicated towards PE1 and PE3 with labels 24007 and 24011, respectively.

What label should traffic from PE1 have as received by PE2 on the P2-PE2 link?

The answer is 24. This is the label PE2 advertised to P2.
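You can verify this from PE2’s side with the same command we used on PE1 earlier. The MP2MP entry’s Local Label (D) field on PE2 should show 24, the downstream label it advertised towards P2 (output omitted here):

PE2#show mpls mldp database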

All PEs should now see each other as C-PIM neighbors.

PE1#show ip pim vrf CUSTOMER neighbor 
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable,
      L - DR Load-balancing Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
100.64.0.2        GigabitEthernet2         16:46:46/00:01:32 v2    1 / DR S P G
3.3.3.3           Lspvif0                  15:09:02/00:01:28 v2    1 / DR G
2.2.2.2           Lspvif0                  15:24:19/00:01:43 v2    1 / S P G
  • The C-PIM neighborships form over the Lspvif0 interface

On IOS-XR the virtual interface is called LmdtCUSTOMER. The format is Lmdt<VRF name> where L stands for Labeled.
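Before looking at the neighbors, you can confirm that the virtual interface is PIM-enabled in the VRF (output omitted here):

RP/0/0/CPU0:PE3#show pim vrf CUSTOMER interface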

RP/0/0/CPU0:PE3#show pim vrf CUSTOMER neighbor 
Thu Sep 29 13:06:25.744 UTC

PIM neighbors in VRF CUSTOMER
Flag: B - Bidir capable, P - Proxy capable, DR - Designated Router,
      E - ECMP Redirect capable
      * indicates the neighbor created for this router

Neighbor Address             Interface              Uptime    Expires  DR pri   Flags

100.64.0.9*                  GigabitEthernet0/0/0/1 17:21:20  00:01:30 1      B E
100.64.0.10                  GigabitEthernet0/0/0/1 16:52:20  00:01:28 1 (DR) P
1.1.1.1                      LmdtCUSTOMER           15:14:06  00:01:23 1      P
2.2.2.2                      LmdtCUSTOMER           15:14:09  00:01:39 1      P
3.3.3.3*                     LmdtCUSTOMER           15:18:41  00:01:31 1 (DR)

Multicast traffic should now be working.

#C1
int Gi0/0
 ip igmp join-group 239.1.2.3

#C2
C2#ping 239.1.2.3 repeat 3
Type escape sequence to abort.
Sending 3, 100-byte ICMP Echos to 239.1.2.3, timeout is 2 seconds:

Reply to request 0 from 10.1.1.10, 56 ms
Reply to request 1 from 10.1.1.10, 18 ms
Reply to request 1 from 10.1.1.10, 42 ms
Reply to request 2 from 10.1.1.10, 23 ms

Examine the mroute tables on each PE. For (10.1.2.10, 239.1.2.3), the incoming interface on PE1 is Lspvif0. On PE2, the outgoing interface is Lspvif0. On PE3, there is no state, because it has neither receivers nor senders.

PE1#show ip mroute vrf CUSTOMER | sec 10.1.2.10
(10.1.2.10, 239.1.2.3), 00:01:23/00:01:36, flags: T
  Incoming interface: Lspvif0, RPF nbr 2.2.2.2
  Outgoing interface list:
    GigabitEthernet2, Forward/Sparse, 00:01:23/00:03:07

PE2#show ip mroute vrf CUSTOMER | sec 10.1.2.10
(10.1.2.10, 239.1.2.3), 00:02:12/00:00:47, flags: T
  Incoming interface: GigabitEthernet2, RPF nbr 100.64.0.6
  Outgoing interface list:
    Lspvif0, Forward/Sparse, 00:02:12/00:03:15

RP/0/0/CPU0:PE3#show pim vrf CUSTOMER topology | in 239.1.2.3
Thu Sep 29 13:04:04.473 UTC
RP/0/0/CPU0:PE3#

We currently have the same inefficiency that we saw at this point with profile 0: PE3 is receiving this traffic unnecessarily. We must use data MDTs to solve this.

Data MDTs

Data MDTs work much as they do in profile 0. The data MDT in profile 1 uses a P2MP mLDP FEC, with the ingress PE (connected to the customer’s sender) as the root.

Configuration is similar to profile 0:

#PE1, PE2
vrf definition CUSTOMER
 address-family ipv4 unicast
  ! 100 is the number of data MDTs that will be available
  mdt data mpls mldp 100
  ! Threshold is in kbps
  mdt data threshold 1

#PE3
multicast-routing vrf CUSTOMER
 address-family ipv4
  mdt data mldp 100 threshold 1

We’ll source traffic again from C2 and watch as PE2 advertises the data MDT tree and PE1 joins it.

C2#ping 239.1.2.3 timeout 1 size 1400 repeat 100

Once traffic exceeds the threshold on PE2, the router sends an advertisement on the default MDT which includes the (S, G) entry and the P2MP opaque value it will switch over to. Wireshark does not appear to be able to decode the data in this advertisement.

PE1 has interested receivers, so it joins the P2MP tree by sending a downstream FEC out the RPF interface for the root of the tree, which is PE2.

You can see the data MDT number that PE2 chose using the following show command:

PE2#show ip pim vrf CUSTOMER mdt send

MDT-data send list for VRF: CUSTOMER
  (source, group)                     MDT-data group/num   ref_count
  (10.1.2.10, 239.1.2.3)              1                    1
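The counterpart on the receiving side is mdt receive. PE1 should list the data MDT it joined; output is omitted here, but the command is worth knowing:

PE1#show ip pim vrf CUSTOMER mdt receive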

PE1 has joined the data MDT:

PE1#show mpls mldp database 
  * For interface indicates MLDP recursive forwarding is enabled
  * For RPF-ID indicates wildcard value
  > Indicates it is a Primary MLDP MDT Branch

LSM ID : 5   Type: P2MP   Uptime : 00:02:10
  FEC Root           : 2.2.2.2 
  Opaque decoded     : [] 
  Opaque length      : 11 bytes
  Opaque value       : 02 000B 0001000000000100000001
  Upstream client(s) :
    10.10.10.10:0    [Active]
      Expires        : Never         Path Set ID  : 5
      Out Label (U)  : None          Interface    : GigabitEthernet1*
      Local Label (D): 25            Next Hop     : 10.1.10.10
  Replication client(s): 
    MDT  (VRF CUSTOMER)
      Uptime         : 00:02:10      Path Set ID  : None
      Interface      : Lspvif0       RPF-ID       : *

LSM ID : 3 (RNR LSM ID: 4)   Type: MP2MP   Uptime : 16:34:48
  FEC Root           : 20.20.20.20 
  Opaque decoded     : [] 
  Opaque length      : 11 bytes
  Opaque value       : 02 000B 0001000000000100000000
  RNR active LSP     : (this entry)
  Upstream client(s) :
    10.10.10.10:0    [Active]
      Expires        : Never         Path Set ID  : 3
      Out Label (U)  : 24006         Interface    : GigabitEthernet1*
      Local Label (D): 24            Next Hop     : 10.1.10.10
  Replication client(s): 
>   MDT  (VRF CUSTOMER)
      Uptime         : 16:34:48      Path Set ID  : 4
      Interface      : Lspvif0       RPF-ID       : *
  • Notice the suffix for the P2MP tree opaque value. It ends in a 1, while the default MDT uses 0. The suffix is the MDT data group number.

Conclusion

Profile 1, also called “Rosen mLDP,” allows you to use MPLS, rather than GRE, to transport customer multicast across the core. This is a more elegant solution compared to profile 0, in which P-PIM and GRE were used. A provider core likely already has MPLS enabled anyway to support L3VPN, so profile 1 simply adds mLDP and C-PIM. Profile 0 also used the BGP IPv4 MDT address family in order to auto-discover PEs participating in the same VRF. This is not necessary in profile 1, as the VPN ID information is carried inside the mLDP FEC advertisement.

We saw that mLDP is not a new protocol, but an extension to LDP. Neighbors exchange mLDP capabilities (specifically P2MP and MP2MP). You can think of mLDP as working in a similar manner to PIM: P2MP is like PIM-SSM, in which the sender is the root, and MP2MP is like PIM-BiDir, in which an arbitrary router is the root (the RP in PIM terms). Instead of PIM Joins, you have LDP FEC advertisements, which are sent out the upstream RPF interface towards the root, just as in PIM.

These are the full steps for an operational mVPN using profile 1 (a consolidated configuration sketch follows the list):

  • A working L3 VPN (prerequisite)

  • PE routers run C-PIM with the CE

  • PE and P routers run mLDP (LDP with P2MP and MP2MP support enabled)

    • By default this is enabled in IOS-XE. IOS-XR requires you to enable mLDP

  • PE routers configure a default MDT under the VRF and specify the encapsulation as mpls mldp. A VPN ID is defined for the VRF.

  • PE routers automatically generate an LSP virtual interface and join the default MDT using FEC advertisements

  • PE routers then form a full mesh of C-PIM adjacencies (overlay) using the labeled VIF

  • Optionally, PE routers define a data MDT to reduce unnecessary flooding of customer traffic
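Pulling the PE-side pieces together, here is the minimal IOS-XE VRF configuration from this lab (the IOS-XR equivalents for PE3 were shown above):

vrf definition CUSTOMER
 vpn id 100:1
 address-family ipv4 unicast
  ! Default MDT: MP2MP tree rooted at P2 (20.20.20.20)
  mdt default mpls mldp 20.20.20.20
  ! Data MDTs: up to 100 P2MP trees, switchover above 1 kbps
  mdt data mpls mldp 100
  mdt data threshold 1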

For now, this concludes the series on multicast and mVPN. There are many more profiles we could cover, but I believe this has already gone a little past the CCNP level.
