PIM-DM (Dense Mode)

PIM (Protocol Independent Multicast) is the multicast routing protocol used in today’s networks. PIM is used to build multicast distribution trees. “Protocol independence” refers to the fact that any unicast routing protocol can build the unicast routing table, which PIM then uses for best-path determination. PIM does not care whether the unicast routing table is built from static, connected, OSPF, or BGP routes. PIM simply consults the unicast routing table when performing RPF checks.

PIM is a router-to-router protocol. Similar to IGP protocols, neighbor discovery happens dynamically using a link-local multicast address (224.0.0.13). Because PIM relies on the unicast routing table, PIM does not exchange routes with neighbors. Instead it uses Join and Prune messages to build the distribution tree for a given (*, G) or (S, G) multicast entry. A distribution tree is built when each router has an incoming (upstream) interface and outgoing (downstream) interface(s) for a given (S, G) or (*, G) entry.

The basic premise behind PIM is that routers need to tell other routers that they have hosts interested in particular multicast groups. A router, upon receiving an IGMP Join from a host, in turn sends a PIM Join to its directly connected neighbors. An IGMP Join is a host saying “I’m interested in this multicast group address,” and a PIM Join is a router saying “I have hosts downstream of me interested in this multicast group address.” This is how PIM-SM (Sparse Mode) works.

In PIM-DM this process is actually a little backwards. A source will begin sending multicast traffic, and the PIM routers in the network will begin flooding the multicast traffic out all other interfaces connecting to other PIM neighbors or directly connected hosts. This is essentially a broadcast that is not bound by subnet boundaries. If a PIM-DM router has no downstream PIM neighbors and no directly connected receivers, the router will send a PIM Prune saying “stop sending this traffic to me, I have no interested receivers.” The pruning process eventually results in a multicast distribution tree that reaches only the interested receivers.

The process that PIM-DM uses is called “flood and prune.” Traffic is first flooded everywhere, and then pruned back where it is not needed. This wastes resources and bandwidth in the network, and should only be used when receivers densely populate the network, meaning the network overall contains more interested receivers than disinterested receivers.

PIM-DM is considered legacy these days, as flood and prune is just too inefficient. However, it is the simplest PIM variant and is therefore useful when learning how PIM builds distribution trees. PIM-DM uses source-based trees (also called SPTs - shortest path trees).

Lab

We will use the following topology to explore PIM-DM.

The startup configs simply contain IP addresses and OSPF:

#Source
hostname Source
!
no ip domain lookup
!
int Gi1
 ip address 10.10.10.10 255.255.255.0
 no shut
!
ip route 0.0.0.0 0.0.0.0 10.10.10.1

#R1
hostname R1
!
int Gi1
 ip address 10.10.10.1 255.255.255.0
 ip ospf network point-to-point
 no shut
!
int Gi2
 ip address 10.1.2.1 255.255.255.0 
 ip ospf network point-to-point
 no shut
!
int Gi3
 ip address 10.1.5.1 255.255.255.0
 ip ospf network point-to-point
 no shut
!
int Lo0
 ip address 1.1.1.1 255.255.255.255
!
router ospf 1
 network 0.0.0.0 255.255.255.255 area 0
 passive-interface Gi1

#R2
hostname R2
!
int Gi1
 ip address 10.1.2.2 255.255.255.0
 ip ospf network point-to-point
 no shut
!
int Gi2
 ip address 10.2.3.2 255.255.255.0 
 ip ospf network point-to-point
 no shut
!
int Gi3
 ip address 10.2.4.2 255.255.255.0
 ip ospf network point-to-point
 no shut
!
int Lo0
 ip address 2.2.2.2 255.255.255.255
!
router ospf 1
 network 0.0.0.0 255.255.255.255 area 0

#R3
hostname R3
!
int Gi1
 ip address 10.2.3.3 255.255.255.0
 ip ospf network point-to-point
 no shut
!
int Gi2
 ip address 10.10.0.1 255.255.255.0 
 no shut
!
int Lo0
 ip address 3.3.3.3 255.255.255.255
!
router ospf 1
 network 0.0.0.0 255.255.255.255 area 0
 passive-interface Gi2

#R4
hostname R4
!
int Gi1
 ip address 10.2.4.4 255.255.255.0
 ip ospf network point-to-point
 no shut
!
int Gi2
 ip address 10.10.100.1 255.255.255.0 
 no shut
!
int Lo0
 ip address 4.4.4.4 255.255.255.255
!
router ospf 1
 network 0.0.0.0 255.255.255.255 area 0
 passive-interface Gi2

#R5
hostname R5
!
int Gi1
 ip address 10.1.5.5 255.255.255.0
 ip ospf network point-to-point
 no shut
!
int Gi2
 ip address 10.10.200.1 255.255.255.0 
 no shut
!
int Lo0
 ip address 5.5.5.5 255.255.255.255
!
router ospf 1
 network 0.0.0.0 255.255.255.255 area 0
 passive-interface Gi2

#Hosts 
hostname Host#
!
int Gi0/0
 ip address <ip address> 255.255.255.0
 no shut
!
ip route 0.0.0.0 0.0.0.0 <router address, .1>

To begin, we will enable PIM-DM on R4’s Gi2 interface and configure Host2 and Host3 to join the group 239.1.1.1. You should recognize this from the previous article on IGMP. Confirm that R4 sees that it has listeners for 239.1.1.1 off Gi2.

#R4
ip multicast-routing distributed
!
int Gi2
 ip pim dense-mode

#Host2,3
int Gi0/0
 ip igmp join-group 239.1.1.1


R4#show ip igmp groups 
IGMP Connected Group Membership
Group Address    Interface                Uptime    Expires   Last Reporter   Group Accounted
239.1.1.1        GigabitEthernet2         00:00:14  00:02:50  10.10.100.20

So far we only have the LHR and host configuration done. Multicast is not enabled in the core network yet. Next we will enable PIM-DM everywhere and bring up PIM neighborships. Before we do that, let’s look at the multicast routing table on R4. The multicast routing table contains (*, G) and (S, G) entries with their associated Incoming interface (upstream interface), and Outgoing interface list (downstream interfaces). We will examine this multicast routing table frequently throughout this article.

R4#show ip mroute
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report, 
       Z - Multicast Tunnel, z - MDT-data group sender, 
       Y - Joined MDT-data group, y - Sending to MDT-data group, 
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute, 
       N - Received BGP Shared-Tree Prune, n - BGP C-Mroute suppressed, 
       Q - Received BGP S-A Route, q - Sent BGP S-A Route, 
       V - RD & Vector, v - Vector, p - PIM Joins on route, 
       x - VxLAN group, c - PFP-SA cache created entry, 
       * - determined by Assert, # - iif-starg configured on rpf intf, 
       e - encap-helper tunnel flag
Outgoing interface flags: H - Hardware switched, A - Assert winner, p - PIM Join
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.1.1.1), 00:00:33/00:02:31, RP 0.0.0.0, flags: DC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet2, Forward/Dense, 00:00:33/stopped

Here we can see that the router has created a (*, G) entry for 239.1.1.1 because it has directly connected hosts that have joined the group via IGMP. For dense mode, this (*, G) entry always has an incoming interface of Null. This is because the (S, G) entries will actually be used for forwarding. We will see an (S, G) entry once the source begins sending traffic. For now, notice that the outgoing interface list (OIL) contains Gi2. It has been active for 33 seconds. The flags on the (*, G) entry indicate that it is running in dense mode (D), and that there are directly connected listeners (C).

PIM Neighborship

Now we will bring up PIM everywhere in our network. Every interface, whether it faces a PIM neighbor, directly connected receivers, or sources, must have PIM enabled. As we saw in the last article, enabling PIM enables IGMPv2, which is necessary on the interface facing the receivers. PIM also enables the RPF check, which is needed on the interface facing a source. Lastly, PIM is needed on router-to-router interfaces to form the PIM neighborships required to build the tree.

On every router run the following commands:

ip multicast-routing distributed

! Enable PIM-DM on each interface
int Gi*
 ip pim dense-mode

Looking at a pcap between R2 and R4, we can see how the PIM neighborship process works.

First, notice that neighborships are formed and maintained simply with Hello messages. Hellos are sent at 30-second intervals, and the neighbor holdtime defaults to 3.5x the Hello interval (105 seconds).
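These timers can be verified per interface. Below is a hedged example of what this looks like on R2; the exact columns and values vary by IOS version, so treat the output as illustrative:

R2#show ip pim interface
Address          Interface                Ver/   Nbr    Query  DR     DR
                                          Mode   Count  Intvl  Prior
10.1.2.2         GigabitEthernet1         v2/D   1      30     1      0.0.0.0

The Hello (query) interval can be tuned with the interface command ip pim query-interval; the advertised holdtime remains 3.5x that value.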

The PIM Hello message is quite simple. There is no indication in a Hello of whether a neighbor is “seen” on the other end. It is simply the receipt of Hellos from a neighbor that keeps the neighborship alive. Because of this, if there is a layer 2 issue between two routers, it is possible for one router to believe the neighbor is "UP" while the other router does not see the neighbor at all. (In OSPF, for example, this would result in the neighborship being stuck in "INIT", but PIM has no such two-way check.)

We can examine PIM neighbors on R2 using the following show command:

R2#show ip pim neighbor 
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      P - Proxy Capable, S - State Refresh Capable, G - GenID Capable,
      L - DR Load-balancing Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
10.1.2.1          GigabitEthernet1         00:06:08/00:01:30 v2    1 / S P G
10.2.3.3          GigabitEthernet2         00:06:21/00:01:16 v2    1 / DR S P G
10.2.4.4          GigabitEthernet3         00:04:10/00:01:29 v2    1 / DR S P G

Dense mode flood and prune behavior

The present configuration is all that is needed for a fully functional PIM-DM network. Let’s send a ping from the Source to 239.1.1.1 and examine what happens on the key routers in the network.

Source#ping 239.1.1.1
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.1.1.1, timeout is 2 seconds:

Reply to request 0 from 10.10.100.20, 256 ms
Reply to request 0 from 10.10.100.10, 256 ms

We see above that the very first packet succeeds and we get a reply from Host2 and Host3.

Upon receiving the very first packet from the Source destined to 239.1.1.1, R1 creates both a (*, G) and an (S, G) entry. In PIM-DM the (*, G) entry is not actually used for forwarding, which you can tell by the fact that its incoming interface is Null. The (*, G) entry is an optimization which contains all outgoing interfaces. An (S, G) entry is always created by using the (*, G) entry as a “template.” You will come to understand this a little better with PIM-SM, so for now you can pretty much ignore the (*, G) entries.

R1#show ip mroute | begin 239
(*, 239.1.1.1), 00:01:04/stopped, RP 0.0.0.0, flags: D
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet3, Forward/Dense, 00:01:04/stopped
    GigabitEthernet2, Forward/Dense, 00:01:04/stopped

(10.10.10.10, 239.1.1.1), 00:01:04/00:01:55, flags: T
  Incoming interface: GigabitEthernet1, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet2, Forward/Dense, 00:01:04/stopped
    GigabitEthernet3, Prune/Dense, 00:01:04/00:01:55

When R1 received the first packet, it also performed an RPF check. It used the unicast RIB to confirm that the interface toward the Source (10.10.10.10) is indeed Gi1, the interface the packet arrived on. Next, R1 added all other interfaces which have a PIM-DM neighbor or directly connected receivers to the OIL. This includes Gi2 and Gi3, because both interfaces have neighbors. There are no receivers directly connected to R1.
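The RPF check can be verified manually with show ip rpf. Below is a hedged sketch of the output on R1; the exact wording of the RPF type line varies by platform and route source, so treat it as illustrative:

R1#show ip rpf 10.10.10.10
RPF information for ? (10.10.10.10)
  RPF interface: GigabitEthernet1
  RPF neighbor: ? (10.10.10.10) - directly connected
  RPF route/mask: 10.10.10.0/24
  RPF type: unicast (connected)

If the RPF interface does not match the interface the multicast traffic arrives on, the traffic is dropped.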

Notice that Gi3 is in Prune status. R5, upon receiving the packet from R1, created (*, G) and (S, G) entries for 239.1.1.1. R5 found that the source IP (10.10.10.10) passed the RPF check. R5 then found that it has no other PIM neighbors or directly connected receivers, so R5 sent a PIM Prune message to R1. When a PIM-DM router has no interfaces in its OIL, it sends a PIM Prune to stop the flow of unneeded multicast traffic.

R5#show ip mroute | beg 239
(*, 239.1.1.1), 00:08:19/stopped, RP 0.0.0.0, flags: D
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet1, Forward/Dense, 00:08:19/stopped

(10.10.10.10, 239.1.1.1), 00:02:44/00:00:15, flags: T
  Incoming interface: GigabitEthernet1, RPF nbr 10.1.5.1
  Outgoing interface list: Null

Note that the Null OIL means this entry is pruned. When the P flag appears in the flags field, it indicates that the router has sent a PIM Prune for the entry.

The Prune that R5 sends to R1 is destined to the all-PIM-routers multicast address (224.0.0.13). The contents of the PIM message show that R5 is pruning the source address 10.10.10.10/32 for the group address 239.1.1.1/32. Notice that R5 had to receive the unnecessary traffic in order to prune itself from the multicast distribution tree, which is quite inefficient. The PIM-DM process involves a series of Prunes, which start from the edge of the network and work their way inwards until there are no longer any interfaces which need to be pruned.
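The Prune can also be observed on R1 with debug ip pim. The lines below are illustrative of the IOS debug format rather than captured from this lab, so the exact wording may differ on your version:

R1#debug ip pim
PIM(0): Received v2 Join/Prune on GigabitEthernet3 from 10.1.5.5, to us
PIM(0): Prune-list: (10.10.10.10/32, 239.1.1.1)
PIM(0): Prune GigabitEthernet3/224.0.0.13 from (10.10.10.10/32, 239.1.1.1)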

Let’s examine the process that happened on R2 and R3.

R1 sent the traffic out Gi2, as it has a PIM neighbor on that interface. R2 verified that the source, 10.10.10.10, passes the RPF check on the receiving interface, Gi1. R2 created (*, G) and (S, G) entries. R2 then forwarded the packet out its other interfaces which have PIM neighbors or directly connected receivers: Gi2 and Gi3.

R2#show ip mroute | beg 239
(*, 239.1.1.1), 00:05:40/stopped, RP 0.0.0.0, flags: D
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet3, Forward/Dense, 00:05:40/stopped
    GigabitEthernet2, Forward/Dense, 00:05:40/stopped
    GigabitEthernet1, Forward/Dense, 00:05:40/stopped

(10.10.10.10, 239.1.1.1), 00:00:05/00:02:54, flags: T
  Incoming interface: GigabitEthernet1, RPF nbr 10.1.2.1
  Outgoing interface list:
    GigabitEthernet2, Prune/Dense, 00:00:05/00:02:54
    GigabitEthernet3, Forward/Dense, 00:00:05/stopped

R3 is at the edge of the network. It has no other PIM neighbors besides R2. R3 finds that the source 10.10.10.10 passes the RPF check. However, R3 has no directly connected receivers or other PIM neighbors, so it sends a PIM Prune just like R5 did. This results in R2 pruning Gi2 from the (10.10.10.10, 239.1.1.1) (S, G) entry.

R4 has no downstream PIM neighbors, but it does have directly connected receivers. R4 creates the (S, G) entry with Gi2 in the OIL. Because R4 has not sent a Prune, R2 continues forwarding out its Gi3 interface.

R4#show ip mroute | beg 239
(*, 239.1.1.1), 00:36:18/stopped, RP 0.0.0.0, flags: DC
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    GigabitEthernet1, Forward/Dense, 00:23:11/stopped
    GigabitEthernet2, Forward/Dense, 00:36:18/stopped

(10.10.10.10, 239.1.1.1), 00:01:53/00:01:06, flags: T
  Incoming interface: GigabitEthernet1, RPF nbr 10.2.4.2
  Outgoing interface list:
    GigabitEthernet2, Forward/Dense, 00:01:53/stopped
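
To confirm that traffic is actually being forwarded down this (S, G) entry, packet counters can be checked on R4. Below is a hedged example; the counter values shown are illustrative, not captured from this lab:

R4#show ip mroute 239.1.1.1 count
Group: 239.1.1.1, Source count: 1, Packets forwarded: 10, Packets received: 10
  Source: 10.10.10.10/32, Forwarding: 10/0/100/0, Other: 0/0/0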

Prune state refresh

In PIM-DM, Forward states never expire (hence the expiry timer shows “stopped”). However, Pruned states expire after three minutes. When the timer expires, the interface moves back to Forward.

Examine R2’s (S, G) entry:

(10.10.10.10, 239.1.1.1), 00:06:05/00:01:36, flags: T
  Incoming interface: GigabitEthernet1, RPF nbr 10.1.2.1
  Outgoing interface list:
    GigabitEthernet2, Prune/Dense, 00:00:12/00:02:47
    GigabitEthernet3, Forward/Dense, 00:06:05/stopped

Gi2 was pruned 12 seconds ago, and will move back to Forward in 2 minutes and 47 seconds. This means that every three minutes Gi2 moves back to Forward, the traffic is flooded to R3 again, and R3 has to prune itself once more.

To simulate a stream of multicast traffic, I ran a repeated ping on the Source. Notice in the pcap that every three minutes R3 receives the traffic again and must repeatedly prune itself from the tree.
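Recall that the PIM neighbor table earlier showed the S (State Refresh Capable) flag. PIM-DM State Refresh is a mechanism in which the first-hop router periodically originates State Refresh messages down the tree, resetting downstream prune timers so the periodic reflooding does not occur. A hedged sketch of enabling origination on R1 is below; the command exists on IOS, but verify support and behavior on your platform before relying on it:

#R1
int Gi1
 ip pim state-refresh origination-interval 60

With origination enabled, downstream routers that are state-refresh capable keep their Prune state alive without having to receive and re-prune the traffic every three minutes.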

PIM-DM Grafting

If a new receiver is discovered (via IGMP) on a router that recently pruned itself from the dense mode multicast distribution tree, then without a grafting mechanism the receiver would have to wait up to three minutes for traffic to begin flowing. PIM-DM allows a router to graft itself back onto the distribution tree. This circumvents the three-minute Prune expiration timer and instructs the upstream PIM neighbor to immediately begin forwarding again.

I will start the stream of traffic from the Source again, and then have Host1 join the 239.1.1.1 group on its Gi0/0 interface.

R3 prunes itself from the tree for (10.10.10.10, 239.1.1.1) at 1.6 seconds. Host1 joins the 239.1.1.1 group at 14 seconds. If R3 could not graft itself onto the tree, Host1 would have to wait 2 minutes and 46 seconds to start receiving traffic. Instead, R3 sends a PIM Graft message unicast to R2. R2 replies with a Graft-Ack. The Graft message essentially just contains a PIM Join for (10.10.10.10, 239.1.1.1).
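The graft exchange can also be seen on R2 with debug ip pim. The lines below are illustrative of the IOS debug format rather than captured from this lab:

R2#debug ip pim
PIM(0): Received v2 Graft on GigabitEthernet2 from 10.2.3.3
PIM(0): Add GigabitEthernet2/10.2.3.3 to (10.10.10.10, 239.1.1.1), Forward state
PIM(0): Send v2 Graft-Ack on GigabitEthernet2 to 10.2.3.3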

Conclusion

PIM-DM is an inefficient PIM mode. In PIM-DM, traffic is flooded everywhere on the network, essentially being treated as a broadcast. Every three minutes, routers that do not want the traffic must repeatedly prune themselves from the distribution tree.

We will come to see in the next few articles that PIM-SM makes a lot more sense. In PIM-SM, a router must request to join a distribution tree. Traffic only flows down this tree. In PIM-SM, routers that are not interested in the traffic will never see it in the first place. For these reasons, PIM-DM is almost never used in networks today.
