E-Line

E-Line is a simulated point-to-point connection, also called an xconnect or pseudowire.

This is an L2VPN service that encapsulates all frames received at one end and unconditionally transports them to the other end. No routing protocol runs between the CE and PE as there is in L3VPN. The customer equipment on each end believes it is connected back-to-back, as if by an actual wire, hence the name “pseudowire.”

In MEF terms, you may see this called an EPL (Ethernet Private Line). This refers to the fact that the payload is Ethernet, and the p2p service simulates a private line connecting the equipment on each end.

In MPLS terms this is also called AToM (Any Transport over MPLS), because supported L2 traffic types can be encapsulated and transported over an MPLS core. However, the AToM term covers transporting L2 payloads such as PPP and Frame Relay, not just Ethernet. You may see AToM for Ethernet called EoMPLS (Ethernet over MPLS).

Just like L3VPN, when doing an xconnect, the traffic has two MPLS labels. The top label, sometimes called the transport label, represents the egress PE. P routers switch on this top label. The bottom label, sometimes called the service label, represents the xconnect, and is called the VC (virtual circuit) label. The egress PE associates traffic received with this label to a particular AC (attachment circuit). The AC is just the interface connecting to the CE device.
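At a high level, a customer frame crossing the provider core looks roughly like this (ignoring the optional control word, which is covered later):

[Provider L2 header][Transport label][VC label][Customer's original Ethernet frame]

The P routers switch only on the transport label; the customer's frame is carried as opaque payload.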

In order for the two PEs to learn each other's label for this xconnect, they run a targeted LDP session. (In L3VPN the PEs learn the equivalent service label via MP-BGP, specifically the vpnv4/vpnv6 unicast address families.)
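A quick way to check that the targeted session came up is the regular LDP neighbor command. A sketch, assuming the remote PE's loopback is 1.1.1.1 as in the examples below:

!Lists both link-level and targeted LDP sessions
show mpls ldp neighbor
!Or check a single peer
show mpls ldp neighbor 1.1.1.1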

When considering MTU, you will have an additional 14 bytes of overhead compared to L3VPNs, plus an optional extra 4 bytes on top of this. This is because you are encapsulating the layer two header of the customer’s Ethernet frame (14 bytes = 6 byte destination MAC + 6 byte source MAC + 2 byte Ethertype), plus potentially the customer’s VLAN header (4 bytes).

The MTU on each AC (attachment circuit), or CE-facing interface, must match on both sides in order for the xconnect to come up. This MTU value is signaled in the targeted LDP session. If the value does not match, the routers will report the xconnect as down.

Configuration

There are two ways to configure the xconnect on IOS-XE.

1. You can configure the xconnect in a single line under the service instance.

In this example, the egress PE is 1.1.1.1, and the pw-id (pseudowire ID) or VC ID is 2143. This value must match on the remote end. The pw-id is not the MPLS label; they are two different things. You can think of the pw-id somewhat like a route-target in L3VPN: it differentiates multiple xconnects between the same two PEs.

int Gi1
service instance 10 ethernet
 encapsulation default
 xconnect 1.1.1.1 2143 encapsulation mpls

2. The other way to configure an xconnect is using the l2vpn xconnect context.

I believe this is the “newer-style” configuration.

l2vpn xconnect context MY_EPL
 member GigabitEthernet1 service-instance 10
 member 1.1.1.1 2143 encapsulation mpls

Functionally these achieve the same thing. However, in the case of pseudowire stitching, you can only use the second method, because at the stitching router there is no AC. Pseudowire stitching is used to stitch two separate pseudowires together into one. This is also called MS-PW (multi-segment pseudowire).

Here R2 stitches together two pseudowires. The CEs would connect to R1 Gi1 and R3 Gi1.

R1#
int gi1
service instance 1 ethernet
 encapsulation dot1q 1
 xconnect 2.2.2.2 100 encapsulation mpls

R2#
l2vpn xconnect context PW_TEST
 member 1.1.1.1 100 encapsulation mpls
 member 3.3.3.3 100 encapsulation mpls

R3#
int gi1
service instance 1 ethernet
 encapsulation dot1q 1
 xconnect 2.2.2.2 100 encapsulation mpls

Verification

!This does not show MTU details but is good for quick verification
show xconnect int Gi1 detail 

!This shows much more detail, including configured MTUs. This is good for troubleshooting.
show mpls l2transport vc XXXX detail
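
On platforms that support VCCV, you can also test the pseudowire data plane itself, not just the control plane state. A sketch, assuming VC ID 100 toward peer 3.3.3.3 as in the lab below:

!VCCV ping of the pseudowire itself
ping mpls pseudowire 3.3.3.3 100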

Lab

A customer has ordered an EPL service to connect CE1 and CE2. The CEs will run OSPF and should achieve a full adjacency with each other.

The provider network (PE1 - P2 - PE3) runs LDP and OSPF. PE1’s loopback is 1.1.1.1, P2’s is 2.2.2.2 and PE3’s is 3.3.3.3.

This is PE1’s MPLS forwarding table (LFIB) before the EPL service is enabled:

PE1#show mpls forwarding-table 
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop    
Label      Label      or Tunnel Id     Switched      interface              
16         Pop Label  2.2.2.2/32       0             Gi2        10.1.2.2    
17         Pop Label  10.2.3.0/24      0             Gi2        10.1.2.2    
18         17         3.3.3.3/32       0             Gi2        10.1.2.2

This is PE3’s LFIB:

PE3#show mpls forwarding-table 
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop    
Label      Label      or Tunnel Id     Switched      interface              
16         16         1.1.1.1/32       0             Gi2        10.2.3.2    
17         Pop Label  2.2.2.2/32       0             Gi2        10.2.3.2    
18         Pop Label  10.1.2.0/24      0             Gi2        10.2.3.2

Now we enable the EPL service:

PE1#
interface GigabitEthernet1
 service instance 1 ethernet
  encapsulation default
  xconnect 3.3.3.3 100 encapsulation mpls

PE3#
interface GigabitEthernet1
 service instance 1 ethernet
  encapsulation default
  xconnect 1.1.1.1 100 encapsulation mpls

On each PE we see the targeted LDP neighborship come up automatically. Because PE1 and PE3 are not directly connected, they did not have an existing LDP neighborship, hence targeted LDP is required.

PE1#
*Jun 26 15:32:03.703: %LDP-5-NBRCHG: LDP Neighbor 3.3.3.3:0 (2) is UP

PE3#
*Jun 26 15:32:03.670: %LDP-5-NBRCHG: LDP Neighbor 1.1.1.1:0 (2) is UP

Here are the LFIB tables now. Coincidentally both PEs chose label 19 to represent this VC, due to the simple nature of the lab. In the real world this would usually not be the same value. This is a dynamically allocated MPLS label.

PE1#show mpls forwarding-table 
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop    
Label      Label      or Tunnel Id     Switched      interface              
16         Pop Label  2.2.2.2/32       0             Gi2        10.1.2.2    
17         Pop Label  10.2.3.0/24      0             Gi2        10.1.2.2    
18         17         3.3.3.3/32       0             Gi2        10.1.2.2    
19         No Label   l2ckt(1)         0             Gi1        point2point

PE3#show mpls forwarding-table 
Local      Outgoing   Prefix           Bytes Label   Outgoing   Next Hop    
Label      Label      or Tunnel Id     Switched      interface              
16         16         1.1.1.1/32       0             Gi2        10.2.3.2    
17         Pop Label  2.2.2.2/32       0             Gi2        10.2.3.2    
18         Pop Label  10.1.2.0/24      0             Gi2        10.2.3.2    
19         No Label   l2ckt(1)         0             Gi1        point2point

Packet capture during the xconnect setup

First the PEs establish a targeted LDP session:

They exchange mapping messages for all IPv4 prefixes (this is normal LDP downstream unsolicited mode) and a mapping message for the xconnect.

This is PE1’s advertisement for the xconnect:

This is PE3’s advertisement:

0x0013 in hex is 19 in decimal, which matches the label we saw in the LFIB of both PEs earlier.

You can also see that the interface MTU matches in the pcap. Both sides are 1500, so the xconnect will come up.

Let’s verify this:

PE1#show xconnect interface Gi1 detail
Legend:    XC ST=Xconnect State  S1=Segment1 State  S2=Segment2 State
  UP=Up       DN=Down            AD=Admin Down      IA=Inactive
  SB=Standby  HS=Hot Standby     RV=Recovering      NH=No Hardware

XC ST  Segment 1                         S1 Segment 2                         S2
------+---------------------------------+--+---------------------------------+--
UP pri   ac Gi1:1(Ethernet)              UP mpls 3.3.3.3:100                  UP
            Interworking: none                   Local VC label 19              
                                                 Remote VC label 19

Notice that the left side under “Segment 1” shows that the AC (attachment circuit) is Gi1:1, which means Gi1 service instance 1.

Let’s see what happens if we shut down Gi1 on PE3, which is the AC that connects to CE2.

PE3#
int Gi1
 shutdown

PE1#show xconnect interface Gi1 detail
Legend:    XC ST=Xconnect State  S1=Segment1 State  S2=Segment2 State
  UP=Up       DN=Down            AD=Admin Down      IA=Inactive
  SB=Standby  HS=Hot Standby     RV=Recovering      NH=No Hardware

XC ST  Segment 1                         S1 Segment 2                         S2
------+---------------------------------+--+---------------------------------+--
DN pri   ac Gi1:1(Ethernet)              UP mpls 3.3.3.3:100                  DN
            Interworking: none                   Local VC label 19              
                                                 Remote VC label unassigned

From PE1’s perspective, we can now see that the segment is still UP on our side, but DN (down) on the remote side. We also see that the remote side does not have a label assigned any longer.

How did PE1 learn that PE3’s AC went down? PE3 signaled this through LDP with a Label Withdraw message. Here’s the relevant pcap:

Remember that the MTU has to match on both sides? What happens if we change Gi1 on PE3 to a different MTU? Let’s bring Gi1 back up, then change the MTU and see what happens.

When we change the MTU, the interface bounces, and as a consequence the LDP neighborship with PE1 bounces:

PE3(config)#int Gi1
PE3(config-if)#mtu 1600 
PE3(config-if)#
*Jun 26 15:58:07.681: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet1, changed state to down
PE3(config-if)#
*Jun 26 15:58:07.740: %LDP-5-NBRCHG: LDP Neighbor 1.1.1.1:0 (2) is DOWN (AToM disabled targeted session)
*Jun 26 15:58:07.782: %LDP-5-NBRCHG: LDP Neighbor 1.1.1.1:0 (3) is UP
PE3(config-if)#
*Jun 26 15:58:18.465: %LINK-3-UPDOWN: Interface GigabitEthernet1, changed state to up
*Jun 26 15:58:19.466: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet1, changed state to up

Let’s look at PE1 now:

PE1#show xconnect interface Gi1 detail
Legend:    XC ST=Xconnect State  S1=Segment1 State  S2=Segment2 State
  UP=Up       DN=Down            AD=Admin Down      IA=Inactive
  SB=Standby  HS=Hot Standby     RV=Recovering      NH=No Hardware

XC ST  Segment 1                         S1 Segment 2                         S2
------+---------------------------------+--+---------------------------------+--
DN pri   ac Gi1:1(Ethernet)              UP mpls 3.3.3.3:100                  DN
            Interworking: none                   Local VC label 19              
                                                 Remote VC label 21

The state is DN on the remote end, but we still have a remote label. The output above isn’t enough to see why the xconnect is down, so let’s look at something more verbose.

PE1#show mpls l2transport vc 100 detail 
Local interface: Gi1 up, line protocol up, Ethernet:1 up
  Destination address: 3.3.3.3, VC ID: 100, VC status: down
    Last error: Pseudowire MTU mismatch with peer
    Output interface: none, imposed label stack {}
    Preferred path: not configured  
    Default path: no route
    No adjacency
  Create time: 00:29:50, last status change time: 00:02:20
    Last label FSM state change time: 00:02:20
  Signaling protocol: LDP, peer 3.3.3.3:0 up
    Targeted Hello: 1.1.1.1(LDP Id) -> 3.3.3.3, LDP is UP
    Graceful restart: not configured and not enabled
    Non stop routing: not configured and not enabled
    Status TLV support (local/remote)   : enabled/supported
      LDP route watch                   : enabled
      Label/status state machine        : remote invalid, LruRnd
      Last local dataplane   status rcvd: No fault
      Last BFD dataplane     status rcvd: Not sent
      Last BFD peer monitor  status rcvd: No fault
      Last local AC  circuit status rcvd: No fault
      Last local AC  circuit status sent: DOWN PW(rx/tx faults)
      Last local PW i/f circ status rcvd: No fault
      Last local LDP TLV     status sent: No fault
      Last remote LDP TLV    status rcvd: No fault
      Last remote LDP ADJ    status rcvd: No fault
    MPLS VC labels: local 19, remote 21 
    Group ID: local 6, remote 6
    MTU: local 1500, remote 1600
    Remote interface description: 
  Sequencing: receive disabled, send disabled
  Control Word: On (configured: autosense)
  SSO Descriptor: 3.3.3.3/100, local label: 19
  Dataplane:
    SSM segment/switch IDs: 0/4096 (used), PWID: 1
  VC statistics:
    transit packet totals: receive 0, send 0
    transit byte totals:   receive 0, send 0
    transit packet drops:  receive 0, seq error 0, send 0

Now we can clearly see the reason the xconnect is down: “Last error: Pseudowire MTU mismatch with peer”
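
If you only want the failure reason and the MTU values, you can filter that verbose output. A sketch:

show mpls l2transport vc 100 detail | include Last error|MTU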

Let’s put Gi1 on PE3 back to MTU 1500, and do some testing with our CEs. Here’s the relevant config for the CEs:

CE1#
interface GigabitEthernet0/0
 ip address 10.1.1.1 255.255.255.252
 ip ospf network point-to-point
!
interface Loopback0
 ip address 1.1.1.1 255.255.255.255
!
router ospf 1
 network 0.0.0.0 255.255.255.255 area 0

CE2#
interface GigabitEthernet0/0
 ip address 10.1.1.2 255.255.255.252
 ip ospf network point-to-point
!
interface Loopback0
 ip address 2.2.2.2 255.255.255.255
!
router ospf 1
 network 0.0.0.0 255.255.255.255 area 0

CE1 can now ping CE2’s loopback.

CE1#ping 2.2.2.2 source 1.1.1.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 2.2.2.2, timeout is 2 seconds:
Packet sent with a source address of 1.1.1.1 
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 5/6/8 ms

CE1#traceroute 2.2.2.2 source 1.1.1.1
Type escape sequence to abort.
Tracing the route to 2.2.2.2
VRF info: (vrf in name/id, vrf out name/id)
  1 10.1.1.2 4 msec *  6 msec

Reminder: the CE addresses 1.1.1.1 and 2.2.2.2 have nothing to do with the loopback IPs on the service provider routers PE1 and P2, which happen to use the same values. These are completely separate. This is an L2VPN service in which the frames are transported from CE1 directly to CE2.

The service provider network is completely hidden as you can see from the traceroute. CE1 has no idea that there are routers in the middle of this connection. It believes that CE2 is directly connected.
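
Since the requirement was a full OSPF adjacency between the CEs, it is worth checking that as well. A sketch of the check on CE1 (CE2's router ID is most likely its loopback, 2.2.2.2):

show ip ospf neighbor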

MTU Overhead

Now let’s take a look at the MTU implications of this service. All interfaces are at the default 1500 MTU in this lab. Remember that we are now encapsulating the 14 byte ethernet header. We also have MPLS label overhead. The CEs are not using VLANs, so we have no VLAN overhead.

Can you figure out the max packet size we can send with ping 2.2.2.2 source 1.1.1.1 size #### df-bit ?

If you said 1478, you were very close. That’s what I’d expect you to say, because it would be 1500 - 14 (ethernet header) - 4 (MPLS top label) - 4 (MPLS bottom label) = 1478.

However, it is actually 1474, because of the additional 4 byte control word used on the xconnect. We haven’t talked about this yet, so I will explain it below.
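
Here is where 1474 comes from, working backwards from the 1500 byte core interface MTU:

  1500 core interface MTU
-    4 transport label
-    4 VC label
-    4 control word
-   14 customer ethernet header
= 1474 maximum customer IP packet size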

CE1#ping 2.2.2.2 source 1.1.1.1 size 1474 df-bit 
Type escape sequence to abort.
Sending 5, 1474-byte ICMP Echos to 2.2.2.2, timeout is 2 seconds:
Packet sent with a source address of 1.1.1.1 
Packet sent with the DF bit set
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 3/4/5 ms

CE1#ping 2.2.2.2 source 1.1.1.1 size 1475 df-bit 
Type escape sequence to abort.
Sending 5, 1475-byte ICMP Echos to 2.2.2.2, timeout is 2 seconds:
Packet sent with a source address of 1.1.1.1 
Packet sent with the DF bit set
.....
Success rate is 0 percent (0/5)

Control Word

Do you remember seeing this line in the above pcaps of the LDP mapping advertisement?

The control word has multiple purposes.

  1. Pad small packets

  2. Carry layer 2 control information for payload types such as Frame Relay and ATM. PPP, HDLC, and Ethernet do not require a control word. If you’d like, you can turn off the control word altogether when using these layer 2 payload types.

  3. Preserve sequencing of transported frames

  4. Aid in load balancing of frames across the MPLS network

  5. Facilitate fragmentation and reassembly

The control word has a sequence number field. However, out-of-sequence frames can’t be re-ordered; if they are detected, they are simply dropped. By default, when using the control word, sequencing is disabled.
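
For completeness: if you did want sequencing, it is typically enabled under the pseudowire-class. This is only a sketch; the class name is mine and support varies by platform and release:

pseudowire-class SEQ_ON
 encapsulation mpls
 sequencing both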

The load-balancing purpose deserves a closer look. Some LSRs in the middle of the network hash on the payload behind the label stack when doing ECMP: if the first nibble after the bottom label is 4, they assume the payload is an IPv4 packet and hash on what they believe are IP header fields. When the payload is an xconnect, that first nibble is actually the start of the customer’s Ethernet frame. If the customer’s frame happens to start with a 4, the router assumes it is an IPv4 packet, but this is an incorrect assumption; it may just be an Ethernet frame that happens to start with a 4. That misparsing can lead to frames of the same circuit taking different paths and arriving out of order. The control word is inserted between the label stack and the payload, and it always begins with a zero nibble, so when an LSR looks at the first nibble behind the label stack it sees a consistent value.
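
To make this concrete, here is a rough sketch of the first nibble an LSR sees in each case (the 4c MAC is just an illustrative value):

Without control word: [transport label][VC label][dst MAC 4c:ab:...]      first nibble = 4, may be misread as IPv4
With control word:    [transport label][VC label][control word 0x0000...] first nibble = 0, never mistaken for IP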

While fragmentation is supported when using the control word, you probably don’t want to rely on this and take the performance hit. Instead I would recommend ensuring the MTU in your core is large enough to handle the customer’s traffic without fragmentation.

Fixing MTU in our lab

So our customer has noticed that they cannot send full 1500 byte packets through their EPL service. Let’s correct this by changing the MTU values in the provider network. We’ll turn off the control word and use the minimum MTU required to allow the CEs to ping each other at 1500 bytes.

Pop quiz: What is the minimum MTU value we should use to allow 1500 bytes with the control word off?

The answer is 1522. This allows for the 14 byte ethernet header plus two MPLS labels at 4 bytes each. The customer cannot use VLAN tags in this case, but that is OK right now. We know the customer is not using VLANs.
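
Working that out:

  1500 customer IP packet
+   14 customer ethernet header
+    4 VC label
+    4 transport label
= 1522 minimum core MTU (no control word, no customer VLAN tag)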

First let’s turn off the control word. We need to define a pseudowire-class that we then apply to the xconnect.

PE1#
pseudowire-class CW_DISABLE
 encapsulation mpls
 no control-word
!
int Gi1
service instance 1 ethernet
 xconnect 3.3.3.3 100 encapsulation mpls pw-class CW_DISABLE

Just by configuring it on one side, we have turned the use of the control word off, since this is a negotiated feature.

PE1#show mpls l2transport vc 100 det | in Control
  Control Word: Off

PE3#show mpls l2transport vc 100 det | in Control
  Control Word: Off (configured: autosense)

Now we can ping with a size of 1478 as we were expecting initially. (1500 - 14 - 4 - 4)

CE1#ping 2.2.2.2 source 1.1.1.1 size 1478 df-bit
Type escape sequence to abort.
Sending 5, 1478-byte ICMP Echos to 2.2.2.2, timeout is 2 seconds:
Packet sent with a source address of 1.1.1.1 
Packet sent with the DF bit set
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 4/5/6 ms

Let’s set all core interfaces to an MTU of 1522.

PE1(config)#int gi2
PE1(config-if)#mtu 1522

P2(config)#int range gi1-2
P2(config-if-range)#mtu 1522

PE3(config)#int gi2
PE3(config-if)#mtu 1522
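
Depending on the platform, it is also worth confirming that the MPLS MTU followed the interface MTU. A sketch on PE1:

!Check the MTU value reported for the core-facing interface
show mpls interfaces Gi2 detail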

The CEs can now ping at exactly 1500 bytes.

CE1#ping 2.2.2.2 source 1.1.1.1 size 1500 df-bit
Type escape sequence to abort.
Sending 5, 1500-byte ICMP Echos to 2.2.2.2, timeout is 2 seconds:
Packet sent with a source address of 1.1.1.1 
Packet sent with the DF bit set
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 3/4/6 ms

CE1#ping 2.2.2.2 source 1.1.1.1 size 1501 df-bit
Type escape sequence to abort.
Sending 5, 1501-byte ICMP Echos to 2.2.2.2, timeout is 2 seconds:
Packet sent with a source address of 1.1.1.1 
Packet sent with the DF bit set
.....
Success rate is 0 percent (0/5)

Further Reading

https://www.cisco.com/c/en/us/td/docs/routers/asr920/configuration/guide/mpls/17-1-1/b-mp-l2-vpns-xe-17-1-asr920/b-mp-l2-vpns-xe-17-1-asr920_chapter_01.html

Luc De Ghein, MPLS Fundamentals, Ch. 10 AToM
