I recently came across a good use for MLPPP while working through a design issue. I had several T1 links between a remote site and a single distribution router, and I needed fault tolerance, scalability, and ease of configuration. MLPPP seemed to be the ticket. It is easy to configure, as you'll see below, and it scales easily because adding links does not change the routing design. Without some sort of link aggregation technique you would have to load balance over the individual links, which can be a complex topic in itself; for example, most IGPs on IOS install only four equal-cost paths by default, so with more than four links some of them may go unused unless you raise that limit. MLPPP also handles link failures cleanly: as long as at least one link remains in the bundle, the routing policy stays the same and no failover is necessary. Here are a few additional notes on MLPPP.
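(For contrast, this is roughly what the non-MLPPP alternative looks like: route over each T1 as an individual link and raise the equal-cost path limit so they all get used. The protocol and numbers here are purely illustrative and not part of the original design.)

router ospf 1
 ! allow up to six equal-cost paths (the default is four)
 maximum-paths 6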
According to the Cisco SRND on Enterprise QoS, MLPPP is a recommended way to solve the problems above, as long as router CPU usage can be kept below 75%. The MQC policy can be applied directly to the Multilink interface, and the bandwidth it references automatically adjusts as links join or leave the Multilink group.
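If you want to keep an eye on that CPU number, the standard IOS process-CPU commands are enough (shown generically here; output omitted):

router#show processes cpu sorted
router#show processes cpu history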
Along these lines, you should not configure a static bandwidth statement on the Multilink interface, since it would no longer reflect the true bandwidth of the interface as member links fail and recover.
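You can see the bundle bandwidth the router is actually using by checking the BW value on the Multilink interface; this is a standard IOS command, and the exact output varies by platform and release:

router#show interfaces Multilink1 | include BW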
The other problem it solves is reducing the number of IP addresses needed for a working configuration. Instead of burning a subnet on each of the T1s, you only have to configure a single subnet on the Multilink interface. You could configure ip unnumbered instead if you like, although this is not a recommended practice, mainly because it can be difficult to manage.
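For reference, here is a minimal sketch of the ip unnumbered variant, assuming a Loopback0 interface is available to borrow the address from (Loopback0 and its address are purely illustrative):

interface Loopback0
 ip address 192.0.2.1 255.255.255.255
!
interface Multilink1
 ip unnumbered Loopback0
 ppp multilink group 1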
Here’s a basic configuration:
interface Multilink1
 ip address 172.30.1.1 255.255.255.252
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface serial0/1/0:1
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
!
interface serial0/1/1:1
 no ip address
 encapsulation ppp
 ppp multilink
 ppp multilink group 1
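With that in place, the usual way to confirm that both links have joined the bundle is with the standard multilink and interface show commands (commands only; the output depends on platform and IOS release):

router#show ppp multilink
router#show interfaces Multilink1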
And a simple MQC configuration example:
class-map match-all VOICE
 match dscp ef
!
policy-map mypolicy
 class VOICE
  priority percent 20
 class class-default
  fair-queue
!
interface Multilink1
 service-policy output mypolicy
router#sho policy-map int
 Multilink1
  Service-policy output: mypolicy
    Class-map: VOICE (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: dscp ef (46)
      Queueing
        Strict Priority
        Output Queue: Conversation 264
        Bandwidth 20 (%)
        Bandwidth 614 (kbps) Burst 15350 (Bytes)
Now with a T1 disconnected:
router#sho policy-map int
 Multilink1
  Service-policy output: mypolicy
    Class-map: VOICE (match-all)
      0 packets, 0 bytes
      5 minute offered rate 0 bps, drop rate 0 bps
      Match: dscp ef (46)
      Queueing
        Strict Priority
        Output Queue: Conversation 264
        Bandwidth 20 (%)
        Bandwidth 307 (kbps) Burst 7675 (Bytes)
As you can see, the Bandwidth in the priority queue drops as T1s leave the bundle.
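The numbers line up with what you would expect: with both T1s in the bundle the usable bandwidth is roughly 2 x 1536 kbps = 3072 kbps, and 20% of that is about 614 kbps; with a single T1 left, 20% of 1536 kbps is about 307 kbps, matching the output above.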