2        An outline of IP and MPLS technologies

 

2.1    Introduction

This chapter contains the background information on the technologies covered in this thesis. It provides the outline needed to later understand how Operation and Maintenance (OAM) is solved within the different technologies.

 

The various views of the term OAM, the reason why it is needed on a network, and how the term will be used in this thesis are explained. To provide a basic understanding of large backbone networks, an outline of their structure and link protocols is given.

 

The IP architecture is more or less generally known; still, a basic understanding of forwarding mechanisms and routing has been emphasized. This gives a deeper foundation of how routers forward packets through the IP network and how they use addresses.

 

A more thorough presentation is needed of the MPLS architecture than of IP, since this technology is new to most people. The control component communicates with other routers to build up paths between routers in the network. These paths are distinguished by labels. The forwarding component reads the small label in the incoming MPLS packet header and forwards the packet along its corresponding path.

2.2    OAM and backbones in general

While this thesis mainly covers OAM on MPLS, some aspects of the various elements that are often connected to OAM and MPLS are included. The term OAM and how different organizations define it is presented. Since many views on OAM exist, an OAM definition for this thesis is worked out.

 

For the time being, MPLS is a technology that will mostly be used in backbone networks. Therefore, the structure of backbones and the link protocols ATM and Ethernet are described.

2.2.1     What is OAM?

There are several different definitions of what OAM is. Some understand this abbreviation to be Operation and Maintenance [17] and others understand it as Operation, Administration and Maintenance. While the meaning of the letters OAM is discussed, it is more important to get an overview of the different views on the term.

 

Thomas D. Nadeau has expressed the variety of views. Specific networking technologies generally have one or more approaches for satisfying OAM requirements. Different approaches sometimes exist within the same networking technology too.

 

According to one of the documents from ITU-T, OAM should take care of the need for ease of operation, the need for verifying network performance and the need to reduce operational costs. OAM mechanisms are especially important for networks that are required to deliver packets according to requirements defined by the customers. These mechanisms should also try to take action against defects for which the lower layers may not have taken appropriate action. Typically, OAM is not only for preventing errors; it should also permit rapid diagnosis and localization of defects. This will in the end improve availability. [20]

 

The view of Carrie D. Harris is a little different. She says that if one has a node or link failure, one will need a report of the services that are successfully carried out, and of those that are not. The services that are not successfully carried out need to be placed into an alarm state. Smart systems with good integration will automatically launch a network-generated id for the corresponding error. In addition, these events are stored in a log for historical analysis. OAM is about alarms, performance thresholds, and fault isolation logic.

 

According to Mr. Ganesh, OAM is a component that helps in the Operation, Administration and Maintenance of a communication system. In this way, OAM can be thought of as a component that monitors the health of the system and gives indications if something is wrong with it.

 

ITU-T’s OAM definition for the B-ISDN describes another view [17]:

·        Performance monitoring produces maintenance information, allowing estimation of the network reliability.

·        Defects and failures are detected by periodic checking. As a result, maintenance event information or various alarms will be produced.

·        System protection is performed by blocking or changing over to other entities, thus excluding the failing entity from operation.

·        Defect information is given to management entities.

 

Maintenance events are, for example, defects, failures and performance impairments [17]. Operation is not generally defined, but, at least to our knowledge, it is a term that covers how one can operate a network. This might include concepts like traffic engineering or ad hoc mechanisms like Ping.

 

As one can learn from ITU-T, the various OAM needs depend on how the system works, and on how one wishes to operate and manage the system. One can discuss whether administering a network should be added to the OAM definition. Administration functions may need logic that cannot be provided by the network itself, since these functions may depend on human interaction. However, if one operates and manages a system, one can also say that the system is administered. Thus the administrative part can still be included in the OAM definition. Strictly speaking, ITU-T has excluded Administration in their version of the Y.1710 [20] and Y.1711 [21] documents. One might ask why they still include the ‘A’ in OAM, but everybody has started using OAM and it has become a general term among those working in this field.

 

Since the definition of OAM may be vague, there is a need for a definition of OAM as used in this thesis: Operation and Maintenance (OAM) is a term that covers how one gets an overview of the network's performance and its traffic behaviour, the network's detection and handling of errors, and the discovery of inconvenient configurations.

The mailman example

Every time the mailman goes on his round trip, one can expect that the post will reach its destination. However, sometimes accidents might happen, or the mail might get delivered at a neighbour's house. The incidents that might make the receiver worried about missing mail are endless.

 

A network router that communicates on a network can be compared to a post-office that sends out mailmen with their mail. This mail can be compared to the network packets with their packet load. Of course these post-offices are, like routers, interconnected through a bigger network of post-offices.

 

Consider this scenario, where a postal system is a network with no OAM functions. The mailmen could be compared with network packets carrying their packet load, following orders from their post-offices (compared to network routers) that send the mailmen on a mission along the different routes in their town (compared to network cables). These robot-like mailmen would still deliver their letters, and they would go home after a successful day. Everything would probably be fine. But if they encountered a traffic jam or an accident, they would still go home, because this was their program. Mail might also get delivered to the wrong post-offices, and the robot-mailmen would still try to deliver it. The mail-packets would eventually get lost, since the programs of these robot-mailmen do not include any OAM functions.

 

If we transfer this system to network semantics, it obviously needs OAM functions: it needs a way to detect errors and a way to monitor and manage the postal system. What we really would want is that the robot-mailmen reacted as in real life. They would have a program for looking at mail-packets and reporting errors if they discovered incorrectly delivered packets, or if, because of a traffic jam or an accident, no packets got delivered at all. However, today the network packets themselves cannot have a program running on them, and the solution is to define the OAM functions in a router protocol. This protocol needs to define how the packets are sent, and it would need basic failure detection. Thus, the routers are the ones that need error detection and reporting mechanisms within a network, and this is what today's OAM is all about.

The non-technical side of OAM

When most people think about network management, several things come to mind. These are likely to include routing protocols and tables, SNMP management stations, cables and so forth. Often, though, they fail to consider some of the more unnoticeable or non-technical components of network management. [12]

 

Figure 1: Reachability in Networks

If Figure 1 is an environment of a small computer network, then devices A, B and C can all reach each other using signals like Ping to check if the connection is okay. If computer A can reach both B and C, one might assume that B and C can reach each other. This, however, depends on the link between B and C, among other issues.

 

It is important to plan how to prevent errors from occurring. However, if one does not understand how a network will react, or forgets to consider the consequences of these activities for the entire network, one will find tasks like troubleshooting difficult. [12]

2.2.2     A short introduction to backbones

A large backbone can be defined as a collection of high-bandwidth links connecting a number of routers throughout a large geographical area, possibly even spanning continents. The bandwidth must be high to support all the traffic that goes through the backbone.

 

The location of the backbones has been chosen to distribute data traffic between areas with high demands, and the local service providers connected to the backbone have to deal with the final distribution to the customers. [37]

 

Figure 2 shows the backbone of KPNQuest in Europe, and how they have designed the high-bandwidth fiber cables between their nodes.

 

Figure 2 The European backbone network of KPNQuest [24]

The figure shows that almost all routing points in this backbone have a back-up route in case a line fails. This is typical for larger backbone networks. If one line is cut, it could affect a very large number of users around the world, as many lines are gathered at the main router points. When a backbone is given this kind of back-up routes, it has a high degree of redundancy and a high reliability.

Perspectives of backbones

The backbone can be viewed from various perspectives. At the local perspective, a backbone is a cable or connection that local area networks connect to. Then they are connected using a high-bandwidth cable to the next building or similar.

 

On a wider area network, like the Internet, a backbone is a larger structure consisting of a higher-bandwidth network that local or regional networks connect to for long-distance connections through various connection points.

ATM versus Ethernet

Large backbone networks often use Asynchronous Transfer Mode (ATM) for their link layer. This is mainly because of its advantage over Ethernet when it comes to distances. Also, ATM provides high-speed data transport together with a complex set of traffic-management mechanisms [37]. When ATM switches first became available, they had significant advantages over existing solutions. In particular, switched networks have a big performance advantage over shared-media networks: a single shared-media network has a fixed total bandwidth that must be shared among all hosts, whereas each host gets its own dedicated link to the switch in a switched network. [18]

 

Today, Ethernet is on its way to surpassing ATM in backbone networks. By using fiber cabling for long distances, Ethernet matches the reach of ATM networks, and the speed is increasing every year. 10 Gigabit Ethernet is the latest Ethernet standard.

 

Initially, network managers will use 10 Gigabit Ethernet to provide high-speed, local backbone interconnections between large-capacity switches. As the demand for bandwidth increases, 10 Gigabit Ethernet will be deployed throughout the entire network, and will include servers, backbone, and campus-wide connectivity. [50]

 

Of course, there will always be a race among network equipment manufacturers to develop improved and faster MPLS routers for the Internet backbone. However, it is up to the future to show what kind of technology is preferred.

 


2.3    Forwarding mechanisms in IP

2.3.1     An overview of the IP architecture

This subchapter begins with Figure 3, which describes how the layered Internet Protocol (IP) stack can be compared to the seven-layer Open Systems Interconnection Reference Model (OSI-RM). The involvement of the application layer is explained later in Figure 4.

Figure 3: The OSI-RM model compared to the TCP/IP model. The model is inspired by Figure 1.19 at [18b]

The Internet and ARPANET were around before the OSI architecture, and the experience gained from building them has had a major influence on the OSI reference model. [18b]

 

Since the IP packet header gained general acceptance towards the end of the last century, many new services have been programmed for this platform. Figure 4 provides a descriptive architecture of packet-switched IP. At the bottom, IP and its semantics has, of course, never changed. By semantics we mean the control information in a block. For more information, see the IPv4 packet header at [45].

 

Figure 4 The IP architecture with the location of SNMP and ICMP shaded [45]

As one can also see in Figure 4, there exist applications that operate at a lower layer than the application layer. An example is Ping. Thus, one can expand the application layer as shown in Figure 3.

2.3.2     Routing and forwarding

Forwarding of packets sent to various destinations is perhaps the most essential part of the Internet. Routing is the act of moving information across an internetwork from a source to a destination. On its way, unless the transfer stays on the local network, the packet almost always needs to go from one network to another. The process of getting the packets through the various networks is handled by routers.

Routers in general

A router can be specified as one out of a spectrum of devices that may be used to interconnect different data networks [16]. Routers have improved in recent years. They now have advanced features like traffic monitoring, which one can read using the SNMP protocol.

 

The router determines the next network point to which to send a packet, and then forwards it towards its destination. A router must be connected to at least two different networks and decides where to route a packet by inspecting the packet's addresses. This is why a router is located at any gateway where one network meets another.

 

To make routers contact each other, the Internet Engineering Task Force (IETF) has helped standardize the Routing Information Protocol (RIP) for sharing routing information among routers. The RIP protocol requires a router to send its entire routing table to its neighbour routers every 30 seconds. All routers can be configured to share this routing information, and they all update within their Management Domain every 30 seconds. Since RIP Version 1, this kind of information sharing among routers has improved. One can read more about these newer protocols at IETF [23].

IP routing and forwarding

Routing and forwarding have differences. Forwarding is the process of taking a packet from an interface and sending it out on the appropriate output, while routing is the process of building up the tables that allow the correct output for a packet to be determined. [18a]

 

There exist two different methods of routing: direct routing and indirect routing. Indirect routing is when hosts have to send data through a router to reach another network, while direct routing is when hosts send to another host on the same network. We also have static and dynamic routing. Static routing is done when the network operator manually configures the forwarding tables on the router. Dynamic routing is when the routers calculate the network number from the packet's header and find a proper next-hop router to send the packet to. This is mainly done if hosts have arbitrary IP addresses.

 

To understand how routing and forwarding work, consider three computers within a small local area network, all on the same IP network segment. They have addresses 128.39.202.*; these are class C addresses, and thus allow up to 254 nodes on the network. The * (star) indicates a host number from 1 to 254. Each of the network interfaces has its own 48-bit hexadecimal Ethernet Media Access Control (MAC) address, 4A-CE-87-44-4C-2A for example. [47]

 

Figure 5 One network

In Figure 5, consider a network with three hosts. If host A wants to send an IP packet to computer C over the Ethernet, A needs to know C's MAC address. The Address Resolution Protocol (ARP) is used for dynamic discovery of these addresses. [46]

 

Direct routing is when the packets are sent on the same network through the use of ARP. The goal of ARP is to enable each host on a network to build up a table of mappings between IP addresses and MAC addresses [18a]. In other words, ARP keeps an internal table of IP addresses and their corresponding Ethernet addresses. If the ARP module does not have an entry for C's IP address, it will broadcast a request packet over the Ethernet, and C will respond to A with its MAC address. A will update its ARP table and start sending to that MAC address.
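The table-building behaviour described above can be sketched in a few lines of Python (a toy illustration only; in reality ARP lives in the operating system and the request is a link-layer broadcast frame, and the `broadcast` helper here is a made-up stand-in for that frame exchange):

```python
# Minimal sketch of an ARP cache. The broadcast() callback stands in
# for the real ARP request frame sent to every host on the segment.
class ArpCache:
    def __init__(self):
        self.table = {}  # maps IP address -> MAC address

    def resolve(self, ip, broadcast):
        # Return the cached mapping if we already know it.
        if ip in self.table:
            return self.table[ip]
        # Otherwise broadcast a request; the owner of the IP replies
        # with its MAC address, which we cache for next time.
        mac = broadcast(ip)
        self.table[ip] = mac
        return mac

# Hypothetical segment: only C answers for its own address.
hosts = {"128.39.202.3": "4A-CE-87-44-4C-2A"}
cache = ArpCache()
mac = cache.resolve("128.39.202.3", lambda ip: hosts[ip])  # -> C's MAC
```

After the first resolution the mapping is served from the table, so no further broadcast is needed for that address.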

 

Indirect routing is used when a router acts as a gateway between the networks. Note that the word gateway can have other meanings, but in this thesis it describes the router as a door into another network. The result of adding a router is described in Figure 6.

 

Figure 6 Two networks with one router

In Figure 6, computer R forwards the packets between the networks. To do this, it needs two network interfaces, each listening on one of the networks. If A wants to send a packet to C, it first needs to send the packet to R, which in turn forwards the packet to C. This is done by making A use R's Ethernet address, obtained by using ARP, together with C's IP address. [47]

 

Using a manually configured routing table is called static routing. This requires that the network interfaces on the network have statically configured IP addresses, and also requires them not to move outside their initial network. If it is necessary to move a computer outside its initial network, the routing table needs to be manually updated. An example of configuring routers with command-line utilities is given in Appendix A.

 

Dynamic routing uses special routing information protocols to automatically update the routing table with other routers in the network that share information. These protocols are grouped according to whether they are Interior Gateway Protocols (IGPs) or Exterior Gateway Protocols (EGPs). Interior gateway protocols are used to distribute routing information inside a Management Domain. A Management Domain is a set of routers administered by one authority. Examples of interior gateway protocols are Open Shortest Path First (OSPF) (see Appendix) and RIP. See RFC 1716 [11] for more information on IP router operations. [47]

 

Static routing has some enormous advantages over dynamic routing. Chief among these advantages is predictability. Because the network operator computes the routing table in advance, the path a packet takes between two destinations is always known precisely, and can be controlled exactly. Additionally, because no dynamic routing protocol is needed, static routing doesn't impose any overhead on the routers or the network links. For a large network, the bandwidth devoted to routing updates can add up quickly. Finally, static routing is easy to configure on a small network. The network operator simply tells each router how to reach every network segment to which it is not directly attached. [12]

 

Network mask
By computing the logical bitwise-AND between the netmask and the IP address, the IP protocol can determine whether a packet to the target address should be sent on the local network or through a gateway. When one sets up an IP address for a network interface, one also has to specify the netmask. Normally, in Windows 2000, one adds a default netmask of 255.255.255.0, which is the most commonly used netmask. We will not go into detail about how the network number and host number of the IP address are found.
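The bitwise-AND test itself can be illustrated with a short Python sketch (the addresses are the example class C addresses used earlier; the standard `ipaddress` module is only used to convert dotted-quad strings to integers):

```python
import ipaddress

def same_network(ip_a, ip_b, netmask):
    """Return True if both addresses share the same network number."""
    a = int(ipaddress.IPv4Address(ip_a))
    b = int(ipaddress.IPv4Address(ip_b))
    m = int(ipaddress.IPv4Address(netmask))
    # Bitwise-AND with the netmask keeps only the network bits.
    return (a & m) == (b & m)

# Same network number -> direct routing on the local network.
print(same_network("128.39.202.10", "128.39.202.20", "255.255.255.0"))  # True
# Different network number -> the packet must go through a gateway.
print(same_network("128.39.202.10", "128.39.203.20", "255.255.255.0"))  # False
```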

Figure 7 Three networks interconnected Hn=Host Rn=Router [18a]

The scenario in Figure 7 describes three networks interconnected using three different data link types: Ethernet (ETH), Fiber Distributed Data Interface (FDDI) and the Point-to-Point Protocol (PPP). The routers forward the TCP packets from H1 to H8 as described in Figure 8. As one can see, the IP packets can be sent over various link layer formats and are therefore link-layer independent.

Figure 8 Describing what protocol layers used to connect H1 to H8 in Figure 7. Three routers equal 3 hops from H1 to H8. [18a]

Note that every IP datagram contains enough information to let the network forward the packet to its destination, and this address lookup will take some time at every router. On the other hand, there is no need for an advanced setup mechanism to tell the network what to do when the packet arrives. A host sends packets and the network makes its best effort to get them to their desired destination. The “best-effort” part means that if something goes wrong and a packet gets lost, the network does nothing; it made its best effort. Packets can arrive out of order, or they can be delivered multiple times, giving some work to protocols at the higher layers. Keeping routers as simple as possible was one of the main goals of IP. It has even been claimed that IP can run over a network that consists of two tin cans and a piece of string. [18a]

 

Datagram Forwarding
A datagram is sent from a source host to a destination host, possibly passing through several routers along the way. Any node, whether it is a host or a router, first tries to establish whether it is connected to the same physical network as the destination. By node we mean a computer or hardware device that communicates on the network. This check is done with the bitwise-AND between the netmask and the IP address. If the destination node is not connected to the local network, the datagram needs to be sent to a router. In general, each node has a choice of several routes, and so it needs to pick the best one, or at least one that has a reasonable chance of getting the datagram closer to its destination. The router finds the correct next hop by consulting its forwarding table. The forwarding table is conceptually a list of <NetworkNum, NextHop> pairs, as described in Figure 9. [18a]

Figure 9 Example forwarding table for Router R2 in Figure 7.

Figure 9 shows an example of what router R2's forwarding table would look like in our example scenario. The router finds the network number in the packet header, looks it up in the forwarding table and then sends the packet to the next hop. By reducing the amount of information kept per entry, one achieves scalability in the network. IP introduces a two-level hierarchy, with networks at the top level and nodes at the bottom level of the table. [18a]
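The lookup in such a <NetworkNum, NextHop> table can be sketched as follows (the network numbers and next-hop names are invented for illustration and do not reproduce the actual entries of Figure 9):

```python
# Toy <NetworkNum, NextHop> forwarding table; entries are invented.
FORWARDING_TABLE = {
    "128.96.34": "R1",
    "128.96.33": "R3",
    "192.4.153": "R4",
}

def next_hop(dest_ip):
    # Extract the network number (here: the first three octets of a
    # class C address) and look it up; unknown networks fall back to
    # a default router.
    network_num = ".".join(dest_ip.split(".")[:3])
    return FORWARDING_TABLE.get(network_num, "default-router")

print(next_hop("128.96.34.7"))   # looked up entry -> "R1"
print(next_hop("10.0.0.1"))      # no entry -> "default-router"
```

Note how the table only keeps one entry per network, not per host; this is the two-level hierarchy that gives IP its scalability.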

 

IPv6 extensions

IPv6 has a much simpler header format than IPv4. Many of the unnecessary functionalities in the IPv4 header have been removed from the IPv6 header. This has resulted in more efficient router performance [18a]. The main difference, beyond the 16-byte destination and source addresses, is that both the fragmentation and the option fields of the IPv4 header have been moved out and placed in extension headers. There are also many other possible extension headers. When extension headers are present, they appear in a specific order [18a]. Another simplification is that the IPv6 header, in contrast to the IPv4 header, is always of constant length.

 

Both the “main” IPv6 header and the extension headers have the NextHeader field. This field contains an identifier of the type of extension header that comes next. The last extension header will be followed by a transport-layer header (e.g. TCP) and the NextHeader field will contain an identifier for that higher-layer protocol [18a].
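The NextHeader chaining can be sketched as follows (a toy model, not real IPv6 wire-format parsing; the protocol numbers 0, 43 and 6 are the IANA values for Hop-by-Hop Options, Routing and TCP):

```python
# Each header's NextHeader value identifies the header that follows.
NAMES = {0: "Hop-by-Hop Options", 43: "Routing", 6: "TCP"}

def walk_chain(next_header_of, first):
    """Follow the NextHeader chain; None marks the end (the payload)."""
    chain, current = [], first
    while current is not None:
        chain.append(NAMES[current])
        current = next_header_of[current]
    return chain

# The main IPv6 header says NextHeader = 0 (Hop-by-Hop Options); the
# chain ends at the transport-layer header (TCP).
next_header_of = {0: 43, 43: 6, 6: None}
print(walk_chain(next_header_of, 0))
# ['Hop-by-Hop Options', 'Routing', 'TCP']
```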

 

There are six different extension headers and these are [34]:

·        Hop-by-Hop Options

·        Routing

·        Fragment

·        Destination Options

·        Authentication

·        Encapsulating Security Payload

 

The most important header with respect to this thesis is the Routing header. The Routing header is used by an IPv6 source to list one or more intermediate nodes to be "visited" on the way to a packet's destination [34].

 

2.4    The MPLS architecture and its forwarding mechanisms

2.4.1     The MPLS architecture

MPLS is an abbreviation for Multi-Protocol Label Switching, and the term multi-protocol has been chosen to stress that the method applies to all network layer protocols, not only IP. MPLS is about gluing connectionless IP to connection-oriented networks [37]. MPLS will also function over virtually any link layer protocol. The principle of MPLS is that all packets are assigned a label and forwarded along a Label Switched Path (LSP), where each router on the path performs its forwarding decision based solely on the contents of the label. The routers have forwarding tables indexed by the value of the incoming label, in contrast to IP forwarding tables, which are searched by destination address.

 

This technology contributes a variety of new properties to the network architecture on lower layers. Examples are to guarantee a certain level of performance, to route around congested networks or to create IP tunnels for network-based virtual private networks. MPLS has the ability to create end-to-end circuits similar to a virtual circuit in ATM. MPLS also provides specific performance characteristics, such as traffic engineering across any type of transport medium. These opportunities reduce the need for overlay networks and layer two control mechanisms [37].

 

We already have a lot of knowledge about the link layer protocol Ethernet, and less about other protocols like ATM and Frame Relay. It is not necessary to go thoroughly into all link layer protocols that MPLS is compatible with; therefore this thesis concentrates on Ethernet. To show a different MPLS implementation on the link layer, a description of MPLS on ATM is also given.

 

The network layer provides less choice. Currently, only IP is mentioned in the various documents surrounding MPLS. Even though MPLS applies to all network layer protocols, this thesis describes the technology with respect to IP. The main reason is that most literature and specifications for the time being deal almost exclusively with solutions concerning this protocol.

 

The architecture of MPLS is specified in the IETF RFC 3031 [33]. MPLS is referred to as the “shim” layer. “Shim” refers to the fact that MPLS is between layer two and layer three in the OSI-RM model (see Figure 10) and MPLS makes them fit better [37].

Figure 10 The figure illustrates where the label switching protocol sits in the OSI model, compared to the TCP/IP model

The basic concept of label switching is very simple. For instance, let us assume an e-mail message is sent from one user to another. In a best-effort network like IP, the method of sending this e-mail to its destination is similar to postal mail, assuming one does not use Zip codes and street addresses are unique. The destination address is examined, and this address determines how the e-mail is sent to its final destination [6].

 

Label switching is different. Instead of using the whole destination address to make the routing decision, a label is associated with the packet. In the postal service analogy, a label value is placed on the envelope as a Zip code and is thereafter used in place of the postal address to route the mail to the recipient [37]. In computer networks, a label is placed in a packet header and the IP packet becomes the payload. The routers will now use the label instead of the IP address to direct the traffic towards its destination (see Figure 11).

Figure 11 : The MPLS functionality

All routers supporting MPLS are called Label Switch Routers (LSRs). The ingress LSR is where a packet enters the MPLS network. It adds an MPLS header to the IP packet and assigns a label. The egress LSR is where a packet leaves the network, and there the MPLS header is removed from the packet. Both ingress and egress LSRs are edge nodes connecting the MPLS network to other networks. The transit LSR, also called an interior LSR, receives the packet between the MPLS edges and uses the MPLS header to make forwarding decisions. It also performs label swapping [37].

 

There are two alternative routing mechanisms for MPLS: hop-by-hop routing and explicit routing. In the hop-by-hop routing mechanism, the LSRs create the Label Switched Paths (LSPs) from ingress LSR to egress LSR using the information exchanged with their peers. This exchanged information has been stored in the routing table at the LSR. In this way the LSRs construct a suitable path. Explicit routing is a little different. The whole path, or a sub-path, that the LSRs are to traverse from one edge of the network to the other is explicitly defined at the ingress, and the LSP is constructed according to this specified route.

 

When an LSR performs packet forwarding, it strips off the existing label from the MPLS packet at each hop and uses it as an index into its forwarding table. Once the entry is found, the LSR applies the outgoing label for this entry to the MPLS packet. Thereafter the packet is sent over the interface specified in the forwarding table. MPLS packets belonging to an LSP are forwarded in the same manner by all the routers along the LSP [37]. Simple forwarding and indexing of forwarding tables increase the speed of the forwarding process inside the MPLS network, which improves the delay and jitter characteristics of the traffic.
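The swap step can be sketched like this (the label values and interface names are invented for illustration):

```python
# Sketch of LSR label swapping: the incoming label indexes the
# forwarding table directly, yielding an outgoing label and interface.
LABEL_TABLE = {
    17: (42, "if0"),   # in-label -> (out-label, out-interface)
    23: (99, "if1"),
}

def forward(packet):
    in_label = packet["label"]
    # Direct index lookup -- no longest-prefix match as in IP.
    out_label, interface = LABEL_TABLE[in_label]
    packet["label"] = out_label   # swap the label
    return interface              # send the packet on this interface

pkt = {"label": 17, "payload": b"..."}
iface = forward(pkt)   # pkt["label"] is now 42, iface is "if0"
```

The direct table index is what makes the forwarding decision cheap compared with the address lookup an IP router performs.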

 

MPLS allows a hierarchy of labels, known as a label stack. It is therefore possible to have different LSPs at different levels in the label stack [37]. This functionality increases the scalability of the LSPs. It is also possible to place small LSPs inside larger LSPs. For the labels in the hierarchy, the MPLS-header’s Stack-field (described later) is set to “1” if the label is at the bottom, and set to “0” if it is not at bottom. As an example, consider the scenario shown in the following figure:


Figure 12: An example of a label hierarchy [37]

The routers R1 to R5 belong to two different LSPs. The numbers 1 and 2 are the label stack depths. R1 and R5 are border routers, and R2, R3 and R4 are interior routers. For the purpose of label forwarding, R1 and R5 are peers at the border level, and R2, R3 and R4 are peers at the interior level. When R1 receives a packet with a label that is one level deep heading for R5, it will swap the packet's label with a corresponding label that will be used by R5. Since the packet also has to travel through R2, R3 and R4, R1 will push a new label, so the label stack depth is now 2. We then have two LSPs: one at level 1 from R1 to R5, and a second at level 2 from R2 to R4 [37].

 

The label header for MPLS is located after the layer 2 header and before the layer 3 header. Examples of layer 2 and layer 3 headers are Ethernet and IP, respectively. The location of the MPLS header and its format are illustrated in Figure 13.

 

Figure 13: The location of the MPLS header and the format of the MPLS header

 

The MPLS “shim” label is 32 bits long and contains four fields. The MPLS header is illustrated in Figure 13 and contains the following fields [6a]:

 

·        The 20-bit label field carries the actual value of the MPLS label [37]. The values from 0 to 15 are reserved for special functions, but only some of them are specified so far [22]:

­       IPv4 Explicit NULL Label (value 0)

­       Router Alert Label (value 1)

­       IPv6 Explicit NULL Label (value 2)

­       Implicit NULL Label (value 3)

­       OAM Alert Label (value 14)[21]

·        The 3-bit Exp/QoS experimental field can affect the queuing and discard algorithms applied to the packet as it is transmitted through the network [37].

·        The 1-bit Stack (S) field indicates the bottom of the stack when label stacking is used. S is zero when the label is not at the bottom of the label stack and one when it is [37].

·        The 8-bit time-to-live (TTL) field is a copy of the TTL field in the IP header, and is decremented at each hop [37].
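The four fields above pack into a single 32-bit word (20 + 3 + 1 + 8 bits); a small sketch of encoding and decoding the shim header:

```python
# Pack/unpack the 32-bit MPLS shim header:
# label (20 bits) | Exp (3 bits) | S (1 bit) | TTL (8 bits)
def pack_shim(label, exp, s, ttl):
    return (label << 12) | (exp << 9) | (s << 8) | ttl

def unpack_shim(word):
    return {"label": word >> 12,
            "exp": (word >> 9) & 0x7,
            "s": (word >> 8) & 0x1,
            "ttl": word & 0xFF}

hdr = pack_shim(label=14, exp=0, s=1, ttl=64)   # label 14: OAM Alert Label
assert unpack_shim(hdr) == {"label": 14, "exp": 0, "s": 1, "ttl": 64}
```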

 

The “shim” method explained above is used for those layer 2 technologies that cannot accommodate labels in their own header, which means most link types except for Asynchronous Transfer Mode (ATM) and Frame Relay. For ATM and Frame Relay, the labels are carried in the link layer header. In ATM, the label can be carried in either the virtual circuit identifier (VCI) or the virtual path identifier (VPI) field of the ATM header. Likewise, for Frame Relay, the label can be carried in the Data Link Connection Identifier (DLCI) field of the Frame Relay header [13].

 

Before examining how MPLS is implemented over ATM, a short introduction to ATM is needed. ATM cells consist of a five-byte header and a 48-byte payload. In order to transport messages larger than 48 bytes handed down from layers above, such as IP, which is usually the case, ATM has to divide the messages into smaller parts. This is called fragmentation. The fragmentation process is handled by the ATM Adaptation Layer (AAL), which is placed between layers 2 and 3. The AAL header contains the information needed by the destination to reassemble the fragmented messages.

 

An AAL5 Protocol Data Unit (PDU) is divided into parts of 48 bytes, and each 48-byte part together with an ATM header forms an ATM cell. When all the ATM cells belonging to a PDU arrive at the destination or the end of the ATM network, the PDU is put together again [18].

Figure 14: Encapsulation of labeled packet on ATM link [13a].

When one wishes to encapsulate MPLS labeled packets on ATM, the whole label stack is carried in the AAL5 PDU while the top-level label is carried in the VCI/VPI field of the ATM headers (see Figure 14). The reason for carrying labels in both the AAL5 PDU and the ATM header is mainly the arbitrary depth of label stacks. When the ATM cells reach the end of the LSP, they are reassembled. If there are more labels in the label stack, the AAL5 PDU is fragmented again, and the label on top of the label stack is put into the VCI/VPI field of the ATM headers. [18]
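A simplified sketch of the AAL5-style fragmentation into 48-byte cell payloads; the real AAL5 trailer (length and CRC fields) and the ATM cell headers are deliberately left out:

```python
# Simplified AAL5-style segmentation of a labeled PDU into 48-byte pieces.
CELL_PAYLOAD = 48

def segment(pdu: bytes) -> list:
    """Split a PDU into 48-byte pieces, zero-padding the last one."""
    if len(pdu) % CELL_PAYLOAD:
        pdu += b"\x00" * (CELL_PAYLOAD - len(pdu) % CELL_PAYLOAD)
    return [pdu[i:i + CELL_PAYLOAD] for i in range(0, len(pdu), CELL_PAYLOAD)]

def reassemble(cells: list) -> bytes:
    """Rebuild the (padded) PDU at the end of the ATM network."""
    return b"".join(cells)

cells = segment(b"label-stack+ip-datagram" * 5)   # 115-byte PDU -> 3 cells
assert len(cells) == 3 and all(len(c) == 48 for c in cells)
```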

 

So far we have been using the terms forwarding table and routing table for the tables containing forwarding and routing information respectively. The MPLS architecture uses other names for these tables: the Label Forwarding Information Base (LFIB) and the Label Information Base (LIB). The LIB contains all the label information that the LSR has learned from its neighbors (as described by Sidnie Feit, The Standish Group) with respect to the direction of the frame flow. The LFIB uses a subset of the labels contained in the LIB for actual packet forwarding [18b]. A further description of these two tables is given in sub-chapters 2.4.2 and 2.4.3.

 

It is necessary to specify precisely which packets may be mapped to each LSP. This is done by providing a Forwarding Equivalence Class (FEC) specification for each LSP. The FEC identifies the set of IP packets that may be mapped to that LSP. Each FEC is specified as a set of one or more FEC elements, where each element identifies a set of packets that may be mapped to the corresponding LSP. Several FEC elements are defined: the Address Prefix FEC element is an address prefix of any length from 0 to a full address, and an IP address matches it only if the address begins with that prefix. Another FEC element, Host Address, is a full host address. Labels are assigned to the FEC along the whole LSP [8]. The label does not depend on the FEC alone; it can also represent a combination of a packet’s FEC and the packet’s priority or class of service [33].
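Address Prefix FEC matching can be sketched as a longest-prefix lookup; the FEC table contents and LSP names below are invented for illustration:

```python
# Sketch of mapping a destination address to an LSP via Address Prefix FECs.
import ipaddress

FEC_TABLE = {
    ipaddress.ip_network("10.0.0.0/8"): "LSP-A",
    ipaddress.ip_network("10.1.0.0/16"): "LSP-B",
}

def map_to_lsp(dst):
    """Return the LSP of the longest Address Prefix FEC matching dst."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in FEC_TABLE if addr in net]
    if not matches:
        return None
    return FEC_TABLE[max(matches, key=lambda net: net.prefixlen)]

assert map_to_lsp("10.1.2.3") == "LSP-B"   # the more specific /16 wins
assert map_to_lsp("10.9.0.1") == "LSP-A"
```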

2.4.2     The control component

The control component is responsible for distributing routing information among LSRs and for the procedures these routers use to convert this information into Label Forwarding Information Bases (LFIBs). These LFIBs are then used by the forwarding component when forwarding MPLS frames.

 

There is a great deal of similarity between the control component of the conventional routing architecture and the label switching control component. The MPLS control component includes all the functionality of the routing protocols used in conventional control components, such as OSPF, BGP and PIM. In this sense the conventional control components form a subset of the label switching control component. To fill the void, procedures are needed by which an LSR can [13]:

·        Create bindings between labels and FECs

·        Inform other LSRs of the binding it creates

·        Utilize both mechanisms above to construct and maintain the LFIBs

 

There are two methods of binding labels to FECs. The first, referred to as local binding, occurs when the router creates a binding for the incoming label locally. The second, remote binding, occurs when the router receives label binding information that was created by another LSR.

 

The label switching control component uses both local and remote bindings to populate its LFIB with incoming and outgoing labels. To do this, there are two methods that are the opposite of each other:

·        Labels from the local binding become incoming labels and labels from the remote binding are used as outgoing labels (downstream label binding).

·        Labels from the remote binding become incoming labels and labels from the local binding are used as outgoing labels (upstream label binding).

 

To explain these bindings further, an understanding of the terms upstream and downstream is needed. The flow of packets is from the upstream LSR towards the downstream LSR (see Figure 15).

 

Figure 15: Downstream label binding versus Upstream label binding

The two label binding methods are named after which LSR, with respect to the flow of packets, has performed the binding. A label binding is between a label carried in a packet and the particular FEC that the packet belongs to. Figure 15 illustrates the two types of label bindings. In downstream label binding, the outgoing label in the forwarding table is created by the downstream LSR. In the second type, the binding is performed by the upstream LSR and is therefore called upstream label binding; this label becomes the incoming label in the forwarding table.
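The symmetry between the two schemes can be made concrete with a small sketch; the label values and the two-field entry format are illustrative only:

```python
# How local and remote bindings populate one LFIB entry under each scheme.
def lfib_entry(local_label, remote_label, mode):
    if mode == "downstream":
        # local binding -> incoming label, remote (downstream) -> outgoing
        return {"in": local_label, "out": remote_label}
    if mode == "upstream":
        # remote (upstream) binding -> incoming, local -> outgoing
        return {"in": remote_label, "out": local_label}
    raise ValueError(mode)

assert lfib_entry(5, 8, "downstream") == {"in": 5, "out": 8}
assert lfib_entry(5, 8, "upstream") == {"in": 8, "out": 5}
```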

 

The Label Distribution Protocol (LDP) [8] is the best-known mechanism that lets LSRs distribute FEC label bindings to their LDP peers [37], but there are also a number of other protocols for label distribution, such as BGP, PIM and RSVP. Before two LSRs can establish an LDP connection, they have to perform LSR neighbor discovery: an LSR periodically multicasts a Hello Message to a well-known UDP port on the “all routers on this subnet” multicast group. All LSRs listen on this UDP port for the Hello Message and thus learn about their neighbors. When an LSR has learned the address of another LSR by this mechanism, it establishes a TCP connection to that LSR. At this point a bidirectional LDP session can be established between the two LSRs. An example covering how label switching routers get in touch with each other is provided in Appendix C. [13a]

 

Before labels can be exchanged, there is an LDP session initialization in which the LSR peers negotiate which label allocation mode to use. A number of modes for distributing the FEC label bindings exist. The two main alternatives are downstream-on-demand and unsolicited downstream: with downstream-on-demand, an LSR distributes a FEC label binding in response to an explicit request from another LSR, while with unsolicited downstream, label bindings are distributed without an explicit request. Other modes include ordered versus independent LSP control, and liberal versus conservative label retention mode. [8]

 

The Label Request Message is used by an upstream LSR, when it discovers a new FEC, to explicitly request the downstream LSR to assign and advertise a label for this FEC. It is always the downstream LSR that performs the binding for the upstream link. The FEC is transmitted to the downstream LSR in the FEC TLV of the Label Request Message. The receiving LSR should respond with a Label Mapping Message containing a mapping for the requested label, or with a Notification Message indicating why it cannot satisfy the request [8]. Labels are only locally significant, meaning that a label is only useful and relevant on a single link between adjacent LSRs [37]. The peer will in its turn send a Label Request Message to its own peer LSR if it does not already have a mapping in its LIB for the next hop. The next hop is a field in the LFIB describing the next router towards the egress LSR to which labeled packets are forwarded. These routers are chosen according to a shortest path or least cost path algorithm. In this way the LFIB is populated.

 

The establishment of an LSP explained so far is independent control LSP establishment. In the second method, ordered control LSP establishment, the ingress or egress LSR initiates the LSP setup. Label assignment is controlled in an orderly manner from the egress to the ingress of the LSP [18b]. That is, a Label Request Message must be sent to each LSR along the path from its upstream LSR, and no label bindings can be allocated before the message has reached the egress LSR. The Label Mapping Message can then be sent along the path in the reverse direction towards the ingress LSR. At each LSR along the path the label binding is allocated and added to its LFIB.
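The ordered mapping phase can be simulated in a few lines. This is a sketch under simplifying assumptions (the request phase is implicit, the egress simply pops, and router names and label values are invented):

```python
# Simulate ordered-control label assignment: bindings are allocated from
# the egress back towards the ingress, populating each LSR's LFIB entry.
def ordered_setup(path, alloc):
    """path: LSR names ingress-first; alloc yields fresh label values.
    Returns each LSR's incoming/outgoing label binding for the LSP."""
    lfib = {}
    out_label = None                  # the egress pops: no outgoing label
    for lsr in reversed(path):        # mappings flow egress -> ingress
        in_label = next(alloc)        # label this LSR advertises upstream
        lfib[lsr] = {"in": in_label, "out": out_label}
        out_label = in_label
    return lfib

lfib = ordered_setup(["R1", "R2", "R3"], iter([101, 102, 103]))
assert lfib["R3"] == {"in": 101, "out": None}   # egress of the LSP
assert lfib["R2"] == {"in": 102, "out": 101}    # transit swap 102 -> 101
```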

 

Sidnie Feit of The Standish Group has helped us understand what the LIB contributes to MPLS. The LIB contains all of the label information that an LSR has learned from its downstream neighbors, both on demand and unsolicited. This information can include the FEC Address Prefix, the neighbor LSR identifier, the neighbor’s IP address and FEC to label bindings. Since the LIB also contains unsolicited information, there will be entries that are not on the best path and consequently will not be used for forwarding. The LIB is not used to map incoming labels to outgoing labels.

 

The methods explained so far are control components that enable the establishment of forwarding state between adjacent LSRs solely based on information in routing tables or from a management system [27]. These methods lack the ability to establish label forwarding state on all the LSRs along an explicit route and to reserve resources along that route. These and some other properties constitute the base of constraint-based routing. There are two possible methods to achieve constraint-based LSPs: RSVP Traffic Engineering (RSVP-TE) and constraint-based routing LDP (CR-LDP). These signaling protocols enable MPLS to control the path of a packet by explicitly specifying the intermediate routers [15], and the route is calculated at a single point at the edge of the network. The two mechanisms work fairly similarly, so only one of them will be described further.

 

CR-LDP [27] uses the Label Request Message in LDP to establish constraint-based routing. To accomplish this, LDP has been extended with new type-length-values (TLVs) in addition to the common LDP TLVs. A TLV is an object description used in several protocols [49]. The new TLVs for LDP are called Constraint-based Routing TLVs (CR-TLVs). When one wishes to create a constraint-based routed LSP (CR-LSP), the Label Request Message must carry at least the LSPID TLV, and may carry one or more of the optional CR-TLVs in its Optional Parameters field. The LSPID TLV gives the CR-LSP an identity that can be used for modifying the LSP. When using CR-LDP it is possible to specify explicit routing and what resources to allocate during LSP establishment.

2.4.3     The forwarding component


The forwarding component consults the Label Forwarding Information Base (LFIB) to find out how to forward incoming MPLS frames to the next LSR. The LFIB has, as described in chapter 2.4.2, been populated by the control component.

Figure 16: Label forwarding information base (LFIB) structure [18b]

The LFIB maintained by an LSR consists of a sequence of entries, where each entry consists of an incoming label and one or more subentries. Each subentry consists of an outgoing label, an outgoing interface and the next hop address (see Figure 16). An individual entry may contain more than one subentry in order to handle multicast forwarding. In addition to the information that controls where a packet is forwarded, an entry in the forwarding table may include information related to what resources the packet may use, for example in which outgoing queue the packet should be placed. [13]
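The structure from Figure 16 can be sketched directly; the label values, interface names and next hops below are illustrative:

```python
# Sketch of the LFIB structure: incoming label -> list of subentries,
# where more than one subentry supports multicast forwarding.
from dataclasses import dataclass

@dataclass
class Subentry:
    out_label: int
    out_interface: str
    next_hop: str

lfib = {
    3: [Subentry(7, "if2", "LSR2")],                              # unicast
    5: [Subentry(8, "if1", "LSR4"), Subentry(9, "if3", "LSR5")],  # multicast
}

assert lfib[3][0].next_hop == "LSR2"
assert len(lfib[5]) == 2   # a multicast entry replicates the packet
```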

 

An LSR may maintain either a single forwarding table or a forwarding table for each of its interfaces. With the first alternative, the handling of a packet is determined solely by the label carried in the packet. With the second, the handling of a packet is determined not just by the label carried in the packet but also by the interface on which the packet arrives. An LSR may use either option, or a combination of both. [13]

 

One important property of the forwarding algorithm used by label switching is that an LSR can obtain all the information needed to forward a packet as well as to decide what resources the packet may use in just one memory access. This is because [13]:

a)      An entry in the forwarding table contains all the information needed to forward a packet as well as to decide what resources the packet may use.

b)      The label carried in the packet provides an index to the entry in the forwarding table that should be used for forwarding the packet.

The ability to obtain both forwarding and resource reservation information in just one memory access makes label switching suitable as a technology for high forwarding performance. [13]

2.4.4     An example of forwarding

The routing example below illustrates the basic operation of MPLS in support of unicast routing. Using conventional IP routing protocols and LDP, the Label Switching Routers (LSRs) build up routing tables supplemented with labels, called Label Information Bases (LIBs). In Figure 17, nodes A, B, C, and D are hosts not configured with MPLS. LSR1 is the ingress LSR, LSR2 is a transit LSR, and LSR3 is the egress LSR [37].

Figure 17: Label swapping and forwarding in MPLS [37].

LSR1 in Figure 17 receives an IP datagram from user node A on interface 0, addressed to node C. LSR1 is the ingress LSR and performs a longest match lookup between the destination address in the datagram and the prefixes in its LIB. Other FEC to label binding procedures in its LIB are performed as well. In this way the initial label for the IP datagram is found, and a label header encapsulates the IP datagram. The other forwarding properties, the next hop router and the outgoing interface, are looked up in LSR1’s LFIB. The labeled IP datagram is forwarded with label 3 to the next hop LSR, which is LSR2, on output interface 2.

 

When LSR2 receives the packet, only the label header is processed. LSR2 strips the label off and uses it as the lookup index in the incoming label column of its LFIB. The corresponding outgoing label for incoming label 3 is 7; LSR2 replaces the incoming label with this outgoing label in the label header and forwards the labeled packet to LSR3 on interface 2. This is called label switching.

 

The egress LSR likewise processes only the label header and looks up the incoming label in its LFIB. LSR3 detects that it is the egress of the LSP, since the next hop router is itself, and removes the label header from the incoming packet. The remainder of the packet, which is the same IP datagram as LSR1 received, is then forwarded on interface 2 to node C.
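The whole walk-through, using the labels 3 and 7 from the example, can be simulated in a few lines; the per-LSR table format is a simplification of the LFIB:

```python
# Simulation of the Figure 17 example: LSR1 pushes label 3, LSR2 swaps
# 3 for 7, and LSR3 pops the label header before delivering to node C.
LFIBS = {
    "LSR2": {3: 7},      # transit: incoming label -> outgoing label
    "LSR3": {7: None},   # egress: next hop is itself, pop the header
}

def traverse(datagram):
    packet = {"label": 3, "ip": datagram}   # LSR1 (ingress) pushes label 3
    for lsr in ("LSR2", "LSR3"):
        out = LFIBS[lsr][packet["label"]]
        if out is None:                     # egress: remove label header
            return packet["ip"]
        packet["label"] = out               # transit: label switching
    return packet["ip"]

assert traverse("A->C datagram") == "A->C datagram"   # same datagram out
```

Note that only the ingress ever examines the IP header; the transit and egress LSRs act on the label alone.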


MECHANISMS FOR OAM ON MPLS IN LARGE IP BACKBONE NETWORKS (c) 2002 Hallstein Lohne, Johannes Vea, a graduate thesis written for AUC/ERICSSON