
4.3.1 HIPPI-ATM

Three distinct efforts were undertaken in the testbeds to interwork local HIPPI networks with wide area ATM. These efforts were undertaken in part out of necessity, since a device had to be developed in each case to allow interconnection of facilities within the testbed, and in part for the opportunities they presented to investigate high-speed interworking issues. A major distinction among these interworking approaches is whether they terminate HIPPI locally, as an IP-based router would do, or extend HIPPI connections and associated control signaling across the ATM network. The Nectar and Blanca solutions used local termination, while the Vistanet solution used the extension method.

The three HIPPI efforts are represented by the architecture of Figure 4-1B.

Nectar HAS

The HAS design goals included allowing up to two 800 Mbps HIPPI connections at each Nectar site to use the full 2.5 Gbps SONET link bandwidth available at the wide area HAS interface, an architecture that would allow HIPPI hosts to communicate with non-HIPPI hosts at remote sites, and the ability to add support for wide area ATM network management standards as they evolved.

To maximize flexibility and allow interworking of different local area technologies, HIPPI connections were terminated locally by the HAS, and HIPPI header information was stripped from packets before they were sent into the ATM network. PVCs were used in the testbed prototypes, with a PVC pre-established for each pair of hosts needing to communicate across the ATM network. A Management and Signaling Processor (MSP) module in the HAS allowed mappings to be established in HAS tables between VCIs and local HIPPI identifiers, and provided a means for later incorporation of SVC signaling standards.
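
As a rough illustration of this kind of table-driven mapping, the sketch below pairs each communicating host pair with a pre-established PVC; the identifiers, VCI values, and site names are hypothetical and do not reflect the actual HAS/MSP data structures.

    # Hypothetical sketch of a HAS-style mapping between pre-established PVCs
    # and local HIPPI destinations; all values are illustrative only.
    pvc_table = {
        # (local HIPPI identifier, remote site): VCI of the pre-established PVC
        (0x0101, "remote-site-A"): 32,
        (0x0102, "remote-site-B"): 33,
    }

    def vci_for(hippi_id, remote_site):
        """Outbound: pick the PVC pre-established for this host pair."""
        return pvc_table[(hippi_id, remote_site)]

    def hippi_dest_for(vci):
        """Inbound: map a received VCI back to the local HIPPI identifier."""
        for (hippi_id, _site), v in pvc_table.items():
            if v == vci:
                return hippi_id
        raise KeyError(f"no HIPPI mapping for VCI {vci}")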

The HAS supported AAL types 1 and 3/4 (AAL 5 had not been defined when the HAS design was begun). The data in each packet received from the HIPPI module was segmented and formatted using one of the AAL types and sent as an ATM cell stream over the ATM/SONET link. In the absence of well-defined ATM network flow control standards, a simple open-loop pacing mechanism was used at each transmitting node to prevent steady-state overflow of destination buffers.
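
A minimal sketch of such open-loop pacing is shown below: cells are simply spaced at a fixed interval derived from an assumed destination drain rate, with no feedback from the network. The rate and the helper function are example values and names, not HAS parameters.

    import time

    CELL_BYTES = 53                                  # standard ATM cell size
    ASSUMED_DRAIN_RATE_BPS = 400_000_000             # assumed receiver drain rate
    CELL_INTERVAL = CELL_BYTES * 8 / ASSUMED_DRAIN_RATE_BPS  # minimum spacing between cells (s)

    def paced_send(cells, send):
        """Open-loop pacing: transmit cells no faster than the assumed drain rate."""
        next_time = time.monotonic()
        for cell in cells:
            delay = next_time - time.monotonic()
            if delay > 0:
                time.sleep(delay)        # hold the next cell until its release time
            send(cell)
            next_time += CELL_INTERVAL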

A design choice that had a major impact on HAS complexity was the way in which striping over the available SONET channels was handled. As described in the Transmission and Switching section earlier in this report, a choice was made to stripe at the packet level across the STS-3c SONET channels within the HAS, with 8 such channels available for carrying the data sent in an 800 Mbps HIPPI channel. A distinct ATM VC was assigned to each STS-3c channel, with all cells of a given packet sent over that channel. The received cells on each VC were stripped of their ATM/AAL overhead and reassembled, and the packet was passed by the ATM module to the HIPPI or other local network module for reordering if necessary. This eliminated special synchronization and other striping-related processing in the ATM and SONET layers and simplified the overall HAS design. A maximum of 1024 VCIs was available to each ATM/AAL module.
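
The sketch below illustrates the packet-level striping idea under these assumptions: each whole packet is assigned to one of the parallel channels (and hence one VC), and a per-packet sequence number lets the receiver restore the original order after per-VC reassembly. The function names and structure are illustrative, not the HAS implementation.

    NUM_CHANNELS = 8   # STS-3c channels carrying one 800 Mbps HIPPI stream

    def stripe(packets):
        """Assign each packet, round-robin, to one channel; all of its cells use that channel's VC."""
        for seq, pkt in enumerate(packets):
            channel = seq % NUM_CHANNELS
            yield channel, seq, pkt

    def reorder(received):
        """Receiver side: packets arrive reassembled per VC; restore order by sequence number."""
        return [pkt for _ch, _seq, pkt in sorted(received, key=lambda t: t[1])]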

Prototype HAS devices were installed at CMU and at the Pittsburgh Supercomputing Center for testbed experimentation. Using a HIPPI tester that could generate packets of up to 7 KBytes and with striping over four 155 Mbps channels, a maximum throughput of 420 Mbps was measured. Based on theoretical calculations, a packet size of 11 KBytes would give very close to the maximum predicted throughput of approximately 430 Mbps (the maximum bandwidth available to data in a 622 Mbps path, after AAL, ATM, and SONET overheads are allowed for, is 496 Mbps).
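
The 496 Mbps figure can be checked with a back-of-the-envelope calculation, shown below; the framing details are simplified, and the constants are standard SONET/ATM/AAL 3/4 values rather than measured HAS numbers.

    STS3C_PAYLOAD_MBPS = 149.76    # SONET payload rate of one STS-3c channel
    CHANNELS = 4                   # striping over four ~155 Mbps channels
    AAL34_DATA_PER_CELL = 44       # AAL 3/4 leaves 44 of each 53-byte cell for data

    sonet_payload = STS3C_PAYLOAD_MBPS * CHANNELS         # ~599 Mbps after SONET overhead
    data_bw = sonet_payload * AAL34_DATA_PER_CELL / 53    # ~497 Mbps after ATM + AAL 3/4 overhead
    print(round(data_bw))          # ~497, consistent with the ~496 Mbps figure quoted above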

Blanca HXA

The HIPPI-ATM Adapter (HXA) was developed for use in the Blanca testbed by UIUC in collaboration with AT&T Bell Labs. Its design was driven primarily by immediate Blanca testbed interconnection needs, and it was thus more limited in scope than the Nectar effort with respect to ATM network management issues. It did, however, implement IP-layer processing of packet headers, providing hands-on experience in this area. Like the HAS, it also provided a direct HIPPI-ATM transfer mode for routerless operation.

The HXA terminated HIPPI connections internally and was connected to an Xunet switch port on the ATM side via an optical fiber link. A proprietary transmission protocol was used on this link for local transport of ATM cells between an AT&T line card in the HXA and a line card on the Xunet switch. Two simplex physical HIPPI ports were provided by the HXA for communication with one HIPPI host at a time, either through a direct connection or through one or more HIPPI switches. Latencies associated with multiplexing among different HIPPI hosts connected to the HXA through a HIPPI switch were determined by the maximum connection time discipline imposed upon the hosts, for example breaking connections after each packet.

Routing and header processing functions were done by software running on a RISC microprocessor in the HXA. A table lookup was done to map HIPPI or IP addresses into ATM PVCs and vice versa, with HIPPI headers stripped off when an IP header was present in the packet. Each HIPPI or IP packet was encapsulated with an AT&T proprietary AAL5-like protocol (AALX), segmented into cells using AT&T's 54-byte Xunet ATM format, and sent via the optical fiber line card to the Xunet switch.
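
As a rough sketch of the per-packet segmentation step, the code below splits an encapsulated frame into fixed 54-byte cells on a single VC; the header/payload split within the 54-byte cell is an assumption made for the example, not the actual Xunet cell layout.

    CELL_SIZE = 54                         # Xunet ATM cell size in bytes
    ASSUMED_HEADER = 5                     # assumed per-cell header size (illustrative)
    PAYLOAD = CELL_SIZE - ASSUMED_HEADER

    def segment(frame: bytes, vci: int):
        """Split an encapsulated frame into fixed-size cells carrying the given VCI."""
        for off in range(0, len(frame), PAYLOAD):
            chunk = frame[off:off + PAYLOAD].ljust(PAYLOAD, b"\x00")  # pad the final cell
            header = vci.to_bytes(2, "big") + bytes(ASSUMED_HEADER - 2)
            yield header + chunk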

Up to three 1-KB bursts of a HIPPI packet could be buffered by the HXA in the HIPPI-to-ATM direction. After initial header processing, received bursts were processed as they arrived and the resulting ATM cells sent to the Xunet switch. In the ATM-to-HIPPI direction, two AALX frames could be stored to provide a double-buffered output on the HIPPI side of the HXA, which established connections and transferred HIPPI bursts to a host or switch following receipt of a complete AALX frame. Multiplexing of AALX frames received on multiple VCs was done in the Xunet switch, which provided a maximum-size AALX buffer for each VC supported on a switch output port.
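
The double-buffered output can be pictured as a two-slot queue, as in the hypothetical sketch below: one complete AALX frame is handed to the HIPPI side while the next is being reassembled, and a third frame cannot be accepted until a slot frees up.

    from collections import deque

    class DoubleBufferedOutput:
        """Illustrative two-slot buffer for complete AALX frames awaiting HIPPI transfer."""

        def __init__(self):
            self.slots = deque()             # holds at most two complete frames

        def frame_complete(self, frame):
            """Store a fully reassembled AALX frame; refuse a third until one drains."""
            if len(self.slots) == 2:
                raise BufferError("both slots full; reassembly must wait")
            self.slots.append(frame)

        def next_for_hippi(self):
            """Oldest complete frame, ready for HIPPI connection setup and burst transfer."""
            return self.slots.popleft() if self.slots else None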

Although burst flow control was used by the HXA between itself and local HIPPI hosts, explicit flow control was not implemented between the HXA and the Xunet switch. Rather, TCP was assumed to be used by endpoints to adapt steady-state flows to available bandwidth and to recover from occasional dropped packets, with large buffers provided in the Xunet switches to minimize packet losses for bursty traffic.

Prototype HXAs were deployed at Champaign and Madison Xunet switch sites. A maximum HXA transfer rate of approximately 370 Mbps was achieved using a HIPPI tester at the endpoints, which was sufficient to fill the maximum available user bandwidth on the 622 Mbps Xunet switch-trunk path.

Vistanet NTA

The designers of the Vistanet NTA (Network Terminal Adapter) chose not to terminate HIPPI locally at each site, but rather to extend HIPPI connections directly across the intervening ATM/SONET network. This was motivated by a desire to minimize the need to define new protocols and to meet the relatively short project schedule. The NTA thus functioned as a very sophisticated three-way HIPPI extender which included AAL4/ATM/SONET wide area network functionality, network control and measurement capabilities, and operation with a Fetex 150 central office ATM switch. A single unstriped 622 Mbps OC-12c SONET link was available in each direction between an NTA and the ATM switch.

A key driver of the NTA design was the need to provide flow control of HIPPI packets across the ATM network without relying on end-to-end protocols, since TCP or other transport layer protocols were not expected to be available on one of the key Vistanet application hosts. Since ATM flow control was not well defined, this was solved by extending HIPPI burst-level flow control across the NTA-to-NTA paths, with NTA buffering used to eliminate the effects of path delays.

Each NTA supported one local HIPPI host and a simultaneous bidirectional ATM connection with each of the other two Vistanet sites. The NTA contained two 32KB receive buffers, each dedicated to one of the other sites, and a single 32KB transmit buffer, allowing a sufficient number of 1KB HIPPI bursts to be buffered to avoid transmission gaps due to roundtrip propagation times and other path latencies. HIPPI connection and flow control signals were carried in the AAL4 headers of the ATM cells, augmented by the use of special ATM cells when necessary to carry out control signaling.
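
A rough check that a 32KB buffer can mask the round-trip path delay is sketched below; the link rate is taken from the OC-12c figure, but the fiber distance is an assumed value for illustration, not a measured Vistanet parameter.

    LINK_BPS = 622_000_000               # OC-12c line rate (overheads ignored for this estimate)
    BUFFER_BYTES = 32 * 1024             # one 32KB NTA buffer (thirty-two 1KB bursts)
    ASSUMED_FIBER_KM = 30                # assumed one-way fiber distance between sites
    PROP_DELAY_PER_KM = 5e-6             # ~5 microseconds per km in optical fiber

    rtt = 2 * ASSUMED_FIBER_KM * PROP_DELAY_PER_KM     # ~0.3 ms round-trip propagation
    drain_time = BUFFER_BYTES * 8 / LINK_BPS           # ~0.42 ms to drain the buffer at line rate
    print(drain_time > rtt)              # True: buffered bursts cover the round-trip delay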

To avoid setup delays, pairs of VCs were pre-established through the switch between each NTA and its two neighbors (a VC pair was required on each simplex data path to allow the return of control information). A host HIPPI connection request would be immediately accepted by the local NTA, with data buffered by the NTAs until the destination host HIPPI connection could be established, eliminating the roundtrip connection setup delays that would otherwise occur with HIPPI extenders. HIPPI source routing was used to identify VC mappings and HIPPI destinations via inspection of packet header information.
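
The mapping from a packet's source route to the pre-established VC pair can be pictured as a simple lookup keyed on the destination site, as in the hypothetical sketch below (the route encoding, site names, and VCI values are all illustrative).

    # Pre-established VC pairs toward the other two Vistanet sites (values hypothetical).
    vc_pairs = {
        "site-B": (64, 65),    # (data VC, reverse control VC)
        "site-C": (66, 67),
    }

    def select_vcs(source_route):
        """Pick the VC pair for the destination named by the route's final hop."""
        destination = source_route[-1]
        return vc_pairs[destination]

    # Example: a connection routed through a local HIPPI switch toward site C
    print(select_vcs(["local-switch", "site-C"]))    # (66, 67)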

Implementation of the NTA was partitioned into two major components, a "core NTA" portion which handled the data flows and consisted primarily of hardware, and a control and measurement subsystem (CMS) portion which consisted of a Sun workstation and software. The CMS was connected to the Internet, allowing Vistanet experimenters to configure VCs for their experiments through use of the Fetex 150 switch proprietary SVC protocol supported by the core NTA. More generally, the CMS was used both by system developers to diagnose problems and by researchers to collect traffic measurements for analysis (these topics are discussed further in later sections).

The NTAs were deployed at each of the Vistanet sites and used for Vistanet application experiments and traffic studies. NTA throughputs of approximately 450 Mbps were achieved, which was close to the maximum throughput available on the 622 Mbps SONET link after allowing for SONET, ATM, and AAL4 overhead.
