
4.4.1 Direct ATM Connections

Researchers in the Aurora testbed were focused on the use of workstations for gigabit user applications, dictated primarily by their view that workstation-class computers would replace supercomputers as high end platforms in the years ahead. A second major thrust in Aurora was the use of ATM as a local area technology to directly connect workstations to ATM switches. The Bellcore Sunshine ATM prototype switch was developed as part of Aurora's activities and used for the local workstation connections, as well as for wide area switching, through deployment of the switch at multiple Aurora sites. An additional opportunity for direct connections was provided by the VuNet desk area networking technology developed in Aurora by MIT.

A major challenge of the direct ATM approach was dealing with the small ATM cell size at the 622 Mbps link speed available in Aurora, including both cell transmission/reception and especially the segmentation and reassembly (SAR) of cell streams into higher layer protocol units. Several distinct efforts were undertaken to explore this domain: Penn, Bellcore and MIT developed board-level solutions which interfaced to the workstation I/O bus, and MIT also explored a novel coprocessor approach. The Bellcore and initial Penn efforts used SONET transmission (Figure 4-11), while later Penn versions and the MIT VuNet effort used Glink technology.

Figure 4-11. Directly Connected ATM/SONET

A key question for the board-level approaches was how much functionality should be handled by specialized hardware and software on the board itself, and how much should be done by software using the workstation's main processor (while leaving enough processor bandwidth to also run applications).

Processing

The three board approaches represented distinctly different choices in how SAR processing was done. The VudBoard approach developed by MIT for VuNet relegated this functionality to the main workstation processor, with the I/O board used only to transmit and receive cells on its physical layer interface. The other two board approaches both carried out the SAR and ATM layer processing functions on the I/O board. The Penn approach used an all-hardware implementation, while the Osiris board developed by Bellcore used two Intel 80960 processors and onboard software.
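As a concrete illustration of the work the VuNet split leaves to the host, the C fragment below sketches software segmentation of a higher-layer frame into 48-byte cell payloads. The board_tx_cell() entry point and other names are hypothetical, standing in for whatever minimal transmit interface a VudBoard-style board would expose; AAL framing is omitted.

    /* Minimal sketch of host-side segmentation (VuNet-style split): the
     * workstation CPU chops a higher-layer frame into 48-byte cell payloads
     * and hands each cell to a simple I/O board.  Names are illustrative. */

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define CELL_PAYLOAD 48            /* ATM cell payload size in bytes */

    struct atm_cell {
        uint8_t header[5];             /* GFC/VPI/VCI/PT/CLP + HEC */
        uint8_t payload[CELL_PAYLOAD];
    };

    /* Hypothetical board entry point: queue one cell for transmission. */
    extern void board_tx_cell(const struct atm_cell *cell);

    /* Segment 'len' bytes of a higher-layer frame on the host CPU.
     * The final cell is zero-padded; AAL framing is not shown. */
    void host_segment_and_send(const uint8_t header[5],
                               const uint8_t *frame, size_t len)
    {
        struct atm_cell cell;
        memcpy(cell.header, header, 5);

        for (size_t off = 0; off < len; off += CELL_PAYLOAD) {
            size_t n = len - off < CELL_PAYLOAD ? len - off : CELL_PAYLOAD;
            memcpy(cell.payload, frame + off, n);
            memset(cell.payload + n, 0, CELL_PAYLOAD - n);  /* pad last cell */
            board_tx_cell(&cell);
        }
    }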

The AAL protocol used in VuNet was a modified version of AAL5 in which a simpler checksum was used to ease software processing requirements. The Penn and Osiris boards supported AAL 3/4, which unlike AAL5 carries SAR header and trailer fields within each ATM cell of the adaptation layer frame.
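The structural difference between the two adaptation layers can be seen in the following illustrative C layouts, which follow the standard AAL field definitions rather than any particular Aurora implementation: AAL 3/4 spends four bytes of every cell on SAR fields, while AAL5 confines its overhead to an eight-byte trailer at the end of the frame.

    /* Illustrative cell-payload layouts contrasting AAL 3/4 and AAL5; a
     * sketch of the standard field definitions, not driver code. */

    #include <stdint.h>

    /* AAL 3/4: each 48-byte cell payload carries a 2-byte SAR header
     * (segment type, sequence number, MID) and a 2-byte SAR trailer
     * (length indicator, per-cell CRC-10), leaving 44 bytes of user data. */
    struct aal34_cell_payload {
        uint16_t sar_header;       /* ST (2 bits), SN (4 bits), MID (10 bits) */
        uint8_t  user_data[44];
        uint16_t sar_trailer;      /* LI (6 bits), CRC-10 */
    };

    /* AAL5: cells carry 48 bytes of user data with no per-cell fields; the
     * last cell of the frame ends with an 8-byte CPCS trailer plus padding. */
    struct aal5_trailer {
        uint8_t  uu;               /* CPCS user-to-user indication */
        uint8_t  cpi;              /* common part indicator */
        uint16_t length;           /* length of the CPCS payload */
        uint32_t crc32;            /* frame CRC (VuNet used a simpler checksum) */
    };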

The MIT coprocessor approach interfaced the network physical layer directly to the registers of a specially designed coprocessor, which was intended to operate analogously to a floating point coprocessor. The goal was to use the workstation processor for ATM and SAR processing while avoiding the bottlenecks introduced by the traditional I/O bus architecture [2].

Bus Transfers and Data Movement

While processing requirements could be dealt with in a reasonably straightforward manner through the use of special hardware or I/O board hardware/software provisioning, the workstation bus architectures presented a more formidable obstacle. Two general methods were available for moving data between the I/O interface and main host memory: programmed I/O (PIO) and direct memory access (DMA). For the workstations used in Aurora, the DMA choice resulted in higher data transfer rates.
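The two transfer styles are sketched below in C. The register and descriptor layouts are hypothetical rather than those of any Aurora board; the point is simply that PIO consumes one bus transaction per word of data on the host processor, while DMA requires only that the host program the transfer and handle its completion.

    /* Sketch contrasting programmed I/O and DMA; register names are
     * hypothetical, not those of any Aurora board. */

    #include <stddef.h>
    #include <stdint.h>

    /* PIO: the CPU copies each word through a device FIFO register. */
    void pio_read_cell(volatile uint32_t *dev_fifo, uint32_t *buf, size_t words)
    {
        for (size_t i = 0; i < words; i++)
            buf[i] = *dev_fifo;       /* one bus read per 32-bit word */
    }

    /* DMA: the CPU programs the transfer and the board moves the data. */
    struct dma_regs {
        volatile uint32_t addr;       /* physical address of host buffer */
        volatile uint32_t count;      /* transfer length in bytes */
        volatile uint32_t control;    /* start bit, direction, interrupt enable */
    };

    void dma_start(struct dma_regs *dma, uint32_t phys_addr, uint32_t nbytes)
    {
        dma->addr    = phys_addr;
        dma->count   = nbytes;
        dma->control = 0x1;           /* start; completion signaled by interrupt */
    }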

The short ATM cell size and high rates combined to reveal significant latency-oriented shortcomings in the workstation I/O architectures. The VuNet and Osiris approaches were originally implemented using a DEC Turbochannel bus and one ATM cell per transfer, and found their maximum achievable speed constrained by the latencies associated with bus hardware access and transfer control mechanisms.

The transfer of individual cells was necessitated in the VuNet case by its use of the workstation processor and memory for ATM and SAR processing in conjunction with a minimal I/O board implementation. In the Osiris case, while all cell-oriented processing was performed on the I/O board, a choice was made to transfer cell data directly into higher layer protocol buffers in host memory, eliminating additional latencies which would otherwise be incurred if the higher layer packets were first assembled in buffers on the I/O board. However, the per-cell transfer overhead of the bus significantly constrained the achievable sustained transfer rate.
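The direct-placement idea can be sketched as follows; the structures and the dma_cell_to_host() helper are illustrative only, not the Osiris design itself. Per-VC reassembly state determines where in host memory each arriving cell payload belongs, so the higher layer frame is assembled in place at the cost of one bus transaction per 48-byte cell.

    /* Sketch of per-cell direct placement into host protocol buffers.
     * Structures and the dma_cell_to_host() call are illustrative only. */

    #include <stdint.h>

    struct vc_reassembly {
        uint32_t host_buf_phys;    /* physical base of the higher-layer buffer */
        uint32_t offset;           /* where the next cell payload goes */
    };

    /* Hypothetical helper: one bus transaction moving 48 bytes to host memory. */
    extern void dma_cell_to_host(uint32_t dst_phys, const uint8_t payload[48]);

    void place_cell(struct vc_reassembly *vc, const uint8_t payload[48])
    {
        dma_cell_to_host(vc->host_buf_phys + vc->offset, payload);
        vc->offset += 48;          /* one bus setup per 48 bytes of data */
    }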

The Penn approach used an IBM RS6000 MicroChannel bus and a linked-list data management architecture on the I/O board which allowed larger multi-cell segments of data to be transferred across the bus. Initial experiments revealed a bottleneck in the operation of the workstation I/O controller, which was replaced with an improved version by IBM later in the project. Of more lasting concern was the interrupt overhead associated with the RS6000 architecture, which led to the choice of a periodic rather than event-driven interrupt design in order to ensure adequate processing bandwidth for applications.
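A sketch of the two ideas, chained descriptors for multi-cell bus transfers and timer-driven completion handling, is shown below. The layouts and names are illustrative and do not reflect the actual Penn hardware; in a real adapter the descriptor links would be physical addresses the board can follow.

    /* Sketch of a chained-descriptor scheme in the spirit described above:
     * the board follows a linked list of buffer descriptors, so many cells'
     * worth of data cross the bus as one large transfer, and the host
     * services completions from a periodic timer rather than per-event
     * interrupts.  Layouts and names are illustrative. */

    #include <stdint.h>

    struct xfer_desc {
        uint32_t buf_phys;             /* physical address of a multi-cell segment */
        uint32_t nbytes;               /* segment length (many cells' payloads) */
        uint32_t done;                 /* set by the board on completion */
        struct xfer_desc *next;        /* next descriptor, or NULL (physical
                                          address in a real board design) */
    };

    /* Called from a periodic timer tick instead of a per-transfer interrupt,
     * bounding interrupt overhead regardless of the cell arrival rate. */
    void service_completions(struct xfer_desc **head)
    {
        while (*head && (*head)->done) {
            /* hand (*head)->buf_phys / nbytes to the protocol stack here */
            *head = (*head)->next;
        }
    }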

A major bottleneck to achieving gigabit transfer rates, common to all of these efforts, was memory buffer copying by the operating system, and part of the work above included exploring techniques which moved data from the I/O board directly into application memory space. This area is discussed further in later parts of this section.
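One common form such techniques can take, shown here only as an illustration since the Aurora implementations are described later, is to map the board's receive buffer region directly into the application's address space so that DMA completes into user memory. The device path and region size below are placeholders, not Aurora driver names.

    /* Illustration of avoiding the kernel copy: map the board's receive
     * buffer region directly into the application's address space.  The
     * device path and region size are placeholders. */

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define RX_REGION_BYTES (256 * 1024)   /* assumed size of the mapped region */

    void *map_rx_region(const char *devpath)  /* e.g. "/dev/atm0" (placeholder) */
    {
        int fd = open(devpath, O_RDWR);
        if (fd < 0)
            return NULL;

        void *p = mmap(NULL, RX_REGION_BYTES, PROT_READ | PROT_WRITE,
                       MAP_SHARED, fd, 0);
        close(fd);                         /* mapping persists after close */
        return p == MAP_FAILED ? NULL : p;
    }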

ATM Performance

With a change to two-cell DMA transfers, the use of a later-generation DEC-Alpha workstation with the Turbochannel bus, and the operating system changes discussed below, the Bellcore Osiris board achieved a transfer rate of 516 Mbps using the UDP transport protocol. This was essentially the full throughput available after AAL/ATM/SONET overheads were subtracted from the 622 Mbps link rate, and was also near the maximum transfer rate that could be expected from the hardware architecture given its bus and memory bandwidth constraints.

The initial Penn interface achieved a maximum transfer rate of about 90 Mbps using the UDP protocol and a single 155 Mbps SONET link on the RS6000, and it was estimated that this performance would scale proportionally for a 622 Mbps link. A subsequent implementation of the Penn interface on an HP PA-RISC workstation achieved a transfer rate of 215 Mbps using the TCP protocol and a Glink physical layer [2].

The MIT VuNet DMA interface achieved a maximum application transfer rate of approximately 100 Mbps using a UDP-like transport protocol and a DEC-Alpha Turbochannel workstation. While this was well below the several hundred Mbps rate allowed by the workstation's bus and memory bandwidths, the premise of the VuNet effort was to "ride the workstation processor technology curve", that is, to use the simplest specialized hardware interface possible and to learn how to architect the software so that the result would naturally scale to higher rates when used with faster processors.

The MIT coprocessor implementation was not completed due to schedule problems, precluding actual measurements. Extensive simulations were carried out to predict its performance relative to other approaches, however, and are discussed in detail in [2].

In summary, the testbed work in this area demonstrated both that direct cell-based interfacing of workstations to gigabit ATM networks was feasible and that some aspects of workstation architectures needed to be redesigned to make such interfacing practical. In particular, the design space explorations in Aurora laid the groundwork for the subsequent development of high speed ATM SAR chipsets by industry, and provided an understanding of the improvements needed to workstation I/O architectures for gigabit ATM networking.
