
5 Conclusion

The Gigabit Testbed Initiative, by creating a new model for network research, has had a major impact on both education and industry. Bringing together network and application researchers, integrating the computer science and telecommunications communities, creating academia-industry-government research teams, and leveraging government funding to obtain substantial contributions from industry, all within a single, orchestrated project spanning the country, provided a type of research collaboration not previously seen.

The coupling of application and network technology research from project inception was a major step forward for both new technology development and applications progress. Having applications researchers involved from the start of the project allowed network researchers to obtain early feedback on their network designs from a user's perspective, and allowed network performance to be evaluated using actual user traffic. Similarly, application researchers could learn how the network impacts their application designs through early deployment of prototype software. Perhaps most significantly, they could proceed to investigate networked application concepts without first waiting for the new networks to become commercially available.

The coupling of computer network researchers, who have largely come from the field of computer science, with the carrier telecommunications community provided another important dimension of integration. The development of computer communications networks and carrier-operated networks have historically proceeded along two separate paths with relatively little cross-fertilization. The testbeds allowed the two communities to work together, allowing each to better appreciate the problems and solutions of the other.

From a research perspective, the testbed initiative created close collaborations among investigators from academia, government research laboratories, and industrial research laboratories. In addition to participants from leading universities, the national laboratories included Lawrence Berkeley Laboratory, Los Alamos National Laboratory, and JPL, along with the NSF-sponsored National Center for Supercomputing Applications, Pittsburgh Supercomputing Center, and San Diego Supercomputer Center; industry research contributors included IBM Research, Bellcore, GTE Laboratories, AT&T Bell Laboratories, BellSouth Research, and MCNC.

Another important dimension of the testbed model was its funding structure, in which government funding was used to leverage significantly larger contributions by industry. A major industry contribution was made by the carriers in the form of SONET and other transmission facilities within each testbed at gigabit or near-gigabit rates. The value of this contribution cannot be overstated: not only were such services nonexistent at the time, but had they existed, they would have been unaffordable under normal tariff conditions. Because the testbeds gave the carriers an opportunity to learn about potential applications of high speed networks while benefiting from collaboration with the government-funded researchers in network technology experiments, the carriers were, in turn, willing to provide new high speed wide-area experimental facilities.

The Initiative resulted in significant technology transfers to the commercial sector. As a direct result of their participation in the project, two of the researchers at Carnegie Mellon University founded a local area ATM switch startup company, FORE Systems. This was the first such local ATM company formed, and provided a major stimulus for the emergence of high speed local area networking products. Applications research in the testbeds also resulted in technology transfers to industry. Examples are the DCABB software developed in the Nectar testbed for distributing large combinatorial problems over a network, which spawned a startup company to commercialize the technology for use in manufacturing, retail distribution, and other industries, and the Express metacomputing control software developed in the Casa testbed, which was transferred to the marketplace.

Other technology transfers involved the Hilda HIPPI measurement device, developed as part of the Vistanet effort by MCNC, and the HIPPI-SONET wide-area gateway developed by LANL for the Casa testbed. Both of these systems have been successfully commercialized by the private sector. In other cases, new high speed networking products were developed by industry in direct response to the needs of the testbeds, for example HIPPI fiber optic extenders and high speed user-side SONET equipment. Additionally, major technology transfers occurred through the migration of students who had worked in the testbeds to industry to implement their work in company products.

On a larger scale, the testbeds directly led to the formation of three statewide high speed initiatives undertaken by carriers participating in the testbeds. The North Carolina Information Highway (NCIH) was formed by BellSouth and GTE as a result of their Vistanet testbed involvement to provide a 622 Mbps ATM/SONET network throughout the state. Similarly, the NYNET experimental network was formed in New York state by NYNEX as a result of their Aurora testbed involvement, and the California Research and Education Network (CalREN) was created by Pacific Bell as a result of their Casa testbed participation.

Testbed Results and Technology Trends

While the testbed work spanned a five-year period, its experimental technology base was formed largely by the platform and component technologies available in the 1990-93 timeframe. Moreover, while the testbed final reports generally reflect platform processor capabilities available in 1993 (for example, the first generations of the Alpha workstation processor and of the Paragon supercomputer), most testbed hardware prototypes were actually based on 1990 component technology. Prototypes which interfaced with vendor platforms also constrained platform upgrades of components such as workstation I/O buses, and so the latter were also representative of circa 1990 technology.

Some of the testbed results which are likely to be impacted by platform and transmission technology advances are discussed in the following paragraphs.

Host I/O

Workstation-class platforms used in the testbeds did not support gigabit speeds, even with the hardware and software optimizations introduced by the testbed work. They were constrained primarily by their DRAM memory speed and I/O bus and interrupt architectures, but also by their processing speed when simultaneously handling protocol and application processing. Maximum achievable speeds were on the order of several hundred Mbps.

For platform technology circa 1996, on the other hand, memory speed has improved by about a factor of five through the use of SDRAM, peak I/O bus speed has improved from about 500 Mbps to 2 Gbps (64-bit PCI), and Alpha processor clock speed has increased from 133 MHz to 500 MHz. Since the work by CMU predicted that a factor of four improvement in Alpha speed would allow full TCP/IP use of a 622 Mbps ATM/SONET link using only 25% of the processor, 1996 workstation platforms should in fact support gigabit speeds in parallel with application execution. Further, the improvements in memory and I/O bus bandwidths could obviate the need for eliminating operating system copying and for some of the other optimizations required in the testbed platforms.
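The scaling argument above can be checked with simple arithmetic. The sketch below uses only the figures cited in this section (the CMU prediction and the Alpha clock speeds) and assumes, as the extrapolation did, that protocol-processing throughput scales roughly linearly with clock speed:

```python
# Back-of-envelope check of the CMU TCP/IP extrapolation cited above.
# Assumption (from the testbed analysis): protocol-processing capacity
# scales roughly linearly with processor clock speed.

alpha_1993_mhz = 133        # Alpha clock speed circa 1993
alpha_1996_mhz = 500        # Alpha clock speed circa 1996
link_mbps = 622             # ATM/SONET link rate in the CMU prediction

# CMU predicted a 4x Alpha speedup would drive 622 Mbps of TCP/IP
# while using only 25% of the processor.
predicted_speedup = 4
predicted_cpu_fraction = 0.25

actual_speedup = alpha_1996_mhz / alpha_1993_mhz    # about 3.76x
# CPU fraction needed at the actual 1996 speedup, under linear scaling:
cpu_needed = predicted_cpu_fraction * predicted_speedup / actual_speedup
print(f"speedup {actual_speedup:.2f}x; "
      f"{link_mbps} Mbps TCP/IP needs about {cpu_needed:.0%} of the CPU")
```

The actual 1996 speedup falls just short of the predicted factor of four, so the estimated CPU fraction comes out slightly above 25%, leaving most of the processor free for application execution as the text argues.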

MPP supercomputers used in the testbeds, while representing state-of-the-art machine architectures, all had difficulty in making gigabit port speeds usable by applications. In some cases this was due to bottlenecks caused by operating system software not developed for high speed network I/O, and in other cases was due to insufficient internal hardware interconnect bandwidth. While new-generation hardware and operating systems can be expected to alleviate these problems, a more difficult one may remain: how to distribute an application over a large set of internal MPP nodes so that data can be moved at high speeds between the application and the network. Some initial solutions to this problem were developed in the testbeds, but much remains to be learned.


Although striping was used for local area distribution over SONET in one of the testbeds, other high speed transmission technologies were available in the 1990-92 timeframe which allowed non-striped gigabit operation over local distances, in particular HIPPI and Glink. Additional commercial technologies have since evolved which provide 622 Mbps and higher speeds for both transmission and switching within the local area. Given also the advances in host platform and component hardware, striping would not appear to be necessary for local area networking at gigabit speeds.

For wide-area transmission, the testbeds made use of the highest speed SONET equipment becoming available in 1990, which in most cases meant that striping over multiple 155 Mbps channels was required to achieve a 622 Mbps user rate. SONET has since become the dominant new carrier technology and equipment speeds have increased, but will the rate of this increase render striping unnecessary for future high speed end-users? There are at least three factors to consider in answering this question.

First, the high cost of wide-area carrier facilities slows the rate at which older equipment is replaced, making it likely that 155 Mbps or lower SONET user access rates will be around for a while. Second, while new SONET equipment can generally support synchronous 622 Mbps user access, overall user demand and cost considerations may result in continued deployment of 155 Mbps or lower user access rates by carriers. Third, even if 622 Mbps user access is widely deployed, applications such as high performance metacomputing might drive up the speeds some users need at a rate faster than that at which wide-area access rates increase.
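The striping technique discussed in the preceding paragraphs, splitting one high-rate user stream across several lower-rate channels, can be sketched as a simple round-robin distribution. This is a hypothetical illustration, not testbed software; real HIPPI/SONET striping hardware must also compensate for inter-channel skew and delay variation:

```python
# Round-robin striping sketch: e.g. a 622 Mbps user stream carried
# over four 155 Mbps SONET channels, as in the wide-area testbeds.
# Hypothetical illustration only.

def stripe(cells, num_channels=4):
    """Distribute fixed-size cells round-robin across channels."""
    channels = [[] for _ in range(num_channels)]
    for i, cell in enumerate(cells):
        channels[i % num_channels].append(cell)
    return channels

def unstripe(channels):
    """Reassemble the original stream by reading channels in rotation."""
    num = len(channels)
    total = sum(len(ch) for ch in channels)
    return [channels[i % num][i // num] for i in range(total)]

# The round trip preserves cell order:
stream = list(range(8))
assert unstripe(stripe(stream)) == stream
```

The extra per-channel hardware visible even in this sketch (one queue and one interface per channel) is the cost the text weighs against simply waiting for higher single-channel rates.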

Another factor which may make striping attractive in the future is the use of optical wavelength division multiplexing (WDM) to exploit the inherent but largely still untapped bandwidth of optical fiber, both in wide and local area environments. New WDM technology is being deployed by carriers which provides 16x2.5 Gbps channels in a single optically-amplified fiber with an aggregate bandwidth of 40 Gbps. Plans for WDM equipment which will provide 40x2.5 Gbps channels, for an aggregate bandwidth of 100 Gbps, have been announced.

Whether striping will be necessary and/or desirable in a WDM environment depends in part on whether individual user requirements increase beyond that of a single WDM channel, on whether carriers deploy WDM facilities, and, in part, on which technology proves the most cost-effective for a given total rate. Given the additional hardware ports needed for a striping solution, striping is more likely to be invoked in situations pushing the state-of-the-art of expensive high bit-rate hardware or to deal with legacy equipment.


Switches used in the testbeds were dominated by specialized hardware architectures, in contrast to lower speed software packet switches. ATM cell switches ranged from Batcher-Banyan to simple crossbar hardware architectures, and HIPPI switches also consisted of crossbar hardware. This hardware approach is now finding its way into high speed vendor router products for the Internet, with a combination of hardware switching and traditional software switching being introduced to the marketplace.

While the hardware advances discussed above for host I/O might also be applied to some switch processing functions, the short duration of ATM cells at gigabit speeds will likely require that hardware continue to play an important role for ATM, both in switching and in host I/O cell operations. For PTM switching, this question may depend on whether trunk rates continue to rise through advances in TDM technology or will flatten out through use of multiple WDM channels, allowing continued processing advances to make software packet switching an economical alternative.

Network Protocols and Algorithms

While technology advances cannot change the speed of light, they have increased the choices that can be made in congestion control and quality-of-service protocols and algorithms. Testbed results predicted that about a factor of eight improvement over 1990 processor technology was required to execute a simplified weighted fair queuing algorithm on a 622 Mbps ATM cell stream, and 1996 processor technology has roughly provided that increase. To the extent that more sophisticated algorithms can help with this problem area, then, processing technology trends should have a positive impact.
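A simplified weighted fair queuing scheduler of the kind referred to above can be sketched as a virtual-finish-time algorithm over fixed-size ATM cells. This is a minimal illustrative sketch with hypothetical flow names and weights, not the testbed implementation:

```python
import heapq

# Simplified weighted fair queuing over fixed-size cells.
# Each flow's virtual finish time advances by 1/weight per cell sent;
# the scheduler always transmits the cell with the smallest finish time,
# so a flow of weight w receives roughly w shares of the link.

def wfq_schedule(queues, weights, num_cells):
    """queues: dict flow -> list of cells; weights: dict flow -> weight.
    Returns the first num_cells (flow, cell) pairs in transmission order."""
    finish = {}                      # per-flow virtual finish time
    pos = {f: 0 for f in queues}     # next cell index per flow
    heap = []
    for f, q in queues.items():
        if q:
            finish[f] = 1.0 / weights[f]
            heapq.heappush(heap, (finish[f], f))
    order = []
    while heap and len(order) < num_cells:
        _, f = heapq.heappop(heap)
        order.append((f, queues[f][pos[f]]))
        pos[f] += 1
        if pos[f] < len(queues[f]):
            finish[f] += 1.0 / weights[f]
            heapq.heappush(heap, (finish[f], f))
    return order

# A flow with weight 2 gets twice the cell slots of a weight-1 flow:
out = wfq_schedule({"a": ["cell"] * 4, "b": ["cell"] * 4},
                   {"a": 2, "b": 1}, 6)
```

Even in this stripped-down form, each arriving cell costs a priority-queue operation, which is the per-cell processing burden that the factor-of-eight estimate refers to.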

Transmission advances could also have an important impact here. The quality-of-service and congestion control problems would be simplified if bandwidth were plentiful and inexpensive, allowing resource contention to be reduced through larger bandwidth allocations. To date this has not occurred, but the future exploitation of optical fiber bandwidth could conceivably bring this situation about if user demand does not increase correspondingly -- a big if, however, given the recent history of computer networking.

Future Research Infrastructure

Since the beginning of computing, the communications dimension has not been able to keep pace with the rest of the field. Among the barriers most often cited are costs of the technology and its deployment over large geographic areas, the regulated nature of the industry, and market forces for applications that could make use of it and sustain its advance. Moreover, most people find it difficult to invest their own time or resources in a new technology until it becomes sufficiently mature that they can try it out and visualize what they might do with it and when they might use it.

A recent National Research Council report [1] includes a summary of the major advances in the computing and communications fields from the beginning of time-sharing through scalable parallel computing, just prior to when the gigabit testbeds described in this report were producing their early results. Using that report's model, the gigabit testbeds would be characterized as being in the early conceptual and experimental development and application phase. The first technologies were emerging and people were attempting to understand what could be done with them, long before there was an understanding of what it would take to engineer and deploy the technologies on a national scale to enable new applications not yet conceived.

The gigabit testbeds produced a demonstration of what could be done in a variety of application areas, and educated people in the research community, industrial sector, and government to provide a foundation for the next phase. Within the federal government, the testbed initiative was a stimulus for the following events:

· The HPCCIT report on Information and Communication Futures identified high performance networking as a Strategic Focus.

· The National Science and Technology Council, Committee on Computing and Communications held a two-day workshop which produced a recommendation for major upgrades to networking among the HPC Centers to improve their effectiveness, and to establish a multi-gigabit national-scale testbed for pursuing more advanced networking and applications work.

· The first generation of scalable networking technologies emerged based on scalable computing technologies.

· The DoD HPC Modernization program initiated a major upgrade in networking facilities for their HPC sites.

· The Advanced Technology Demonstration gigabit testbed in the Washington DC area was implemented.

· The defense and intelligence communities began to experiment with higher performance networks and applications.

· The NSF Metacenter and vBNS projects were initiated.

· The all-optical networking technology program began to produce results with the potential for 1000x increases in transmission capacity.

To initiate the next phase of gigabit research and build on the results of the testbeds, CNRI proposed that the Government continue to fund research on gigabit networks using an integrated experimental national gigabit testbed involving multiple carriers, with gigabit backbone links provided by the carriers at no cost using secondary (i.e., backup) channels, and with switches and access lines paid for by the Government and participating sites. However, the costs for access lines proved to be excessive, and at the time the Government was also unable to justify the funding needed for a national gigabit network capability; instead, several efforts were undertaken by the Government to provide lower speed networks.

The role for a national gigabit network within the research community is clear. In the not too distant future, we expect the costs for accessing a national gigabit network on a continuing basis will be more affordable and the need for it will be more evident, particularly its potential for stimulating the exploration of new applications. The results of the initial testbed project have clearly had a major impact on breaking down the barriers to putting high performance networking on the same kind of growth curve as high performance computing, thus enabling a new generation of national and global-scale high performance systems which integrate networking and computing.
