The MAGIC Project: From Vision to Reality

Barbara Fuller, Mitretek Systems
Ira Richer, Corporation for National Research Initiatives

Abstract
In the MAGIC project, three major components — an ATM internetwork, a distributed, network-based storage system, and a terrain visualization application — were designed, implemented, and integrated to create a testbed for demonstrating real-time, interactive exchange of data at high speeds among distributed resources. The testbed was developed as a system, with special consideration to how performance was affected by interactions among the components. This article presents an overview of the project, with emphasis on the challenges associated with implementing a complex distributed system, and with coordinating a multi-organization collaborative project that relied on distributed development. System-level design issues and performance measurements are described, as is a tool that was developed for analyzing performance and diagnosing problems in a distributed system. The management challenges that were encountered and some of the lessons learned during the course of the three-year project are discussed, and a brief summary of MAGIC-II, a recently initiated follow-on project, is given.
Gigabit-per-second networks offer the promise of a major advance in computing and communications: high-speed access to remote resources, including archives, time-critical data sources, and processing power. Over the past six years, there have been several efforts to develop gigabit networks and to demonstrate their utility, the most notable being the five testbeds that were supported by ARPA and National Science Foundation (NSF) funding: Aurora, BLANCA, CASA, Nectar, and VISTAnet [1]. Each of these testbeds comprised a mix of applications and networking technology, with some focusing more heavily on applications and others on networking. The groundbreaking work done in these testbeds had a significant impact on the development of high-speed networking technology and on the rapid progress in this area in the 1990s.

It became clear, however, that a new paradigm for application development was needed in order to realize the full benefits of gigabit networks. Specifically, network-based applications and their supporting resources, such as data servers, must be designed explicitly to operate effectively in a high-speed networking environment. For example, an interactive application working with remote storage devices must compensate for network delays. The MAGIC project, which is the subject of this article, is the first high-speed networking testbed that was implemented according to this paradigm. The major components of the testbed were considered to be interdependent parts of a system, and wherever possible they were designed to optimize end-to-end system performance rather than individual component performance.
The objective of the MAGIC (which stands for “Multidimensional Applications and Gigabit Internetwork Consortium”) project was to build a testbed that could demonstrate real-time, interactive exchange of data at gigabit-per-second rates among multiple distributed resources. This objective was pursued through a multidisciplinary effort involving concurrent development and subsequent integration of three testbed components:
• An innovative terrain visualization application that requires massive amounts of remotely stored data
• A distributed image server system with performance sufficient to support the terrain visualization application
• A standards-based high-speed internetwork to link the computing resources required for real-time rendering of the terrain
The three-year project began in mid-1992 and involved the participation, support, and close cooperation of many diverse organizations from government, industry, and academia. These organizations had complementary skills and had the foresight to recognize the benefits of collaboration. The principal MAGIC research participants were:
• Earth Resources Observation System Data Center, U.S. Geological Survey (EDC)¹
• Lawrence Berkeley National Laboratory, U.S. Department of Energy (LBNL)¹
• Minnesota Supercomputer Center, Inc. (MSCI)¹
• MITRE Corporation¹
• Sprint
• SRI International (SRI)¹
• University of Kansas (KU)¹
• U S WEST Communications, Inc.
Other MAGIC participants that contributed equipment, facilities, and/or personnel to the effort were:
• Army High-Performance Computing Research Center (AHPCRC)
• Battle Command Battle Laboratory, U.S. Army Combined Arms Command (BCBL)
• Digital Equipment Corporation (DEC)
• Nortel, Inc./Bell Northern Research
• Southwestern Bell Telephone
• Splitrock Telecom

This article presents an overview of the MAGIC project with emphasis on the challenges associated with implementing a complex distributed system. Companion articles [2, 3] focus on a LAN/WAN gateway and a performance analysis tool that were developed for the MAGIC testbed. The article is organized as follows. The following section briefly describes the three major testbed components: the internetwork, the image server system, and the application. The third section discusses some of the system-level considerations that were addressed in designing these components, and the fourth section presents some high-level performance measurements. The fifth (affectionately entitled “Herding Cats”) and sixth sections describe how this multi-organizational collaborative project was coordinated, and the technical and managerial lessons learned. Finally, the last section provides a brief summary of MAGIC-II, a follow-on project begun in early 1996.

The work reported here was performed while the authors were with the MITRE Corp. in Bedford, MA, and was supported by the Advanced Research Projects Agency (ARPA) under contract F19628-94-D-001.

¹ These organizations were funded by ARPA.
Figure 1. Planned functionality of the MAGIC testbed: image server system (storage and transmission of raw image tiles) → distributed processing (real-time image processing) → image server system (storage and transmission of processed tiles) → rendering engine (rendering and visualization of terrain) → workstations (over-the-shoulder view of terrain).
Overview of the MAGIC Testbed
One of the primary goals of the MAGIC project was to create a testbed to demonstrate advanced capabilities that would not be possible without a very high-speed internetwork. MAGIC accomplished this goal by implementing an interactive terrain visualization application, TerraVision, that relies on a distributed image server system (ISS) to provide it with massive amounts of data in real time. The planned functionality of the MAGIC testbed is depicted in Fig. 1. Currently, TerraVision uses data processed off-line and stored on the ISS. In the future the application will be redesigned to enable real-time image processing as well as real-time terrain visualization (see the last section). Note that the workstations which house the application, the servers of the ISS, and the “over-the-shoulder” tool (see the subsection entitled “The Terrain Visualization Application”), as well as those that will perform the on-line image processing, can reside anywhere on the network.
The MAGIC Internetwork
The MAGIC internetwork, depicted in Fig. 2, includes six high-speed local area networks (LANs) interconnected by a wide area network (WAN) backbone. The backbone, which spans a distance of approximately 600 miles, is based on synchronous optical network (SONET) technology and provides OC-48 (2.4 Gb/s) trunks, and OC-3 (155 Mb/s) and OC-12 (622 Mb/s) access ports. The LANs are based on asynchronous transfer mode (ATM) technology. Five of the LANs — those at BCBL in Fort Leavenworth, Kansas, EDC in Sioux Falls, South Dakota, MSCI in Minneapolis, Minnesota, Sprint in Overland Park, Kansas, and U S WEST in Minneapolis, Minnesota — use FORE Systems ASX-100 and ASX-200 switches with OC-3c and 100 Mb/s TAXI interfaces. The ATM LAN at KU in Lawrence, Kansas, uses a DEC AN2 switch, a precursor to the DEC GigaSwitch/ATM, with OC-3c interfaces. The network uses permanent virtual circuits (PVCs) as well as switched virtual circuits (SVCs) based on both SPANS, a FORE Systems signaling protocol, and the ATM Forum User-Network Interface (UNI) 3.0 Q.2931 signaling standard.
Figure 2. Configuration of the MAGIC ATM internetwork. FORE ATM switches with attached workstations serve the EDC, BCBL, MSCI, U S WEST, and Sprint sites, and a DEC AN2 switch with a gateway serves KU; the sites shown are Minneapolis, MN, Sioux Falls, SD, Fort Leavenworth, KS, Kansas City, KS, and Lawrence, KS. Backbone trunks are SONET OC-48, with SONET OC-12 or OC-3 access links. Workstations include DEC, SGI, and Sun machines for the ISS and over-the-shoulder viewing, and SGI machines for terrain visualization.
Figure 3. Relationship between tile resolutions and perspective view: image tiles of terrain data at 8-, 4-, 2-, and 1-meter resolution are combined to produce the perspective view. (Source: SRI International)
The workstations at the MAGIC sites include models from DEC, SGI, and Sun. As part of MAGIC, an AN2/SONET gateway with an OC-12c interface was developed to link the AN2 LAN at KU to the MAGIC backbone [2].

In addition to implementing the internetwork, a variety of advanced networking technologies were developed and studied under MAGIC. A high-performance parallel interface (HIPPI)/ATM gateway was developed to interface an existing HIPPI network at MSCI to the MAGIC backbone. The gateway is an IP router rather than a network-layer device such as a broadband integrated services digital network (B-ISDN) terminal adapter, and was implemented in software on a high-performance workstation (an SGI Challenge). This architecture provides a programmable platform that can be modified for network research, and in the future can readily take advantage of more powerful workstation hardware. In addition, the platform is general-purpose; that is, it is capable of supporting multiple HIPPI interfaces as well as other interfaces such as the fiber distributed data interface (FDDI).

Software was developed to enable UNIX hosts to communicate using the Internet Protocol (IP) over an ATM network. This IP/ATM software currently runs on SPARCstations under SunOS 4.1 and includes a device driver for the FORE SBA series of ATM adapters. It supports PVCs, SPANS, and UNI 3.0 signaling, as well as the “classical” IP and Address Resolution Protocol (ARP) over ATM model [4]. The software should be extensible to other UNIX operating systems, ATM interfaces, and IP/ATM address-resolution and routing strategies, and will facilitate research on issues associated with the integration of ATM networks into IP internets.

In order to enhance network throughput, flow-control schemes were evaluated and applied, and IP/ATM host parameters were tuned. Experiments showed that throughput close to the maximum theoretically possible could be attained on OC-3 links over long distances. To achieve high throughput, both the maximum transmission unit (MTU) and the Transmission Control Protocol (TCP) window must be large, and flow control must be used to ensure fairness and to avoid cell loss if there are interacting traffic patterns [5, 6].
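As a rough illustration of why large windows matter (this calculation is ours, not the article's; the 20 ms round-trip time is an assumed figure for a path of roughly 600 miles plus switching delays), the bandwidth-delay product of a single TCP connection over OC-3 can be estimated as follows:

# Rough bandwidth-delay-product estimate for a long-haul OC-3 ATM path.
# The round-trip time is an illustrative assumption, not a MAGIC measurement.

LINE_RATE_OC3 = 155.52e6          # b/s, SONET OC-3 line rate
ATM_EFFICIENCY = 48.0 / 53.0      # 48 payload bytes per 53-byte ATM cell
RTT = 0.020                       # s, assumed round-trip time (~600-mile path + switches)

payload_rate = LINE_RATE_OC3 * ATM_EFFICIENCY      # usable b/s before IP/AAL5 overhead
bdp_bytes = payload_rate * RTT / 8                 # bytes that must be "in flight"

print(f"payload rate ~ {payload_rate/1e6:.0f} Mb/s")
print(f"bandwidth-delay product ~ {bdp_bytes/1024:.0f} KB")
# Several hundred kilobytes must be in flight, far more than the classic 64 KB
# TCP window, so large windows and a large MTU are needed to approach capacity.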
The Terrain Visualization Application
TerraVision allows a user to view and navigate through (i.e., “fly over”) a representation of a landscape created from aerial or satellite imagery [7]. The data used by TerraVision are derived from raw imagery and elevation information which have been preprocessed by a companion application known as TerraForm. TerraVision requires very large amounts of data in real time, transferred at rates that are both very bursty and high in the steady state. Steady traffic occurs when a user moves smoothly through the terrain, whereas bursty traffic occurs when the user jumps (“teleports”) to a new position. TerraVision is designed to use imagery data that are located remotely and supplied to the application as needed by means of a high-speed network. This design enables TerraVision to provide high-quality, interactive visualization of very large data sets in real time. TerraVision is of direct interest to a variety of organizations, including the Department of Defense. For example, the ability of a military officer to see a battlefield and to share a common view with others can be very effective for command and control.

Terrain visualization with TerraVision involves two activities: generating the digital data set required by the application, and rendering the image. MAGIC’s approach to accomplishing these activities is described below. Enhancements to the application that provide additional features and capabilities are also described.
Data Preparation — In order to render an image, TerraVision requires a digital description of the shape and appearance of the subject terrain. The shape of the terrain is represented by a two-dimensional grid of elevation values known as a digital elevation model (DEM). The appearance of the terrain is represented by a set of aerial images, known as orthographic projection images (ortho-images), that have been specially processed (i.e., ortho-rectified) to eliminate the effects of perspective distortion and are in precise alignment with the DEM. To facilitate processing, distributed storage, and high-speed retrieval over a network, the DEM and images are divided into small fixed-size units known as tiles.

Low-resolution tiles are required for terrain that is distant from the viewpoint, whereas high-resolution tiles are required for close-in terrain. In addition, multiple resolutions are required to achieve perspective. These requirements are addressed by preparing a hierarchy of increasingly lower-resolution representations of the DEM and ortho-image tiles in which each level is at half the resolution of the previous level. The tiled, multiresolution hierarchy and the use of multiple resolutions to achieve perspective are shown in Fig. 3.
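To illustrate how such a hierarchy scales, the sketch below enumerates the levels of a hypothetical tile pyramid; the 128 × 128-pixel tile size and the 32k × 32k-pixel base mosaic are assumptions chosen for the example, not MAGIC's actual parameters.

# Sketch of a multiresolution tile pyramid: each level halves the resolution,
# so each level has roughly one quarter as many tiles as the level below it.
# Tile size and base mosaic size are illustrative assumptions.

TILE_PIXELS = 128                 # tile edge length in pixels (assumed)
BASE_PIXELS = 32768               # full-resolution mosaic edge length (assumed)
BASE_RES_M = 1.0                  # meters per pixel at the finest level

def pyramid_levels(base_pixels, tile_pixels, base_res_m):
    levels = []
    pixels, res, level = base_pixels, base_res_m, 0
    while pixels >= tile_pixels:
        tiles_per_edge = pixels // tile_pixels
        levels.append((level, res, tiles_per_edge * tiles_per_edge))
        pixels //= 2              # halving the resolution ...
        res *= 2                  # ... doubles the ground distance per pixel
        level += 1
    return levels

for level, res, n_tiles in pyramid_levels(BASE_PIXELS, TILE_PIXELS, BASE_RES_M):
    print(f"level {level}: {res:4.0f} m/pixel, {n_tiles:7d} tiles")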
Rendering of the terrain on the screen is accomplished by combining the DEM and ortho-image tiles for the selected area at the appropriate resolution. As the user travels over the terrain, the DEM tiles and their corresponding ortho-image tiles are projected onto the screen using a perspective transform whose parameters are determined by factors such as the user’s viewpoint and field of view. The mapping of a transformed ortho-image to its DEM and the rendering of that image are shown in Fig. 4.

The data set currently used in MAGIC covers a 1200 km² exercise area of the National Training Center at Fort Irwin, California, and is about 1 Gpixel in size. It is derived from aerial photographs obtained from the National Aerial Photography Program archives and DEM data obtained from the U.S. Geological Survey. The images are at approximately 1 m resolution (i.e., the spacing between pixels in the image corresponds to 1 m on the ground). The DEM data are at approximately 30 m resolution (i.e., elevation values in meters are at 30 m intervals).
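As a quick consistency check (our arithmetic, not a figure from the article), the quoted size follows directly from the area and the pixel spacing:

# Area at 1 m per pixel -> pixel count.
area_km2 = 1200
pixels = area_km2 * 1_000_000      # 1 km^2 contains 10^6 one-metre pixels
print(pixels)                      # 1_200_000_000, i.e. roughly "about 1 Gpixel"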
Software for producing the ortho-images and creating the multiresolution hierarchy of DEM and ortho-image tiles was developed as part of the MAGIC effort. These processes were performed “off-line” on a Thinking Machines Corporation Connection Machine (CM-5) supercomputer owned by the AHPCRC and located at MSCI. The tiles were then stored on the distributed servers of the ISS and used by terrain visualization software residing on rendering engines at several locations.
Figure 4. Mapping an ortho-image onto its digital elevation model: the aerial terrain image is mapped onto the elevation model, and the elevation data are rendered with the orthographic image of the terrain. (Source: SRI International)

Image Rendering — TerraVision provides for two modes of visualization: two-dimensional (2-D) and three-dimensional (3-D). The 2-D mode allows the user to fly over the terrain, looking only straight down. The user controls the view by means of a 2-D input device such as a mouse. Since virtually no processing is required, the speed at which images are generated is limited by the throughput of the system comprising the ISS, the network, and the rendering engine.

In the 3-D mode, the user controls the visualization by means of an input device that allows six degrees of freedom in movement. The 3-D mode is computationally intensive, and satisfactory visualization requires both high frame rates (i.e., 15–30 frames/s) and low latencies (i.e., no more than 0.1 s between the time the user moves an input device and the time the new frame appears on the screen). High frame rates are achieved by using a local very-high-speed rendering engine, an SGI Onyx, with a cache of tiles covering not only the area currently visible to the user, but also adjacent areas that are likely to be visible in the near future. A high-speed search algorithm is used to identify the tiles required to render a given view; for example, as noted above, perspective (i.e., 3-D) views require higher-resolution tiles in the foreground and lower-resolution tiles in the background. TerraVision requests the tiles from the ISS, places them in memory, and renders the view. Latency is minimized by separating image rendering from data input/output (I/O) so that the two activities can proceed simultaneously rather than sequentially (see the section entitled “Design Considerations”).
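The selection of tile resolutions for a view can be illustrated with a small sketch that assigns a pyramid level to each visible tile based on its distance from the viewpoint; the distance cutoff and the doubling rule are assumptions made for the illustration, not TerraVision's actual search algorithm.

# Illustrative tile-level selection: nearer terrain gets finer tiles.
# Level 0 is the finest level; each higher level halves the resolution.
# The 500 m cutoff and the doubling rule are arbitrary assumptions.
import math

def level_for_distance(distance_m, cutoff_m=500.0, max_level=5):
    """Finest level within cutoff_m, one level coarser per doubling beyond it."""
    if distance_m <= cutoff_m:
        return 0
    return min(int(math.log2(distance_m / cutoff_m)) + 1, max_level)

def tiles_for_view(viewpoint, visible_tiles):
    """visible_tiles: iterable of (tile_x, tile_y, center_x_m, center_y_m)."""
    vx, vy = viewpoint
    return [(level_for_distance(math.hypot(cx - vx, cy - vy)), tx, ty)
            for tx, ty, cx, cy in visible_tiles]

# Tiles roughly 200 m, 1 km, and 4 km from the viewpoint:
print(tiles_for_view((0.0, 0.0),
                     [(0, 0, 200.0, 0.0), (3, 1, 1000.0, 0.0), (9, 2, 4000.0, 0.0)]))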
Additional Features and Capabilities — TerraVision includes two additional features: superposition of fixed and mobile objects on the terrain, and registration of the user’s viewpoint to a map. Both of these features are made possible by precisely aligning the DEM and imagery data with a world coordinate system as well as with each other. A number of buildings and vehicles have been created and stored on the rendering engine for display as an overlay on the terrain. The locations of vehicles can be updated periodically by transferring vehicle location data, acquired with a global positioning system receiver, to the rendering engine for integration into the terrain visualization displays. Registration of the user’s viewpoint to a map enables the user to specify the area he wishes to explore by pointing to it, and it aids the user in orienting himself.

In addition, an over-the-shoulder (OTS) tool was developed to allow a user at a remote workstation to view the terrain as it is rendered. The OTS tool is based on a client/server design and uses X Window System calls. The user can view the entire image on the SGI screen at low resolution, and can also select a portion of the screen to view at higher resolution. The frame rate varies with the size and resolution of the viewed image, and with the throughput of the workstation.

The Image Server System
The ISS stores, organizes, and retrieves the processed imagery and elevation data required by TerraVision for interactive rendering of the terrain. The ISS consists of multiple coordinated workstation-based data servers that operate in parallel and are designed to be distributed around a WAN. This architecture compensates for the performance limitations of current disk technology: a single disk can deliver data at a rate that is about an order of magnitude slower than that needed to support a high-performance application such as TerraVision. By using multiple workstations with multiple disks and a high-speed network, the ISS can deliver data at an aggregate rate sufficient to enable real-time rendering of the terrain. In addition, this architecture permits location-independent access to databases, allows for system scalability, and is low in cost. Although redundant arrays of inexpensive disks (RAID) systems can deliver higher throughput than traditional disks, unlike the ISS they are implemented in hardware and, as such, do not support multiple data layout strategies; furthermore, they are relatively expensive. Such systems are therefore not appropriate for distributed environments with numerous data repositories serving a variety of applications.

The ISS, as currently used in MAGIC, comprises four or five UNIX workstations (including Sun SPARCstations, DEC Alphas, and SGI Indigos), each with four to six fast SCSI disks on two to three SCSI host adapters. Each server is also equipped with either a SONET or a TAXI network interface. The servers, operating in parallel, access the tiles and send them over the network, which delivers the aggregate stream to the host. This process is illustrated in Fig. 5. More details about the design and operation of the ISS can be found in [8].
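The aggregate-rate argument can be made concrete with rough numbers; the per-disk rate, the disk and server counts, and the interface limit below are illustrative assumptions, not measurements of the MAGIC ISS.

# Rough aggregate-throughput estimate for a parallel, disk-striped server system.
# All per-component numbers are illustrative assumptions, not MAGIC measurements.

N_SERVERS = 5                  # workstations in the ISS
DISKS_PER_SERVER = 5           # fast SCSI disks per server
DISK_MBPS = 3.0                # assumed sustained MB/s per mid-1990s SCSI disk
NET_LIMIT_MBPS = 12.0          # ~100 Mb/s TAXI interface per server, in MB/s

per_server = min(DISKS_PER_SERVER * DISK_MBPS, NET_LIMIT_MBPS)
aggregate = N_SERVERS * per_server
print(f"per server: {per_server:.1f} MB/s, aggregate: {aggregate:.1f} MB/s "
      f"({aggregate * 8:.0f} Mb/s)")
# A single ~3 MB/s disk falls roughly an order of magnitude short of what an
# interactive application needs; striping tiles across many disks and servers
# recovers the shortfall, up to the per-server network interface limit.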
Design Considerations
In MAGIC, the single most perspicuous criterion of successful operation is that the end user observes satisfactory performance of the interactive TerraVision application. When the user flies over the terrain, the displayed scene must flow smoothly, and when he teleports to an entirely different location, the new scene must appear promptly.
Obtaining such performance might be relatively straightforward if the terrain data were collocated with the rendering engine. However, one of the original premises underlying the MAGIC project is that the data set and the application are not collocated. There are several reasons for this, the most important being that the data set could be extremely large, so it might not be feasible to transfer it to the user’s site. Moreover, experience has shown that in many cases the “owner” of a data set is also its “curator” and may be reluctant to distribute it, preferring instead to keep the data locally to simplify maintenance and updates. Finally, it was anticipated that future versions of the application might work with a mobile user and with fused data from multiple sources, and neither of these capabilities would be practical with local data. Therefore, since the data will not be local, the MAGIC components must be designed to compensate for possible delays and other degradations in the end-to-end operation of the system.

In order to understand system-level design issues, it is necessary to outline the sequence of events that occurs when the user moves the input device, causing a new scene to be generated. TerraVision first produces a list of new tiles required for the scene. This list is sent to an ISS master, which performs a name translation, mapping the logical address of each tile (the tile identifier) to its physical address (server/disk/location on disk). The master then sends each server an ordered list of the tiles it must retrieve. The server discards the previous list (even if it has not retrieved all the tiles on that list) and begins retrieving the tiles on the new list.
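The sketch below mirrors this request flow in simplified form; the table and class names are invented for illustration and do not describe the actual ISS implementation.

# Simplified sketch of the TerraVision -> ISS master -> ISS server request flow.
# Names and data structures are illustrative; they are not the real ISS code.
from collections import defaultdict

# Name-translation table kept by the ISS master:
# logical tile id -> (server, disk, byte offset on disk)
tile_map = {
    "L0/42": ("server1", "disk2", 0x0003_0000),
    "L0/43": ("server2", "disk1", 0x0001_8000),
    "L1/10": ("server1", "disk1", 0x0000_4000),
}

class IssServer:
    def __init__(self, name):
        self.name = name
        self.pending = []                     # ordered list of tiles to retrieve

    def receive_list(self, ordered_tiles):
        # A new list replaces the old one, even if the old one was unfinished.
        self.pending = list(ordered_tiles)

def master_dispatch(request_list, servers):
    """Translate logical tile ids and send each server its ordered retrieval list."""
    per_server = defaultdict(list)
    for tile_id in request_list:              # request_list is ordered by priority
        server, disk, offset = tile_map[tile_id]
        per_server[server].append((tile_id, disk, offset))
    for name, tiles in per_server.items():
        servers[name].receive_list(tiles)

servers = {"server1": IssServer("server1"), "server2": IssServer("server2")}
master_dispatch(["L1/10", "L0/42", "L0/43"], servers)    # TerraVision's request
print(servers["server1"].pending)
print(servers["server2"].pending)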
Thus, the design for the system comprising TerraVision, the ISS, and the internetwork must address the following questions:
• How can TerraVision compensate for tiles that it needs for the next image but that have not yet been received?
• How often should TerraVision request tiles from the ISS?
• Where should the ISS master be located?
• How should tiles be distributed among the ISS disks?
• How can cell loss be minimized near the rendering site, where the tile traffic becomes aggregated and congestion may occur?
Figure 5. Schematic representation of the operation of the ISS: tiles intersected by the path of travel are located on the ISS servers and disks, retrieved in parallel, and transmitted over the network to the application. (Source: Lawrence Berkeley National Laboratory)

Missing Tiles
Network congestion, an overload at an ISS server, or a component failure could result in the late arrival or loss of tiles that are requested by the application. Several mechanisms were implemented to deal with this problem. First, although the entire set of high-resolution tiles cannot be collocated with the application, it is certainly feasible to store a complete set of lower-resolution tiles. For example, if the entire data set comprises 1 Tbyte of high-resolution tiles, then all of the tiles that are five or more levels coarser would occupy less than 1.5 Gbyte, a readily affordable amount of local storage. If a tile with resolution at, say, level 3 is requested but not delivered in time for the image to be rendered, then, until the missing level-3 tile arrives, the locally available coarser tile from level 5 would be used in place of the 16 level-3 tiles.
This substitution manifests itself as the affected portion of the rendered image appearing “fuzzy” for a brief period of time. Temporary substitution of low-resolution tiles for high-resolution tiles is particularly effective for teleporting because that operation requires a large number of new tiles, so it is more likely that one or more will be delayed.
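A minimal sketch of the substitution rule, assuming a quadtree-style tile addressing scheme and a local cache keyed by (level, x, y); both the addressing and the cache layout are assumptions made for the illustration.

# Sketch of coarse-tile substitution: if a requested tile has not arrived, walk
# up the resolution pyramid and reuse the nearest cached ancestor.
# The (level, x, y) quadtree addressing is an assumption for this sketch.

def best_available(cache, level, x, y, coarsest_level):
    """Return (level, tile) for the requested tile or its nearest cached ancestor."""
    while level <= coarsest_level:
        tile = cache.get((level, x, y))
        if tile is not None:
            return level, tile                     # exact tile or a coarser stand-in
        level, x, y = level + 1, x // 2, y // 2    # parent tile covers 2x2 child tiles
    return None   # cannot happen if a complete coarse level is stored locally

# Example: the level-3 tile is missing, but its level-5 ancestor is cached locally.
# One level-5 tile stands in for the 16 level-3 tiles it covers, so the affected
# area simply looks fuzzy until the level-3 tile arrives.
cache = {(5, 1, 0): "coarse ortho-image tile"}
print(best_available(cache, level=3, x=5, y=2, coarsest_level=5))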
Second, TerraVision attempts to predict the path the user will follow, requesting tiles that might soon be needed, and assigning one of three levels of priority to each tile requested. Priority-1 tiles are needed as soon as possible; the ISS retrieves and dispatches these first. This set of tiles is ordered by TerraVision, with the coarsest assigned the highest priority within the set. The reasons are:
• The rendering algorithm needs the coarse tiles before it needs the next-higher-resolution tiles.
• There are fewer tiles at the coarser resolutions, so it is less likely that they will be delayed.
Priority-2 tiles are those that the ISS should retrieve but should transmit only if there are no priority-1 tiles to be transmitted; that is, priority-2 tiles are put on a lower-priority transmit queue in the I/O buffer of each ISS server. (ATM switches would be allowed to drop the cells carrying these tiles.) Priority-3 tiles are those that should be retrieved and cached at the ISS server; these tiles are less likely to be needed by TerraVision. Note that there is a trade-off between “overpredicting” — requesting too many tiles — which would result in poor ISS performance and high network load, and “underpredicting,” which would result in poor application performance.
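The three-level classification might be sketched as follows; the rule used here to split predicted tiles between priorities 2 and 3 is an invented stand-in, not TerraVision's prediction algorithm.

# Illustrative three-level priority assignment for requested tiles.
# "needed_now" are tiles for the current view; "predicted" are tiles along the
# user's predicted path, with a confidence in [0, 1]. The threshold is invented.

def assign_priorities(needed_now, predicted, send_threshold=0.7):
    """Return {1: [...], 2: [...], 3: [...]} of tile ids.

    Priority 1: retrieve and send immediately, coarsest levels first.
    Priority 2: retrieve, send only when no priority-1 tiles are waiting.
    Priority 3: retrieve and cache at the ISS server only.
    """
    priorities = {1: [], 2: [], 3: []}
    # Coarser levels (higher level number) are needed first by the renderer
    # and are fewer in number, so they get the highest priority within the set.
    priorities[1] = sorted(needed_now, key=lambda t: -t[0])
    for tile, confidence in predicted:
        priorities[2 if confidence >= send_threshold else 3].append(tile)
    return priorities

needed = [(0, 12, 7), (2, 3, 1), (1, 6, 3)]                 # (level, x, y)
predicted = [((0, 13, 7), 0.9), ((0, 14, 7), 0.4)]
print(assign_priorities(needed, predicted))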
Finally, a tile will continue to be included in TerraVision’s request list if it is still needed and has not yet been delivered. Thus, tiles or tile requests that are dropped or otherwise “lost” in the network will likely be delivered in response to a subsequent request from the application.
Frequency of Requests
Another trade-off pertains to the frequency at which TerraVision sends its request list to the ISS. If the interval between requests is too large, then some tiles will not arrive when needed, resulting in a poor-quality display; in addition, the ISS will be idle and hence not used efficiently. On the other hand, if the interval is too short, then the request list might contain tiles that are currently in transit from servers to the application; this would result in poor ISS performance and redundant network traffic. For a typical MAGIC configuration, the interval between requests is currently set at 200 ms, a value that was found empirically to yield satisfactory performance. This value is based roughly on the measured latency of the ISS (about 100 ms) and on the estimated time required for a tile request to travel through the network from the TerraVision host to the ISS master and then to the most distant ISS server, plus the time for the tile itself to travel back to the host (perhaps a total of 50 ms). Additional measurements and analysis are needed to more precisely determine the appropriate request frequency as a function of the performance and location of system components and of network parameters.
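The reasoning behind the 200 ms figure can be written out explicitly; the safety margin below is an assumption, while the 100 ms and 50 ms latency figures are the ones quoted above.

# Back-of-the-envelope request-interval estimate, following the reasoning above.
# The 1.3x safety margin is an assumption; the latencies come from the text.

ISS_LATENCY = 0.100        # s, measured ISS latency quoted above
NETWORK_ROUND = 0.050      # s, request out via the master plus tile back (estimate)
SAFETY_MARGIN = 1.3        # assumed headroom so a new list rarely repeats in-transit tiles

interval = SAFETY_MARGIN * (ISS_LATENCY + NETWORK_ROUND)
print(f"suggested request interval ~ {interval*1000:.0f} ms")   # close to the 200 ms used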
Location of ISS Master
Since tile requests flow from TerraVision to the ISS master and thence to the servers themselves, the time for delivering the requests to the servers is minimized when the master is collocated with the TerraVision host. However, locating the master with the host is neither desirable nor practical, for several reasons. The master is logically part of the ISS; therefore, its location should not be constrained by the application. Also, an ISS may be used by several applications concurrently, by multiple simultaneous users of a particular application, or by a user whose host may be unable to support any ISS functionality (e.g., a mobile user). Moreover, replication of the master would introduce problems associated with maintaining consistency among multiple masters when the ISS is in a read/write environment, as it would be when real-time data are being stored on the servers.

To first order, the delivery time of tile requests is limited by the time t for a request to travel from TerraVision to the ISS server most distant from the TerraVision host. Hence, if the master is approximately on the path from the TerraVision host to that server, then t will not be much greater than when the master and host are collocated. Furthermore, in the current MAGIC testbed, t is much smaller than the sum of the disk latency and the network transit time. In other words, there is considerable freedom in choosing the location of the ISS master. Satisfactory system performance has been demonstrated, for example, with the TerraVision host in Kansas City, the ISS master in Sioux Falls, and servers in Minneapolis and Lawrence. Of course, this conclusion might change if faster servers reduce ISS latency considerably, or the geographic span of the network were substantially larger.
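The "on the path" argument can be illustrated with toy numbers; the per-hop latencies below are invented solely to show why an on-path master adds little to the delivery time t.

# Toy comparison of request-delivery time t for different ISS-master placements.
# One-way latencies (in ms) are invented for illustration only.

LAT = {
    ("host", "master_onpath"): 4,   ("master_onpath", "far_server"): 6,
    ("host", "master_offpath"): 8,  ("master_offpath", "far_server"): 12,
    ("host", "far_server"): 10,     # direct path, i.e. master collocated with host
}

t_collocated = LAT[("host", "far_server")]
t_on_path = LAT[("host", "master_onpath")] + LAT[("master_onpath", "far_server")]
t_off_path = LAT[("host", "master_offpath")] + LAT[("master_offpath", "far_server")]

print(t_collocated, t_on_path, t_off_path)    # 10, 10, 20 ms in this toy example
# An on-path master adds essentially nothing to t; and because t is much smaller
# than the disk latency plus the tiles' own network transit time, even an
# off-path master has little effect on end-to-end performance.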
Distribution of Tiles on ISS Servers
The manner in which data are distributed among the servers determines the degree of parallelism, and hence the aggregate throughput, which can be obtained from the ISS. The data placement strategy depends on the application and is a function of data type and access patterns. For example, the retrieval pattern for a database of video clips would be quite different from that for a database of images. A strategy was developed for a terrain visualization type of application that minimizes the retrieval time for a set of tiles: the tiles assigned to a given disk are as far apart as possible in the terrain, in order to maximize parallelism by minimizing the probability that tiles on a request list are on the same disk; and on each disk, tiles that are near each other in the terrain are placed as close as possible, to minimize retrieval time. Although this was shown to be an optimal strategy for terrain path-following as in TerraVision [9], it was subsequently shown that ISS performance with random placement of tiles was only slightly worse. This was partly because tile retrieval time is much less than the latency in the ISS servers and the network transit time, and is therefore not currently a significant factor in overall performance. Random placement is simpler to implement and is expected to be satisfactory for many other applications. However, as discussed for the location of the ISS master, this conclusion may have to be revisited if the performance or the geographic distribution of system components changes significantly.
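One simple way to realize "neighboring tiles on different disks, each disk's own tiles close together" is sketched below; the round-robin striding rule is an illustrative stand-in, not the placement scheme analyzed in [9].

# Illustrative tile placement: scatter neighbouring tiles across different
# servers/disks so a request list hits as many disks as possible in parallel,
# while keeping each disk's own tiles in consecutive slots.
# The striding rule is an invented stand-in, not the scheme of [9].

def place_tiles(grid_w, grid_h, n_servers, disks_per_server):
    n_disks = n_servers * disks_per_server
    layout = {}                                   # (x, y) -> (server, disk, slot)
    slot_counter = [0] * n_disks
    for y in range(grid_h):
        for x in range(grid_w):
            disk_index = (x + y * (disks_per_server + 1)) % n_disks  # neighbours -> different disks
            server, disk = divmod(disk_index, disks_per_server)
            layout[(x, y)] = (server, disk, slot_counter[disk_index])
            slot_counter[disk_index] += 1         # consecutive slots keep a disk's tiles close
    return layout

layout = place_tiles(grid_w=8, grid_h=8, n_servers=2, disks_per_server=3)
# Tiles along a short path of travel land on distinct disks and can be read in parallel:
print([layout[(x, 3)] for x in range(4)])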
Avoiding Cell Loss
When initially implemented, the MAGIC internetwork exhibited very low throughput in certain configurations. One cause of the low throughput was found to be mismatches between the burst rates of components in the communications path. Examples of such rate mismatches were:
• An OC-3 workstation interface transmitting cells at full rate across the network to a 100 Mb/s TAXI interface on another w
