Medin, Jr.

US006370571B1

(10) Patent No.:     US 6,370,571 B1
(45) Date of Patent: Apr. 9, 2002

(54) SYSTEM AND METHOD FOR DELIVERING HIGH-PERFORMANCE ONLINE
     MULTIMEDIA SERVICES

(75) Inventor: Milo S. Medin, Jr., Sunnyvale, CA (US)

(73) Assignee: At Home Corporation, Redwood City, CA (US)

(*)  Notice: Subject to any disclaimer, the term of this patent is extended or
     adjusted under 35 U.S.C. 154(b) by 0 days.

(21) Appl. No.: 08/811,586

(22) Filed: Mar. 5, 1997

(51) Int. Cl.7: G06F 15/16; G06F 15/173

(52) U.S. Cl.: 709/218; 709/202; 709/219; 709/226; 709/248; 709/249; 711/118;
     711/122

(58) Field of Search: 709/200, 203, 219, 218, 249, 251, 202, 210, 247, 216,
     226, 213, 238, 227, 248; 345/333; 711/118, 117, 119, 122

(56) References Cited

U.S. PATENT DOCUMENTS
5,394,182 A    2/1995   Klappert et al. .......... 348/10
5,511,208 A *  4/1996   Boyles et al. ............ 709/223
5,608,446 A    3/1997   Carr et al. .............. 348/6
5,727,159 A *  3/1998   Kikinis .................. 709/246
5,734,719 A *  3/1998   Tsevdos et al. ........... 700/234
5,768,528 A *  6/1998   Stumm .................... 709/231
5,787,470 A *  7/1998   Desimone et al. .......... 711/124
5,793,980 A *  8/1998   Glaser et al. ............ 709/231
5,802,292 A *  9/1998   Mogul .................... 709/203
5,838,927 A * 11/1998   Gillon et al. ............ 709/247
5,852,713 A   12/1998   Shannon .................. 714/6
5,864,852 A *  1/1999   Luotonen ................. 713/201
5,883,901 A *  3/1999   Chiu et al. .............. 370/508
5,898,456 A    4/1999   Wahl ..................... 348/7
5,917,822 A *  6/1999   Lyles et al. ............. 335/296
5,918,013 A    6/1999   Mighdoll et al. .......... 709/217
5,935,207 A    8/1999   Logue et al. ............. 709/219
5,940,074 A *  8/1999   Britt, Jr. et al. ........ 345/333
5,956,716 A *  9/1999   Kenner et al. ............ 707/10
5,961,593 A   10/1999   Gabber et al. ............ 709/219
5,964,891 A   10/1999   Caswell et al. ........... 714/31
6,003,030 A   12/1999   Kenner et al. ............ 707/10
OTHER PUBLICATIONS

Declaration of Milo S. Medin, see paper 11, whole document.
Baentsch et al., Introducing Application-Level Replication and Naming into
today's Web, 5th International WWW Conference.*
Baentsch et al., Introducing Application-Level Replication and Naming into
today's Web, Fifth International WWW Conference, May 1996.*

(List continued on next page.)

Primary Examiner: Robert B. Harrell
Assistant Examiner: William C. Vaughn, Jr.
(74) Attorney, Agent, or Firm: Fenwick & West LLP
(57) ABSTRACT

Disclosed is a scalable, hierarchical, distributed network architecture and
processes for the delivery of high-performance, end-to-end online multimedia
services, including Internet services such as World Wide Web access. The
network architecture connects a high-speed private backbone to multiple
network access points of the Internet, to a network operation center, to a
back office system, and to multiple regional servers in regional data centers.
Each of the regional servers connects to several caching servers in modified
head-ends, which in turn connect via fiber optics to many neighborhood nodes.
Finally, each node connects via coaxial cable to multiple end-user systems.
The processes include those for replicating and caching frequently accessed
content, and multicasting content customized per region or locality.

17 Claims, 13 Drawing Sheets

OTHER PUBLICATIONS

Malpani et al., Making World Wide Web Caching Servers Cooperate, Fourth
International WWW Conference, Dec. 1995.*
Jeffrey et al., Proxy-sharing Proxy Servers, Emerging Technologies and
Applications in Communications, 1996.
Luotonen et al., World-Wide Web Proxies, May 1994.*
M. Medin, "Transforming the Net with Broadband Cable Data," Smart Valley Talk,
pp. 1-3, Feb. 6, 1996.
Internet Engineering Task Force, Requirements for Internet Hosts -
Communication Layers, Request for Comments: 1122 [online], [retrieved on
Jun. 22, 2001]. Retrieved from the Internet <URL:
http://community.roxen.com/developers/idocs/rfc/rfc1122.txt>, 107 pages.
Network Working Group, Multicast Extensions to OSPF, Request for Comments:
1584 [online], [retrieved on Jun. 22, 2001]. Retrieved from the Internet <URL:
http://community.roxen.com/developers/idocs/rfc/rfc1584.txt>, 90 pages.
Lucien Rhodes, "The Race For More Bandwidth", Wired, Jan. 1996, pp. 140-145
and 192.

* cited by examiner

[Drawing Sheets 1-10: FIG. 1 (network architecture 100, including regional
network 119), FIG. 2 (private backbone and connecting routers), FIG. 3
(regional data center), FIG. 4 (modified head-end), FIG. 5 (regional computer),
FIG. 6 (caching computer), FIG. 7 (network operations center), FIG. 8 (central
computer), FIG. 9 (back office system), FIG. 10 (back office computer)]

[Drawing Sheet 11: FIG. 11, flow diagram 1100 for providing requested content
to an end-user system. Blocks: 1102 End-user requests content from remote
source; 1104 Content at nearest caching server?; 1106 Send content from nearest
caching server to end-user; 1108 Content at nearest regional server?; 1110 Send
content from nearest regional server to nearest caching server; 1112 Store
content at nearest caching server; 1114 Direct connection to remote source?;
1116 Retrieve via direct connection content from remote source to nearest
regional server; 1118 Store content at nearest regional server; 1122 Retrieve
via Internet content from remote source to nearest regional server.]

[Drawing Sheets 12-13: FIG. 12 (flow diagram of replicating data from a content
provider), FIG. 13 (flow diagram of multicasting content customized to region
or locality)]

SYSTEM AND METHOD FOR DELIVERING HIGH-PERFORMANCE ONLINE MULTIMEDIA SERVICES

I. BACKGROUND TO THE INVENTION

1. Technical Field

This invention relates to the high-performance end-to-end delivery of online
multimedia services, including Internet services such as World Wide Web (WWW)
access. The invention combines a scalable, hierarchical, distributed network
architecture and processes for replicating, caching, and multicasting.

2. Description of Related Art
Cable modems enable an end-user to make a high-bandwidth connection to a
network system. For example, using a digital modulation technique called
quadrature phase-shift keying (QPSK), a downstream connection with a bandwidth
of about 10 megabits per second may be made by occupying a single 6 MHz channel
out of the 750 MHz total coaxial capacity typical in most modern cable
television systems, and an upstream connection with 768 kilobits per second may
be made by occupying 600 kHz of that capacity. The bandwidth may be increased
or decreased by occupying more or less bandwidth as desired. Other modulation
techniques are also available, such as quadrature amplitude modulation (QAM).
The technology for such connections is available, for example, from companies
such as Motorola, the LANcity division of Bay Networks, and Hewlett-Packard.
Unlike telecommunications connections that use dedicated switched lines, cable
modem connections use a shared medium and so can be continuously "on" without
substantial waste of resources.
Although cable modems provide a practical high-speed connection from the
end-user to the network, such a high-speed connection is not enough by itself
to deliver high-performance online services, especially with regard to Internet
services such as World Wide Web (WWW) access. In order to deliver
high-performance end-to-end Internet service, solutions are needed to the
problems of redundant data traffic, unreliable network performance, and
scalability.

The Internet is a publicly accessible internetwork of networks. Internet
Service Providers (ISPs) provide Internet access to businesses and consumers
via points of presence (POPs) that are connected to network access points
(NAPs), which are entry points to the Internet.

One of the Internet's architectural weaknesses, and the cause of many of its
current performance issues, is its highly redundant data traffic. For example,
when an end-user downloads a video clip from the popular CNN (Cable News
Network) Web site, data packets containing bits of the video clip are "pulled"
all the way across the Internet: from the CNN WWW server, to CNN's ISP, through
potentially several paths across the Internet including multiple interchanges
on the Internet backbone, to the end-user's ISP, and finally to the end-user's
computer system. If the end-user's next-door neighbor soon thereafter requests
the very same video clip from the CNN Web site, she also pulls the bits of the
clip all the way across the Internet. The result is that many of the same bits
are moved over and over again over the same communication paths going to CNN's
ISP, across the Internet, and to the end-user's ISP.
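
The redundancy described above is exactly what a cache placed near the
end-users avoids. The following is a minimal, hypothetical sketch rather than
the patented system itself; the names and the dictionary-backed cache are
invented for illustration.

    # origin_fetches counts trips all the way across the Internet.
    origin_fetches = 0
    cache: dict[str, bytes] = {}    # shared cache near the end-users

    def fetch_from_origin(url: str) -> bytes:
        """Simulate pulling the content across the Internet from the origin server."""
        global origin_fetches
        origin_fetches += 1
        return b"<video clip bytes>"

    def request(url: str) -> bytes:
        """Serve from the nearby cache when possible; otherwise fetch once from the origin."""
        if url not in cache:
            cache[url] = fetch_from_origin(url)
        return cache[url]

    request("http://www.cnn.com/clip.mpg")   # first neighbor: crosses the Internet
    request("http://www.cnn.com/clip.mpg")   # second neighbor: served locally
    assert origin_fetches == 1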
Another weakness of the Internet is its unreliable performance. The Internet
performs in an intermittent or otherwise unreliable manner due in part to
traffic bottlenecks which constrict the flow of data in the system.
Unfortunately, there is no coherent scheme to deal with such bottlenecks
because of the decentralized nature of the management of the Internet.

Yet another weakness of the Internet is its lack of security. This lack of
security is particularly significant because it tends to inhibit electronic
transactions and is in part due to the public nature of the Internet.

In order to provide for future growth of a network, it is important that the
network architecture and operation be scalable to larger size and/or higher
speeds. If the architecture is not readily scalable to a larger size, network
performance will suffer when the network is expanded. If the network is not
readily scalable to higher speeds, performance will suffer when network traffic
increases.
II. SUMMARY OF THE INVENTION

The present invention relates to a system and method for delivering
high-performance online multimedia services, including Internet services such
as WWW access, that satisfies the above-described needs. The system and method
combine a scalable, hierarchical, distributed network architecture and
processes for replicating and caching frequently accessed multimedia content
within the network, and multicasting content customized per region or locality.

The digital network architecture couples a high-speed backbone to multiple
network access points (NAPs) of the Internet, to a network operation center, to
a back office system, and to multiple regional data centers. Each regional data
center couples to several modified head-ends, which in turn couple via fiber
optics to many neighborhood optoelectronic nodes. Finally, each node couples
via coaxial cable and cable modems to multiple end-user systems. The
architecture separates the public Internet from a private network with enhanced
security to facilitate electronic transactions.

The backbone provides a transport mechanism that can be readily scaled to
higher speeds. The backbone also enables bandwidth to the Internet to be
increased, without reconfiguring the network structure, either by increasing
the speed of the existing couplings at the NAPs or by adding a new coupling to
a NAP. Finally, the backbone allows service to be extended to a new area, again
without reconfiguring the network structure, by simply coupling a new regional
data center (RDC) to the backbone.

The network operation center (NOC) is a centralized control center which
efficiently coordinates the management of the privately controlled network. The
network management system (NMS) server at the NOC coordinates NMS clients at
the RDCs. The management of the private network enables the optimization of
performance. The hierarchical nature of the management allows consistent system
configuration and management, which results in a high level of overall network
security and reliability.

Certain frequently accessed information or content is cached within and
replicated amongst the RDCs. This reduces traffic redundancy since an
end-user's request for data that has been so replicated or cached may be
fulfilled by the "nearest" (most closely coupled) RDC. In addition, the RDCs
are able to multicast content that has been customized for the region to
end-users in the region. This further reduces redundant traffic. Finally, the
RDCs contain NMS clients that monitor and proactively manage network
performance in the region so that traffic bottlenecks may be identified and
overcome. The NMS detects and locates faults throughout the network, correlates
failures, and can report faults to the appropriate repair entities, create
trouble tickets, and dispatch repair crews.

Frequently accessed content is also cached within the modified head-ends. This
further reduces redundant traffic because an end-user's request for content
that has been so cached may be fulfilled by the "nearest" modified head-end.
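
Read together with FIG. 11, this suggests a tiered lookup: try the caching
server in the modified head-end first, then the regional server, and only then
the remote source, caching the content on the way back toward the end-user. The
Python sketch below is a simplified, hypothetical model of that hierarchy; the
class and method names are invented, and details such as expiry, consistency,
and multicast are omitted.

    class Tier:
        """One level of the hierarchy: caching server, regional server, or remote source."""

        def __init__(self, name: str, parent: "Tier | None" = None) -> None:
            self.name = name
            self.parent = parent
            self.store: dict[str, bytes] = {}

        def get(self, url: str) -> bytes:
            if url in self.store:                 # hit at this tier
                return self.store[url]
            if self.parent is not None:           # miss: ask the next tier up
                data = self.parent.get(url)
            else:                                 # top tier stands in for the remote source
                data = b"<content from remote source>"
            self.store[url] = data                # cache on the way back down
            return data

    remote = Tier("remote source")
    regional = Tier("regional server", parent=remote)
    caching = Tier("caching server", parent=regional)

    caching.get("http://example.com/page")        # fills regional and caching tiers
    assert "http://example.com/page" in regional.store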
Finally, the hierarchical nature of the private network architecture enables
multicast data to be efficiently customized for each region receiving the
multicast.

III. BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a diagram of a scalable, hierarchical, distributed network
architecture for delivering high-performance online multimedia services
constructed according to a preferred embodiment of the present invention.
FIG. 2 is a diagram of a private backbone and connecting routers in a preferred
embodiment of the present invention.
FIG. 3 is a diagram of a regional data center in a preferred embodiment of the
present invention.
FIG. 4 is a diagram of a modified head-end in a preferred embodiment of the
present invention.
FIG. 5 is a diagram of a regional computer within a regional data center in a
preferred embodiment of the present invention.
FIG. 6 is a diagram of a caching computer within the modified head-end in a
preferred embodiment of the present invention.
FIG. 7 is a diagram of a network operations center in a preferred embodiment of
the present invention.
FIG. 8 is a diagram of a central computer within a network operations center in
a preferred embodiment of the present invention.
FIG. 9 is a diagram of a back office system in a preferred embodiment of the
present invention.
FIG. 10 is a diagram of a back office computer within a back office system in a
preferred embodiment of the present invention.
FIG. 11 is a flow diagram of a preferred method for providing data requested by
a user to their system 124.
FIG. 12 is a flow diagram of a preferred method of replicating data from a
content provider.
FIG. 13 is a flow diagram of a preferred method of multicasting content that is
customized to region or locality.

IV. DESCRIPTION OF THE PREFERRED EMBODIMENTS

The preferred embodiments of the present invention are now described with
reference to the FIGS.

FIG. 1 is a diagram of a scalable, hierarchical, distributed network
architecture for delivering high-performance online multimedia services
constructed according to a preferred embodiment of this invention. In the
architecture of the present invention, the distributed public Internet (top
portion) 170 is separated from a hierarchical private network (bottom portion)
180 under private control.
A high-speed, private backbone 102 is connected via routers (R) 104 to network
access points (NAPs) 106 of the Internet. In a preferred embodiment of the
present invention, the private backbone 102 runs asynchronous transfer mode
(ATM) service over bandwidth leased from commercial providers such as MCI
Communications, AT&T, or Sprint. ATM is a high-speed, cell-based service which
allows different types of traffic to be supported at different levels of
service. The routers 104 are Internet Protocol (IP) routers such as those
commercially developed by Cisco Systems.

The NAPs 106 are access points into the Internet to which a number of routers
can be connected. NAPs 106 are located, for example, in San Francisco, Chicago,
and Washington, D.C. A typical NAP 106 is a fiber distributed data interface
(FDDI) ring which connects to one or more tier 1 (national) backbones 108 of
the Internet, such as the commercially operated backbones of Advanced Network &
Services (ANS), MCI Communications, or Sprint. FDDI is a high-speed Token Ring
network designed specifically to use optical fibers as connecting media.

Each of these tier 1 backbones 108 connects to one or more tier 2 (regional)
networks 110, which in turn connect to one or more tier 3 (local) networks 112.
Finally, each tier 3 network 112 connects to one or more local area networks
(LANs) 114. A LAN 114 may include various servers, such as, for example, the
World Wide Web server which provides the popular ESPN SportsZone web site for
sports information. There may also be private peering between networks in the
same tier. For example, a tier 1 network 108 may have a connection to another
tier 1 network.

Note that in FIG. 1 the networks above the NAPs 106 (i.e., the tier 1 backbones
108, the tier 2 networks 110, the tier 3 networks 112, and the LANs 114) are
part of the publicly accessible Internet 170. Thus, for example, information
made available on their WWW servers (http servers) may be accessed by client
computer systems (http clients) connected to the Internet. Of course, FIG. 1
shows only a simplification of the complexity of the Internet 170. For example,
a tier 1 network 108 may connect to various dial-up providers to which
end-users may connect via modems.
The private backbone 102 is also connected via routers 116 to one or more
regional servers 302 (see FIG. 3) at regional data centers (RDCs) 118. Each of
the RDCs 118 is connected to one or more local servers 402 (see FIG. 4) at
modified head-ends 120 within a hybrid fiber-coax (HFC) distribution system.
Each of the local servers 402 at the modified head-ends 120 is connected (via
fiber optics) to many neighborhood optoelectronic (O/E) nodes 122 within the
HFC distribution system. There are typically over a hundred nodes 122 connected
to each modified head-end 120, even though FIG. 1 shows only a few for
convenience and ease of understanding. Finally, the nodes 122 are connected
(via coaxial cable and cable modems) to many end-user systems 124 located
typically within people's homes or offices. There are typically over a hundred
end-user systems 124 connected to each node 122, even though FIG. 1 shows only
a few for convenience and ease of understanding.
In addition, at least one of the routers 116 connects the private backbone 102
to a network operations center (NOC) 126 and a back office system (BOS) 128.
The NOC 126 is the centralized control center which efficiently coordinates the
management of the private network 180. The BOS 128 includes software for
subscriber management and billing. The NOC 126 and the BOS 128 are also
connected together so that they can communicate with each other without going
through the router 116.

Furthermore, the private backbone 102 connects via an additional router 130 to
a particular LAN 114 in order to give the network 180 more direct access to
content on that particular LAN 114. The particular LAN 114, for example, may be
one which houses a server for a frequently accessed commercial WWW site such as
the ESPN SportsZone site. In such a case, data from that LAN 114 may travel
towards an end-user 124 either via the Internet 170 (for example, on a path
through tier 3 112, tier 2 110, tier 1 108, NAP 106, and router 104) or via the
short-cut through the additional router 130 which bypasses the Internet 170.
Finally, the private backbone 102 may peer with another private network, such
as a tier 1 network 108. This private peering is implemented via a connection
between the two networks. Peering generally involves a coupling between two
networks on the same hierarchical level.

Note that in FIG. 1 the networked objects below the NAPs 106 (i.e., the private
backbone 102, the routers 104, 116, and 130, the RDCs 118, the modified
head-ends 120, the nodes 122, the end-user systems 124, the NOC 126, and the
BOS 128) are part of a private network 180 under private control.
FIG. 2 is a diagram of the private backbone 102 and connecting routers 104,
116, and 130 in a preferred embodiment of this invention. In this embodiment,
the private backbone 102 is based on an interconnected network of switches 202
capable of supporting Asynchronous Transfer Mode (ATM) service.

The ATM service is a high-speed, cell-based switching technique which provides
bandwidth on demand. This capability of the ATM service to provide bandwidth on
demand allows each type of traffic to be supported at an appropriate level of
service, and thus makes possible the integration of voice, video, and data
traffic into one network. The physical layer under the ATM service (i.e., the
connections between the ATM switches 202) is typically provided by Synchronous
Optical Network/Synchronous Digital Hierarchy (SONET/SDH) technology. Widely
supported speeds of SONET/SDH currently include 155 Mbps, 622 Mbps, and 2.488
Gbps.
The switches 202 connect via routers 104 to the NAPs 106. Routers 104 currently
comprise a commercially available Internet Protocol (IP) router and an
interface board to interface between the ATM service and the IP layer. For
example, the IP router may be Cisco Systems' model 7505 router, and the
interface board may be an "AIP" board that connects to the IP router. In
effect, the AIP board couples the backbone 102 to the IP router. Such a
configuration is available from Cisco Systems, San Jose, Calif.

The switches 202 also connect via routers 116 to the high-availability (H/A)
regional servers 302 (see FIG. 3) at the RDCs 118. These routers 116 also
comprise an Internet Protocol (IP) router, such as the Cisco 7505 router, and
an interface board, such as the AIP board. In addition to connecting to the
RDCs 118, at least one of these routers 116 also connects to the NOC 126 and
the BOS 128 in order to provide a communications channel for network
management.

Finally, the switches 202 may connect via routers 130 directly to particular
LANs 114 in order to give end-user systems 124 more direct access to content on
those particular LANs 114. These routers 130 comprise an IP router, such as
Cisco Systems' 7200 router, and an interface board, such as the AIP board.
FIG. 3 is a diagram of a regional data center (RDC) 118 in a preferred
embodiment of this invention. The RDC 118 includes an H/A regional server 302,
a terminal server 308, a high-speed switch 310, and various blocks 314.

The regional server 302 may include a cluster of computers for high
availability and performance. In this embodiment, the regional server 302
comprises two regional computers 304 which are both able to access a regional
disk array 306 via a regional array controller 305. The regional computers 304
may be, for example, based on servers commercially available from Sun
Microsystems, and the high-speed connections may be, for example, connections
based on the Fiber Channel standard. The regional computers 304 and the
regional disk array 306 may be configured such that they provide high
availability according to one of the various RAID levels. In RAID (Redundant
Array of Independent Disks) Level 1, redundancy is provided by mirroring data
from one drive to another. In RAID Level 5, data is stored across multiple
drives, parity is generated, and the parity is distributed across the drives in
the array 306. RAID levels are well known in the computer industry.
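
For readers unfamiliar with the parity idea behind RAID Level 5, a minimal
illustration follows (this is not code from the patent): parity is the bytewise
XOR of the data blocks, so any single lost block can be reconstructed from the
surviving blocks plus the parity block.

    def xor_blocks(blocks: list[bytes]) -> bytes:
        """Bytewise XOR of equal-length blocks."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    data = [b"AAAA", b"BBBB", b"CCCC"]        # blocks striped across three drives
    parity = xor_blocks(data)                 # stored on another drive (rotating in RAID 5)

    # One drive fails: rebuild its block from the surviving blocks plus parity.
    rebuilt = xor_blocks([data[0], data[2], parity])
    assert rebuilt == data[1]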
The two regional computers 304 each have a connection 320 to the terminal
server (TS) 308. The terminal server 308 connects via a modem to the public
switched telephone network (PSTN) to provide an alternative backup
communication and control channel between the RDC 118 and the NOC 126. A
terminal server is generally a computer capable of either input or output to a
communication channel. Here, the terminal server 308 is capable of both
receiving input from and sending output to the PSTN.

The regional computers 304 also each have a connection 322 to the high-speed
switch 310. These connections 322 may be made, for example, using 100BaseT
Ethernet (which is well known in the industry and can transfer data at 100
Mbps), and the high-speed switch 310 may be capable of switching data at
gigabit-per-second speed.
The high-speed switch 310 has a connection via one of the routers 116 to one of
the ATM switches 202 of the private backbone 102. The high-speed switch 310
also has one or more connections via blocks 314 to modified head-ends 120 or to
a regional network 119 (which in turn connects to several modified head-ends
120). Each block 314 may comprise either an ATM switch, a router, or a
point-to-point connection, as appropriate, depending on the system to which the
high-speed switch 310 is connecting. The blocks 314 may also have connections
to the terminal server 308, as shown by line 324.
FIG. 4 is a diagram of a modified head-end 120 in a preferred embodiment of
this invention. The modified head-end 120 includes a caching server 402, a
switch 404, many head-end modems 406 and multiplexers 407, a router 408, a
terminal server (TS) 410, a monitor device 412, and analog head-end equipment
414.

In this embodiment, the caching server 402 comprises two interconnected caching
computers 403 which may be, for example, based on computers commercially
available from Silicon Graphics Inc. of Mountain View, Calif. Two caching
computers 403 are used to provide more efficient and robust caching service.
For example, the cache may be partitioned between the two computers 403 by
having data with URLs of an odd number of characters cached at one computer 403
and data with URLs of an even number of characters cached at the other computer
403. Moreover, if one computer 403 goes down, then requests may be sent (by a
JavaScript loaded into the browser) to the other computer 403. Thus, caching
would continue even when one of the two computers 403 is down.
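
A hypothetical sketch of that partitioning and failover rule follows; the
function name, the odd/even-to-computer assignment, and the is_up health check
are invented for illustration, since the patent describes the behavior but
gives no code.

    def pick_caching_computer(url: str, is_up: dict[int, bool]) -> int:
        """Return 0 or 1: which of the two caching computers 403 should handle this URL."""
        preferred = len(url) % 2          # odd-length URLs -> computer 1, even-length -> computer 0
        if is_up.get(preferred, False):
            return preferred
        return 1 - preferred              # failover to the surviving computer

    assert pick_caching_computer("http://a.example/xy", {0: True, 1: True}) == 1   # odd length, both up
    assert pick_caching_computer("http://a.example/xy", {0: True, 1: False}) == 0  # failover when 1 is down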
The switch 404 may be, for example, a full duplex fast Ethernet switch. A full
duplex fast Ethernet switch 404 can support data flowing in both directions at
the same time (for example, between the caching server 402 and the head-end
modems 406). The connections between the caching server 402 and the switch 404
may be made, for example, using 100BaseT Ethernet.

The head-end modem 406 modulates analog carrier signals using the digital data
received from the switch 404 and sends the modulated analog signals to the
multiplexer 407. The multiplexer 407 sends the modulated analog signals, along
with TV signals received from the analog head-end equipment, downstream to a
node 122 of the distribution network.
Conversely, the multiplexer 407 receives an upstream modulated analog signal
from the node 122 and sends the upstream signal to the modem 406. The modem 406
demodulates the modulated analog signals received from the multiplexer 407 to
retrieve digital data that is then communicated to the switch 404.

There is need for typically over a hundred such head-end modems 406, one for
each of the over a hundred nodes 122 typically supported by the modified
head-end 120. Such a head-end modem 406 may be implemented, for example, with
the LANcity head-end modem from the LANcity division of Bay Networks. The
LANcity division is located in Andover, Mass. Alternatively, communication with
the end-user system 124 may be asymmetric in that the return path from the
end-user system 124 may be via the public switched telephone network (PSTN) or
some other communication channel.
The router 408 connects to the switch 404 and to an RDC 118 or a regional
network 119 (which in turn connects to an RDC 118). The router 408 may be
implemented, for example, using the 7505 router from Cisco Systems, and the
connection between the router 408 and the fast switch 404 may be implemented,
for example, using 100BaseT Ethernet.

The terminal server (TS) 410 is connected to the caching server 402, the switch
404, the router 408, and the PSTN. The terminal server 410 provides, via the
PSTN, an alternative backup communication and control channel between the
modified head-end 120 and the RDC 118 or the NOC 126.
The monitor device 412 is a "synthetic load" saddled onto the digital network
180 via the router 408. The monitor 412 monitors the analog cable television
distribution system via the analog head-end equipment 414. The analog head-end
equipment 414 typically receives local television (TV) signals via a
terrestrial microwave dish or a satellite dish. These TV signals are fed into
the multiplexers 407 and sent, along with the modulated analog signals from the
cable modems 406, to nodes 122 of the distribution network. By communicating
with the monitor 412, the NOC 126 of the digital network 180 is able to access
the analog network management gear by "remote control."
FIG. 5 is a diagram of a regional computer 304 within the RDC 118 in a
preferred embodiment of this invention. The regional computer 304 includes
hardware devices 502 and software devices in a memory module 504 connected by a
bus system 506.

The hardware devices 502 include a central processing unit (CPU) 508, for
example, an Intel 80x86, Motorola PowerPC, or Sun SPARC processor,
communicating with various input/output (I/O) devices, such as a switch I/O 510
that connects to the high-speed switch 310, a disk I/O 512 that connects to the
regional array c