(12) United States Patent
Malcolm et al.

(10) Patent No.: US 7,103,794 B2
(45) Date of Patent: Sep. 5, 2006
(54) NETWORK OBJECT CACHE ENGINE

(75) Inventors: Michael Malcolm, Woodside, CA (US); Robert Zarnke, Waterloo (CA)

(73) Assignee: CacheFlow, Inc., Sunnyvale, CA (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 1381 days.
(21) Appl. No.: 09/093,533

(22) Filed: Jun. 8, 1998

(65) Prior Publication Data
     US 2002/0004917 A1    Jan. 10, 2002

(51) Int. Cl.
     G06F 11/00 (2006.01)

(52) U.S. Cl. ............................ 714/4; 709/203; 711/118

(58) Field of Classification Search ..................... 714/4; 709/201-203, 212-229; 707/100-104; 711/100-154; 345/557
     See application file for complete search history.
(56) References Cited

U.S. PATENT DOCUMENTS

5,452,440 A    9/1995  Salsburg
5,452,447 A    9/1995  Nelson et al. .............. 395/650
5,564,011 A   10/1996  Yammine et al. ............. 714/15
5,596,774 A    1/1997  Dao et al. ................. 395/610

(Continued)
FOREIGN PATENT DOCUMENTS

EP    0359384 A3       3/1990
EP    0359384 A2       3/1990
EP    0359384 B1       3/1990
EP    0481716 A2       4/1992
EP    0 836 145 A2     4/1998
EP    0 836 145 A2     4/1998
WO    WO 97/01765 A1   1/1997
WO    WO 97/30539      8/1997
OTHER PUBLICATIONS

Dias G. et al.: "A Smart Internet Caching System," Proceedings of the INET 96 Conference, Montreal, Canada, Jun. 24-28, 1996.

(Continued)

Primary Examiner: James P. Trammell
Assistant Examiner: Mary Wang
(74) Attorney, Agent, or Firm: Swernofsky Law Group PC
(57) ABSTRACT

The invention provides a method and system for caching information objects transmitted using a computer network. A cache engine determines directly when and where to store those objects in a memory (such as RAM) and mass storage (such as one or more disk drives), so as to optimally write those objects to mass storage and later read them from mass storage, without having to maintain them persistently. The cache engine actively allocates those objects to memory or to disk, determines where on disk to store those objects, retrieves those objects in response to their network identifiers (such as their URLs), and determines which objects to remove from the cache so as to maintain sufficient operating space. The cache engine collects information to be written to disk in write episodes, so as to maximize efficiency when writing information to disk and so as to maximize efficiency when later reading that information from disk. The cache engine performs write episodes so as to atomically commit changes to disk during each write episode, so the cache engine does not fail in response to loss of power or storage, or other intermediate failure of portions of the cache. The cache engine also stores key system objects on each one of a plurality of disks, so as to maintain the cache holographic in the sense that loss of any subset of the disks merely decreases the amount of available cache. The cache engine also collects information to be deleted from disk in delete episodes, so as to maximize efficiency when deleting information from disk and so as to maximize efficiency when later writing to those areas having former deleted information. The cache engine responds to the addition or deletion of disks as the expansion or contraction of the amount of available cache.

57 Claims, 4 Drawing Sheets
Netflix, Inc. - Ex. 1001, Page 000001
IPR2021-01319 (Netflix, Inc. v. CA, Inc.)
U.S. PATENT DOCUMENTS

5,649,154 A      7/1997  Kumar et al.
5,696,932 A *   12/1997  Smith ...................... 711/118
5,696,948 A *   12/1997  Cruz et al. ................ 395/550
5,737,635 A      4/1998  Daines et al. .............. 710/52
5,752,022 A      5/1998  Chiu et al. ................ 395/610
5,778,168 A *    7/1998  Fuller ..................... 714/18
5,781,785 A      7/1998  Rowe et al. ................ 395/774
5,787,470 A      7/1998  DeSimone et al. ............ 711/124
5,802,292 A      9/1998  Mogul ...................... 395/200.33
5,819,045 A     10/1998  Raman et al.
5,822,539 A     10/1998  Van Hoff ................... 395/200.66
5,822,757 A *   10/1998  Chi ........................ 711/129
5,826,253 A     10/1998  Bredenberg ................. 707/2
5,852,717 A     12/1998  Bhide et al. ............... 395/200.33
5,860,106 A      1/1999  Domen et al.
5,864,837 A      1/1999  Maimone .................... 707/1
5,864,852 A      1/1999  Luotonen ................... 713/201
5,870,765 A      2/1999  Bauer et al. ............... 709/203
5,870,769 A      2/1999  Freund ..................... 707/501
5,878,218 A      3/1999  Maddalozzo et al. .......... 395/200.43
5,878,223 A      3/1999  Becker et al. .............. 709/223
5,884,046 A      3/1999  Antonov .................... 395/200.68
5,887,151 A      3/1999  Raz et al.
5,892,909 A      4/1999  Grasso et al. .............. 709/201
5,892,937 A      4/1999  Caccavale
5,896,506 A      4/1999  Ali et al. ................. 395/200.43
5,898,833 A      4/1999  Kidder ..................... 709/234
5,905,999 A      5/1999  Liu et al.
5,913,033 A      6/1999  Grout ...................... 395/200.49
5,918,013 A      6/1999  Mighdoll et al. ............ 395/200.47
5,931,904 A      8/1999  Banga et al. ............... 709/219
5,933,849 A      8/1999  Dutta et al. ............... 711/118
5,935,213 A *    8/1999  Rananand et al. ............ 709/234
5,946,682 A      8/1999  Wolfe ...................... 707/5
5,948,062 A      9/1999  Tzelnic .................... 709/219
5,950,205 A      9/1999  Aviani, Jr. ................ 707/103
5,954,795 A      9/1999  Tomita et al. .............. 709/218
5,961,602 A     10/1999  Thompson et al. ............ 709/229
5,964,830 A     10/1999  Durrett .................... 709/200
5,978,848 A *   11/1999  Maddalozzo, Jr. et al. ..... 709/227
6,009,466 A *   12/1999  Axberg et al. .............. 709/220
6,012,085 A      1/2000  Yohe et al. ................ 709/217
6,012,126 A *    1/2000  Aggarwal et al. ............ 711/133
6,014,671 A      1/2000  Castelli et al. ............ 707/101
6,016,512 A      1/2000  Huitema .................... 709/245
6,026,474 A      2/2000  Carter et al. .............. 711/202
6,085,193 A      7/2000  Malkin et al. .............. 707/10
6,098,096 A      8/2000  Tsirigotis et al. .......... 709/213
6,209,020 B1 *   3/2001  Angle et al. ............... 709/108
6,223,256 B1     4/2001  Gaither .................... 711/134
OTHER PUBLICATIONS

Glassman S.: "A Caching Relay for the World Wide Web," Computer Networks and ISDN Systems, vol. 27, No. 2, Nov. 1994, pp. 165-173.
Jiang Z. et al.: "Prefetching Links on the WWW," '97 IEEE International Conference on Communications, Towards the Knowledge Millennium, ICC '97, Montreal, Canada, Jun. 8-12, 1997, pp. 483-489, vol. 1.
Dingle A. et al.: "Web Cache Coherence," Computer Networks and ISDN Systems, vol. 28, No. 11, May 1996, pp. 907-920.
Nabeshima M.: "The Japan Cache Project: An Experiment on Domain Cache," Computer Networks and ISDN Systems, vol. 29, No. 8-13, Sep. 1997, pp. 987-995.
Wang Z. et al.: "Prefetching in World Wide Web," IEEE Globecom 1996, Communications: The Key to Global Prosperity, Global Internet '96, London, UK, Nov. 18-22, pp. 28-32.
Chinen K. et al.: "An Interactive Prefetching Proxy Server for Improvement of WWW Latency," Proceedings of the INET 97 Conference, Kuala Lumpur, Malaysia, Jun. 24-27, 1997.
Zheng Wang & Jon Crowcroft: "Prefetching in World Wide Web," Department of Computer Science, University College London WC1E 6BT, United Kingdom, 1996.
Ken-ichi Chinen et al.: "An Interactive Prefetching Proxy Server for Improvement of WWW Latency," Nara Institute of Science and Technology, Japan.
Alonso et al.: "Dynamic Hierarchical Caching in Large-Scale Distributed File Systems," Department of Computer Science, Princeton University, Princeton, NJ, 1992.
Azer Bestavros: "WWW Traffic Reduction and Load Balancing Through Server-Based Caching," Boston University, 1997.
Michael Baentsch et al.: "World Wide Web Caching: The Application-Level View of the Internet," IEEE Communications Magazine, pp. 170-178, Jun. 1997.
Syam Gadde et al.: "Reduce, Reuse, Recycle: An Approach to Building Large Internet Caches," pp. 93-98, May 1997.
A. Ortega et al.: "Soft Caching: Web Cache Management Techniques for Images," Multimedia Signal Processing, pp. 475-480, Jun. 1997.
Banga, G. et al.: "Optimistic Deltas for WWW Latency Reduction," Proceedings of the USENIX 1997 Annual Technical Conference, Anaheim, CA, USA, Jan. 6-10, 1997, pp. 289-303.
Yu, P. S. et al.: "Performance Study of a Collaborative Method for Hierarchical Caching in Proxy Servers," Computer Networks and ISDN Systems, NL, North-Holland Publishing, Amsterdam, vol. 30, No. 1-7, Apr. 1, 1998, pp. 215-224.
Braun, H. et al.: "Web Traffic Characterization: An Assessment of the Impact of Caching Documents from NCSA's Web Server," Computer Networks and ISDN Systems, vol. 28, pp. 37-51, Dec. 1995.
"Caching Email Document," May 11, 1999.
Carignano, A. Ortega et al.: "Soft Caching: Web Cache Management Techniques for Images," IEEE, 1997.
Duska, Bradley M. et al.: "The Measured Access Characteristics of World-Wide Web Client Proxy Caches."
"High Performance Web Caching White Paper," CacheFlow Inc., 1998.
Lee, Ron & Gary Tomlinson: "Workload Requirements for a Very High-Capacity Proxy Cache Design," pp. 1-6.
"Microsoft Proxy Server Version 2.0," Reviewer Guide, pp. 1-74.
Rosenfeld, Louis B.: "Automated Filtering of Internet Postings," XP000616769.
Smith, N.: "The UK National Web Cache: The State of the Art," Computer Networks and ISDN Systems, vol. 28, pp. 1407-1414, May 1996.
Gary Tomlinson, Drew Major, Ron Lee: "High-Capacity Internet Middleware: Internet Caching System Architecture Overview."

* cited by examiner
U.S. Patent    Sep. 5, 2006    Sheet 1 of 4    US 7,103,794 B2

[FIG. 1: block diagram of a network object cache engine in a computer network, showing a communication medium 112, devices 111, a processor, a network object 114, memory 103, and mass storage 104.]
U.S. Patent    Sep. 5, 2006    Sheet 2 of 4    US 7,103,794 B2

[FIG. 2: block diagram of a data structure for maintaining storage blocks for a set of cached network objects — a cache spanning mass storage and memory, write pointers 250 and 260, a block map, an object map, a hash table 350, blocks 210, and block descriptor and pointer fields 211-216.]
U.S. Patent    Sep. 5, 2006    Sheet 3 of 4    US 7,103,794 B2

[FIG. 3: data structures for caching network objects — a hash function 310 produces a hash signature 330 that selects a hash bucket 340 holding a block pointer and a secondary hash value.]

[FIG. 4: a set of original blocks committed to disk and corresponding control and data blocks modified in RAM.]
U.S. Patent    Sep. 5, 2006    Sheet 4 of 4    US 7,103,794 B2

[FIG. 5: flow diagram (elements 500, 510, 521, 524, 525) of a method for atomic writing of modified blocks to a single disk drive — modify the blockmap for data blocks, for control blocks, and for itself; write to mass storage; rewrite the root object.]

[FIG. 6: pointers and regions on mass storage — write pointer 250 with write region 620, delete pointer 260 with delete region 610, and collection region 630, with a minimum distance maintained between pointers along increasing addresses.]
NETWORK OBJECT CACHE ENGINE

BACKGROUND OF THE INVENTION

1. Field of the Invention

This invention relates to devices for caching objects transmitted using a computer network.

2. Related Art

In computer networks for transmitting information, information providers (sometimes called "servers") are often called upon to transmit the same or similar information to multiple recipients (sometimes called "clients") or to the same recipient multiple times. This can result in transmitting the same or similar information multiple times, which can tax the communication structure of the network and the resources of the server, and cause clients to suffer from relatively long response times. This problem is especially acute in several situations: (a) where a particular server is, or suddenly becomes, relatively popular; (b) where the information from a particular server is routinely distributed to a relatively large number of clients; (c) where the information from the particular server is relatively time-critical; and (d) where the communication path between the server and its clients, or between the clients and the network, is relatively slow.

One known method is to provide a device (such as a general purpose processor operating under software control) which acts as a proxy, receiving requests for information from one or more clients, obtaining that information from one or more servers, and transmitting that information to the clients in place of the servers. When the proxy has previously obtained the information from one or more servers, it can deliver that information to the client without having to repeat the request to the server. While this method achieves the goal of reducing traffic in the network and load on the server, it has the drawback that significant overhead is required by the local operating system and the local file system or file server of the proxy. This adds to the expense of operating the network and slows down the communication path between the server and the client.

There are several sources of delay, caused primarily by the proxy's surrendering control of its storage to its local operating system and local file system: (a) the proxy is unable to organize the information from the server in its mass storage for most rapid access; and (b) the proxy is unable to delete old network objects received from the servers and store new network objects received from the servers in a manner which optimizes access to mass storage.

In addition to the added expense and delay, the proxy's surrendering control of its storage restricts functionality of the proxy's use of its storage: (a) it is difficult or impossible to add to or subtract from storage allocated to the proxy while the proxy is operating; and (b) the proxy and its local file system cannot recover from loss of any part of its storage without using an expensive redundant storage technique, such as a RAID storage system.

Accordingly, it would be desirable to provide a method and system for caching information transmitted using a computer network, which is not subject to additional delay or restricted functionality from having to use a local operating system and local file system or file server. This advantage is achieved in an embodiment of the invention in which a cache engine coupled to the network provides a cache of transmitted objects, which it stores in memory and mass storage by taking direct control of when and where to store those objects in mass storage. The cache engine may store those objects holographically so as to continue operation smoothly and recover gracefully from additions to, failures of, or removals from, its mass storage.

SUMMARY OF THE INVENTION

The invention provides a method and system for caching information objects transmitted using a computer network. In the invention, a cache engine determines directly when and where to store those objects in a memory (such as RAM) and mass storage (such as one or more disk drives), so as to optimally write those objects to mass storage and later read them from mass storage, without having to maintain them persistently. The cache engine actively allocates those objects to memory or to disk, determines where on disk to store those objects, retrieves those objects in response to their network identifiers (such as their URLs), and determines which objects to remove from the cache so as to maintain appropriate free space.

In a preferred embodiment, the cache engine collects information to be written to disk in write episodes, so as to maximize efficiency when writing information to disk and so as to maximize efficiency when later reading that information from disk. The cache engine performs write episodes so as to atomically commit changes to disk during each write episode, so the cache engine does not fail in response to loss of power or storage, or other intermediate failure of portions of the cache. The cache engine stores key system objects on each one of a plurality of disks, so as to maintain the cache holographic in the sense that loss of any subset of the disks merely decreases the amount of available cache. The cache engine selects information to be deleted from disk in delete episodes, so as to maximize efficiency when deleting information from disk and so as to maximize efficiency when later writing new information to those areas of disk. The cache engine responds to the addition or deletion of disks as the expansion or contraction of the amount of available cache.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 shows a block diagram of a network object cache engine in a computer network.

FIG. 2 shows a block diagram of a data structure for maintaining storage blocks for a set of cached network objects.

FIG. 3 shows a block diagram of data structures for caching network objects.

FIG. 4 shows a block diagram of a set of original and modified blocks.

FIG. 5 shows a flow diagram of a method for atomic writing of modified blocks to a single disk drive.

FIG. 6 shows a block diagram of a set of pointers and regions on mass storage.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

In the following description, a preferred embodiment of the invention is described with regard to preferred process steps and data structures. Those skilled in the art would recognize after perusal of this application that embodiments of the invention can be implemented using general purpose processors and storage devices, special purpose processors and storage devices, or other circuits adapted to particular process steps and data structures described herein, and that implementation of the process steps and data structures described herein would not require undue experimentation or further invention.
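The atomic write-episode commit summarized above (and diagrammed in FIG. 5) can be sketched roughly as follows. This is an illustrative model only, not the patented implementation: `CacheStore`, `write_episode`, and the dict-based "disk" are hypothetical names and stand-ins. The key idea shown is that modified blocks are written to fresh locations first, and the root object is rewritten last, so an interrupted episode leaves the previous on-disk state intact.

```python
# Illustrative sketch of an atomic "write episode" (hypothetical names):
# staged blocks go to unused disk locations, and the change becomes
# visible only when the root object is rewritten as the final step.

class CacheStore:
    def __init__(self):
        self.disk = {}        # location -> block contents
        self.root = None      # location of the current root object
        self.next_loc = 0

    def _alloc(self):
        loc = self.next_loc
        self.next_loc += 1
        return loc

    def write_episode(self, modified_blocks):
        """Commit a batch of modified blocks atomically."""
        # Phase 1: write every modified block to a fresh location.
        locations = {}
        for name, data in modified_blocks.items():
            loc = self._alloc()
            self.disk[loc] = data
            locations[name] = loc
        # Phase 2: rewrite the root object last; until this step the
        # previous root (and therefore the previous state) is intact.
        new_root = self._alloc()
        self.disk[new_root] = locations
        self.root = new_root  # the single switch-over point

    def read(self, name):
        if self.root is None:
            return None
        return self.disk.get(self.disk[self.root].get(name))

store = CacheStore()
store.write_episode({"obj-a": b"hello"})
old_root = store.root
store.write_episode({"obj-a": b"hello", "obj-b": b"world"})
assert store.read("obj-b") == b"world"
assert old_root != store.root   # old state stays readable until switch-over
```

A crash between phase 1 and phase 2 simply wastes the freshly written blocks; the old root still describes a consistent cache, which matches the stated goal that intermediate failures do not corrupt operation.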
Caching Network Objects

FIG. 1 shows a block diagram of a network object cache engine in a computer network.

A cache engine 100 is coupled to a computer network 110, so that the cache engine 100 can receive messages from a set of devices 111 also coupled to the network 110.

In a preferred embodiment, the network 110 includes a plurality of such devices 111, interconnected using a communication medium 112. For example, where the network 110 includes a LAN (local area network), the communication medium 112 may comprise Ethernet cabling, fiber optic coupling, or other media. The network 110 preferably includes a network of networks, sometimes called an "internet" or an "intranet."

In a preferred embodiment, the devices 111 coupled to the network 110 communicate with the cache engine 100 using one or more protocols for communication, such as HTTP (hypertext transfer protocol) or one of its variants, FTP (file transfer protocol), or other protocols.

The cache engine 100 includes a processor 101 and a cache 102. In a preferred embodiment, the processor 101 comprises a general purpose processor operating under software control to perform the methods described herein and to construct and use the data structures described herein; as used herein, when the cache engine 100 performs particular tasks or maintains particular data structures, that reference includes condign operation by the processor 101 under control of software maintained in a program and data memory 103.

The cache 102 includes the program and data memory 103 and a mass storage 104. In a preferred embodiment, the mass storage 104 includes a plurality of disk drives such as magnetic disk drives, but may alternatively include optical or magneto-optical disk drives. As used herein, references to "disk" and "disk drives" refer to the mass storage 104 and its individual drives, even if the mass storage 104 and its individual drives do not include physical disk-shaped elements. The cache engine 100 is coupled to the network 110 and can receive and transmit a set of protocol messages 113 according to the one or more protocols with which the devices 111 communicate with the cache engine 100.

The cache engine 100 maintains a set of network objects 114 in the cache 102. The cache engine 100 receives protocol messages 113 from a set of "client" devices 111 to request network objects 114 to be retrieved from a set of "server" devices 111. In response thereto, the cache engine 100 issues protocol messages 113 to request those network objects 114 from one or more server devices 111, receives those network objects 114 and stores them in the cache 102, and transmits those network objects 114 to the requesting client devices 111.
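The request flow just described (serve a requested object from the cache when present, otherwise request it from a server device, store it, and forward it) can be sketched as below. The names `make_cache_engine` and `fetch_from_server` are illustrative stand-ins for the protocol-message exchange, not terms from the patent.

```python
# Minimal sketch of the cache engine's request flow (hypothetical names):
# on a hit, serve from the cache; on a miss, fetch from the origin
# server device, store the object, and forward it to the client.

def make_cache_engine(fetch_from_server):
    cache = {}                         # URL -> object contents

    def handle_request(url):
        if url in cache:               # cache hit: no server round trip
            return cache[url]
        obj = fetch_from_server(url)   # cache miss: ask the server device
        cache[url] = obj               # keep it for later requesters
        return obj

    return handle_request

# Usage: the second request for the same URL never reaches the server.
calls = []
def fake_server(url):
    calls.append(url)
    return f"<object at {url}>"

handle = make_cache_engine(fake_server)
first = handle("http://example.com/a")
second = handle("http://example.com/a")
assert first == second and calls == ["http://example.com/a"]
```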
As used herein, the terms "client" and "server" refer to a relationship between the client or server and the cache engine 100, not necessarily to particular physical devices 111. As used herein, one "client device" 111 or one "server device" 111 can comprise any of the following: (a) a single physical device 111 executing software which bears a client or server relationship to the cache engine 100; (b) a portion of a physical device 111, such as a software process or set of software processes executing on one hardware device 111, which portion of the physical device 111 bears a client or server relationship to the cache engine 100; or (c) a plurality of physical devices 111, or portions thereof, cooperating to form a logical entity which bears a client or server relationship to the cache engine 100. The phrases "client device" and "server device" refer to such logical entities and not necessarily to particular individual physical devices 111.
The cache engine 100 preserves the network objects 114 in the cache 102, and reuses those network objects 114 by continuing to serve them to client devices 111 which request them. When the cache 102 becomes sufficiently full, the cache engine 100 removes network objects 114 from the cache 102. For example, the cache engine 100 can remove objects as described herein in the section "Removing Objects from Cache."

In a preferred embodiment, the cache engine 100 uses the memory 103 as a cache for those network objects 114 maintained using the mass storage 104, while using the combined memory 103 and mass storage 104 as the cache 102 for those network objects 114 available on the network 110.
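This two-level arrangement (memory as a cache over mass storage, with the pair together forming the cache for the network) can be sketched as below. The class name, the use of LRU promotion, and the dict-backed tiers are illustrative assumptions, not details from the patent.

```python
# Sketch of the two-level arrangement (hypothetical names): a small,
# fast memory tier caches objects held on a large, slow storage tier;
# together they form the cache for objects available on the network.

from collections import OrderedDict

class TwoLevelCache:
    def __init__(self, memory_slots):
        self.memory = OrderedDict()    # small, fast tier (memory 103)
        self.storage = {}              # large, slow tier (mass storage 104)
        self.memory_slots = memory_slots

    def _promote(self, url, obj):
        self.memory[url] = obj
        self.memory.move_to_end(url)
        if len(self.memory) > self.memory_slots:
            self.memory.popitem(last=False)   # evict least recently used

    def put(self, url, obj):
        self.storage[url] = obj        # storage holds the resident copy
        self._promote(url, obj)

    def get(self, url):
        if url in self.memory:         # memory hit
            self.memory.move_to_end(url)
            return self.memory[url]
        if url in self.storage:        # memory miss, storage hit
            obj = self.storage[url]
            self._promote(url, obj)    # bring it back into memory
            return obj
        return None                    # not cached at all

# Usage: an object evicted from memory is still served from storage.
cache = TwoLevelCache(memory_slots=2)
cache.put("u1", "obj1")
cache.put("u2", "obj2")
cache.put("u3", "obj3")
assert "u1" not in cache.memory
assert cache.get("u1") == "obj1"
assert "u1" in cache.memory
assert cache.get("missing") is None
```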
The cache 102 is not a file storage system, and network objects 114 which are stored in the cache 102 may be removed automatically from the cache 102 at any time by the cache engine 100. All network objects 114 and all other data maintained by the cache 102 are transient, except for a very small number of system objects which are required for operation, and those system objects are redundantly maintained on the mass storage 104 so as to preserve those system objects against possible loss of a part of the mass storage 104 (such as loss of one or more disk drives). Thus the cache engine 100 need not guarantee that network objects 114 which are stored in the cache 102 will be available at any particular time after they are stored, and failure or even intentional removal of portions of the cache 102 (such as portions of the mass storage 104) cannot cause failure of the cache engine 100. Similarly, recovery or intentional addition of additional mass storage 104 (such as "hot swapping" of disk drives) is smoothly integrated into the cache 102 without interruption of operation of the cache engine 100.

Moreover, the cache engine 100 operates exclusively to perform the operation of caching the network objects 114. There is no separate "operating system," no user, and there are no user application programs which execute independently on the processor 101. Within the memory 103, there are no separate memory spaces for "user" and "operating system." The cache engine 100 itself maintains the cache 102 of the network objects 114 and selects the network objects 114 for retention in the cache 102 or removal from the cache 102, operating so as to (1) localize writing the network objects 114 to the mass storage 104, (2) localize deletion of the network objects 114 from the mass storage 104, and (3) efficiently replace the network objects 114 in the cache 102 with new network objects 114. In a preferred embodiment, the cache engine 100 performs these operations efficiently while operating the cache 102 relatively filled with network objects 114.
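One way to picture goals (1) through (3), consistent with the pointers and regions of FIG. 6, is a circular arrangement of blocks in which a write pointer appends new objects contiguously while a delete pointer is kept a minimum distance ahead of it, freeing a contiguous region for upcoming writes. The sketch below is a simplified illustration under that interpretation; the class and its parameters are hypothetical, and objects in the cleared region are simply evicted.

```python
# Sketch of localized writing and deletion (hypothetical names): storage
# is a circular sequence of blocks; a write pointer appends contiguously
# while a delete pointer, kept at least `gap` blocks ahead, clears a
# contiguous region so writes always land in already-freed space.

class CircularStore:
    def __init__(self, num_blocks, gap):
        # `gap` (the minimum pointer distance) must be < num_blocks.
        self.blocks = [None] * num_blocks
        self.write_ptr = 0
        self.delete_ptr = 0
        self.gap = gap

    def _advance_delete(self):
        # Keep at least `gap` freed blocks ahead of the write pointer;
        # any object in a cleared block is thereby evicted from the cache.
        while (self.delete_ptr - self.write_ptr) % len(self.blocks) < self.gap:
            self.blocks[self.delete_ptr] = None          # localized deletion
            self.delete_ptr = (self.delete_ptr + 1) % len(self.blocks)

    def append(self, obj):
        self._advance_delete()
        loc = self.write_ptr
        self.blocks[loc] = obj                           # localized write
        self.write_ptr = (self.write_ptr + 1) % len(self.blocks)
        return loc

# Usage: writes stay contiguous and wrap around the ring.
store = CircularStore(num_blocks=8, gap=2)
loc = store.append("a")
assert loc == 0 and store.blocks[0] == "a"
for i in range(9):
    store.append(i)            # wraps around the 8-block ring
assert store.write_ptr == 2 and store.blocks[store.write_ptr] is None
```

Because both pointers only move forward through contiguous regions, writes and deletions each touch one locality of the disk at a time, which is the stated efficiency goal.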
In a preferred embodiment, the cache engine 100 maintains statistics regarding access to the cache 102. These statistics can include the following:

a set of hit rates for the cache 102, including (1) a hit rate for network objects 114 found in the cache 102 versus those which must be retrieved from server devices 111, and (2) a hit rate for network objects 114 found in the memory 103 versus those which must be retrieved from the mass storage 104;

a set of statistics for operations on the memory 103, including (1) the number of network objects 114 which are maintained in the memory 103, and (2) the fraction of memory 103 which is devoted to caching network objects 114 versus storing system objects or unallocated; and

a set of statistics for operations on the mass storage 104, including (1) the number of read operations from the
mass storage 104, (2) the number of write operations to the mass storage 104, including the number of "write episodes" as described herein, and (3) the fraction of the mass storage 104 which is devoted to caching network objects 114 versus storing system objects or unallocated.

The cache engine 100 can also maintain statistics which are combinations or variants of the above.
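The two hit rates listed above can be derived from three simple counters, as in the sketch below; the class and method names are illustrative, not from the patent.

```python
# Sketch of the access statistics described above (hypothetical names):
# counts of memory hits, mass-storage hits, and misses, from which the
# two hit rates can be derived.

class CacheStats:
    def __init__(self):
        self.memory_hits = 0     # object found in memory 103
        self.storage_hits = 0    # object found on mass storage 104
        self.misses = 0          # object retrieved from a server device

    def record(self, where):
        if where == "memory":
            self.memory_hits += 1
        elif where == "storage":
            self.storage_hits += 1
        else:
            self.misses += 1

    def cache_hit_rate(self):
        """Objects found in the cache versus retrieved from servers."""
        total = self.memory_hits + self.storage_hits + self.misses
        return (self.memory_hits + self.storage_hits) / total if total else 0.0

    def memory_hit_rate(self):
        """Objects found in memory versus read from mass storage."""
        found = self.memory_hits + self.storage_hits
        return self.memory_hits / found if found else 0.0

stats = CacheStats()
for where in ["memory", "memory", "storage", "miss"]:
    stats.record(where)
assert stats.cache_hit_rate() == 0.75
assert stats.memory_hit_rate() == 2 / 3
```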
Using the Cache Engine

There are numerous circumstances in which the cache engine 100 can provide improved performance or additional functionality in the network 110. For example, the cache engine 100 can be used as a proxy cache (whether to provide a firewall, to provide a cache for client devices 111 coupled to a local area network, or otherwise), as a reverse proxy cache, as a cache for requests made by users of a single ISP, as a cache for "push" protocols, or as an accelerator or server cache.

The cache engine 100 provides the client devices 111 with relatively quicker access to network objects 114 otherwise available directly from the server devices 111. Typically the client devices 111 request those network objects 114 from the cache engine 100, which either transmits them to the client devices 111 from the cache 102 or obtains them from the server devices 111 and then transmits them to the client devices 111.

The cache engine 100 can exercise more intelligence and proactivity than simply waiting for documents to be requested by the client devices 111:

The cache engine 100 can be configured preloaded with selected network objects 114 which are expected to be requested by the client devices 111. For example, certain network objects 114 are known to be commonly requested by client devices 111 throughout the network 110 known as the internet; these network objects 114 can be preloaded in the cache engine 100 upon manufacture. These network objects 114 could include home pages for well-known companies (such as Netscape) and well-known search engines (such as Digital's "AltaVista").

The cache engine 100 can periodically request network objects 114 responsive to a set of statistics regarding commonly requested network objects 114. For example, information regarding commonly requested network objects 114 can be maintained on a server device 111; the cache engine 100 can request this information from the server device 111 and periodically request those network objects 114 for storage in the cache 102. In a preferred embodiment, the cache engine 100 can perform this operation periodically when client devices 111 are not actively using the cache engine 100, such as relatively unloaded times in the late night or early morning.
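The off-peak refresh behavior just described can be sketched as follows; the function names, the fixed hour window, and the list-of-URLs interface are hypothetical details chosen for illustration.

```python
# Sketch of periodic off-peak preloading (hypothetical names): during a
# configured low-traffic window, re-request a list of commonly requested
# objects so they are warm in the cache before clients ask for them.

def in_offpeak_window(hour, start=2, end=5):
    """True when `hour` falls in the [start, end) low-traffic window."""
    return start <= hour < end

def refresh_popular(hour, popular_urls, fetch, cache):
    if not in_offpeak_window(hour):
        return 0                       # defer the work to unloaded times
    refreshed = 0
    for url in popular_urls:
        cache[url] = fetch(url)        # re-request and store each object
        refreshed += 1
    return refreshed

# Usage: refreshes run at 3 a.m. but are skipped at noon.
cache = {}
n = refresh_popular(3, ["http://example.com/home"], lambda u: f"obj:{u}", cache)
assert n == 1 and "http://example.com/home" in cache
assert refresh_popular(12, ["http://example.com/home"], lambda u: u, cache) == 0
```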
The cache engine 100 can periodically request network objects 114 responsive to a set of user preferences at the client devices 111. For example, the cache engine 100 can receive (either upon request or otherwise) a set of bookmarks from the client devices 111 and can request those network objects 114 from the server devices 111. In a preferred embodiment, the cache engine 100 can request those network objects 114 which have changed in a selected time period, such as one day.

The cache engine 100 can provide a mirror site to one or more server devices 111, by periodically, or upon request, receiving from the server devices 111 those network objects 114 to be delivered by the server device 111 to client devices 111 which have changed in a selected time period, such as one day.
The cache engine 100 can provide an accelerator for one or more server devices 111, by receiving requests to the server devices 111 which are distributed among a plurality of cache engines 100. Each cache engine 100 maintains its cache 102 with network objects 114 to be delivered by the server device 111 to client devices 111. Service by the server device 111 is thus accelerated, because each cache engine 100 can respond to some of the load of requests for information, while limiting the number of requests for information which are passed through and must be handled by the server device 111 itself.

The cache engine 100 can provide a first type of push protocol assist to one or more server devices 111, by transmitting network objects 114 to one or more client devices 111 or proxy caches using a push protocol. For example, when the server devices 111 provide a network broadcast service, the cache engine 100 can receive network objects 114 from the server devices 111 to be broadcast to a subset of the network 110 and can independently broadcast those network objects 114.

The cache engine 100 can provide a second type of push protocol assist to one or more server devices 111, by allowing those server devices 111 to broadcast network objects 114 to a plurality of cache engines 100. Each cache engine 100 can make the broadcast network objects 114 available to client devices 111 which request those network objects 114 from the cache engine 100 as if the cache engine 100 were the server device 111 for those network objects 114.
The network objects 114 can include data, such as HTML pages, text, graphics, photographs, audio, and video; programs, such as Java or ActiveX applets or applications; or other types of network objects, such as push protocol objects. The cache engine 100 can record frames of streaming audio or streaming video information in the cache 102, for delayed use by a plurality of client devices 111. Some types of known network objects 114 are not cached, such as CGI output or items marked noncachable by the server device 111.
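The cachability decision mentioned above can be sketched as below. The function name and the specific heuristics are illustrative assumptions: the CGI check is a simple URL pattern, and "marked noncachable by the server" is modeled here with HTTP `Cache-Control` directives, which is one way a server device can convey that marking.

```python
# Sketch of the cachability decision (hypothetical names): dynamic
# output such as CGI results, or objects the server device marks
# noncachable, are passed through without being stored.

def is_cachable(url, headers):
    if "cgi-bin" in url or url.endswith(".cgi"):
        return False                   # CGI output is not cached
    cache_control = headers.get("Cache-Control", "")
    if "no-store" in cache_control or "no-cache" in cache_control:
        return False                   # marked noncachable by the server
    return True

assert is_cachable("http://example.com/page.html", {})
assert not is_cachable("http://example.com/cgi-bin/search", {})
assert not is_cachable("http://example.com/a", {"Cache-Control": "no-store"})
```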
In a preferred embodiment, the cache engine 100 can glean knowledge about the client devices 111 from the protocol messages 113 or by other means, such as interrogating routing devices in the network 110, and can react in response to that information to provide differing network objects 114 to differing client devices 111. For example, the cache engine 100 can select server devices 111 for proximity or content in response to information about client devices 111, as follows:

The cache engine 100 can select a particular server device 111 for rapid response, such as for network routing proximity or for spreading service load over a plurality of server devices 111.

The cache engine 100 can select content at the server device 111 in response to information about the client device 111, such as tailoring the language of the response (such as serving pages in the English language or the French language), or such as tailoring local information (such as advertising, news, or weather). In a preferred embodiment, local information such as advertising can be retrieved from a local server d
