(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2006/0080510 A1
     Benhase et al.                    (43) Pub. Date: Apr. 13, 2006

(54) APPARATUS AND METHOD TO MANAGE A DATA CACHE

(76) Inventors: Michael T. Benhase, Tucson, AZ (US); Binny S. Gill, San Jose, CA (US); Thomas C. Jarvis, Tucson, AZ (US); Dharmendra S. Modha, San Jose, CA (US)

Correspondence Address: DALE E. REGELMAN, 4231 S. FREMONT AVENUE, TUCSON, AZ 85714 (US)

(21) Appl. No.: 10/964,474

(22) Filed: Oct. 12, 2004

Publication Classification

(51) Int. Cl. G06F 12/00 (2006.01)
(52) U.S. Cl. 711/136

(57) ABSTRACT

A method is disclosed to manage a data cache. The method provides a data cache comprising a plurality of tracks, where each track comprises one or more segments. The method further maintains a first LRU list comprising one or more first tracks having a low reuse potential, maintains a second LRU list comprising one or more second tracks having a high reuse potential, and sets a target size for the first LRU list. The method then accesses a track, and determines if that accessed track comprises a first track. If the method determines that the accessed track comprises a first track, then the method increases the target size for said first LRU list. Alternatively, if the method determines that the accessed track comprises a second track, then the method decreases the target size for said first LRU list. The method demotes tracks from the first LRU list if its size exceeds the target size; otherwise, the method evicts tracks from the second LRU list.
[Representative drawing: the flowchart of FIG. 2, reproduced on the front page.]
[FIG. 1 (Sheet 1 of 6): block diagram of Applicants' information storage and retrieval system 100.]
[FIG. 2 (Sheet 2 of 6): flowchart of initial steps — provide a data cache comprising a plurality of tracks; maintain a first LRU list comprising first tracks having low reuse potential; associate a sequence number with each first track; define a bottom portion of the first LRU list; maintain a second LRU list comprising second tracks having high reuse potential; associate a sequence number with each second track; define a bottom portion of the second LRU list.]

[FIG. 3 (Sheet 3 of 6): flowchart — set the first LRU list target size; establish a direction parameter having a value of 0; access a track resident in the data cache; if the accessed track is in the bottom portion of the first LRU list, set the direction parameter to +1; if it is in the bottom portion of the second LRU list, set the direction parameter to -1; otherwise leave the direction parameter unchanged.]

[FIG. 4 (Sheet 4 of 6): flowchart — elect to demote one or more tracks from the cache; determine the actual size of the first LRU list; if that size is greater than the first LRU list target size, demote a track from the bottom portion of the first LRU list, otherwise demote a track from the bottom portion of the second LRU list; determine that the demoted track comprises (n) segments; set the target size adjustment equal to (n) multiplied by the direction parameter; set the first LRU list target size equal to the sum of the existing target size plus the target size adjustment.]

[FIG. 5 (Sheet 5 of 6): flowchart — establish a first list size equal to the number of tracks on the first LRU list; establish a first sampling region size which is less than the maximum number of segments that the data cache can hold; set accessed_firsttrack_sequenceNbr to the sequence number associated with the accessed first track; if (accessed_firsttrack_sequenceNbr - lru_first_sequenceNbr) is less than or equal to (first_sampling_region_size / first_list_size) x (mru_first_sequenceNbr - lru_first_sequenceNbr), the accessed track is in the bottom portion of the first LRU list; otherwise it is not.]

[FIG. 6 (Sheet 6 of 6): flowchart — establish a second list size equal to the number of tracks on the second LRU list; establish a second sampling region size which is less than the maximum number of segments that the data cache can hold; set accessed_secondtrack_sequenceNbr to the sequence number associated with the accessed second track; if (accessed_secondtrack_sequenceNbr - lru_second_sequenceNbr) is less than or equal to (second_sampling_region_size / second_list_size) x (mru_second_sequenceNbr - lru_second_sequenceNbr), the accessed track is in the bottom portion of the second LRU list; otherwise it is not.]

APPARATUS AND METHOD TO MANAGE A DATA CACHE
FIELD OF THE INVENTION

[0001] This invention relates to an apparatus and method to manage a data cache.
BACKGROUND OF THE INVENTION

[0002] Data storage and retrieval systems are used to store information provided by one or more host computer systems. Such data storage and retrieval systems receive requests to write information to one or more secondary storage devices, and requests to retrieve information from those one or more secondary storage devices. Upon receipt of a write request, the system stores information received from a host computer in a data cache. In certain implementations, a copy of that information is also stored in a nonvolatile storage device. Upon receipt of a read request, the system recalls one or more tracks from the one or more secondary storage devices and moves those tracks to the data cache.

[0003] Thus, the system is continuously moving information to and from storage devices, and to and from the data cache. One or more device adapters interconnect the data cache and the information storage devices. What is needed is an apparatus and method to manage the tracks residing in the data cache such that tracks having a low reuse potential are preferentially demoted from the cache while tracks having a high reuse potential are preferentially kept in the cache.
SUMMARY OF THE INVENTION

[0004] Applicants' invention includes a method to manage a data cache. The method provides a data cache comprising a plurality of tracks, where each track comprises one or more segments. The method further maintains a first LRU list comprising one or more first tracks having a low reuse potential, maintains a second LRU list comprising one or more second tracks having a high reuse potential, and sets a target size for the first LRU list.

[0005] The method then accesses a track, and determines if that accessed track comprises a first track. If the method determines that the accessed track comprises a first track, then the method increases the target size for said first LRU list. Alternatively, if the method determines that the accessed track comprises a second track, then the method decreases the target size for said first LRU list.
BRIEF DESCRIPTION OF THE DRAWINGS

[0006] The invention will be better understood from a reading of the following detailed description taken in conjunction with the drawings in which like reference designators are used to designate like elements, and in which:

[0007] FIG. 1 is a block diagram showing the components of Applicants' data storage and retrieval system;

[0008] FIG. 2 is a flow chart summarizing certain initial steps in Applicants' method;

[0009] FIG. 3 is a flow chart summarizing certain additional steps in Applicants' method;

[0010] FIG. 4 is a flow chart summarizing certain additional steps in Applicants' method;

[0011] FIG. 5 is a flow chart summarizing certain additional steps in Applicants' method; and

[0012] FIG. 6 is a flow chart summarizing certain additional steps in Applicants' method.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0013] The invention will be described as embodied in an information storage and retrieval system which includes two clusters, a plurality of host adapters, a plurality of device adapters, and a data cache. The following description of Applicants' method to manage data in a data cache is not meant, however, to limit Applicants' invention to data processing applications, as the invention herein can be applied to data caching in a wide variety of applications including, without limitation, storage systems, databases, Web servers, middleware, processors, file systems, disk drives, RAID controllers, operating systems, and the like.
[0014] Referring now to FIG. 1, information storage and retrieval system 100 is capable of communication with host computer 390 via communication link 395. The illustrated embodiment of FIG. 1 shows a single host computer. In other embodiments, Applicants' information storage and retrieval system is capable of communicating with a plurality of host computers.
[0015] Host computer 390 comprises a computer system, such as a mainframe, personal computer, workstation, and combinations thereof, including an operating system such as Windows, AIX, Unix, MVS, LINUX, etc. (Windows is a registered trademark of Microsoft Corporation; AIX is a registered trademark and MVS is a trademark of IBM Corporation; UNIX is a registered trademark in the United States and other countries licensed exclusively through The Open Group; and LINUX is a registered trademark owned by Linus Torvalds.) In certain embodiments, host computer 390 further includes a storage management program. The storage management program in the host computer 390 may include the functionality of storage management type programs known in the art that manage the transfer of data to a data storage and retrieval system, such as the IBM DFSMS implemented in the IBM MVS operating system.
[0016] In certain embodiments, Applicants' information storage and retrieval system 100 includes a plurality of host adapters 102-105, 107-110, 112-115, and 117-120, disposed in four host bays 101, 106, 111, and 116. Each host adapter is connected to both subsystems through one or more Common Platform Interconnect buses 121 and 150 such that each subsystem can handle I/O from any host adapter. Internal buses in each subsystem are connected via a Remote I/O bridge 155/165 between the processor portions 130/140 and I/O portions 160/170, respectively.
[0017] In other embodiments, Applicants' information storage and retrieval system includes fewer than 16 host adapters. Regardless of the number of host adapters disposed in any embodiment of Applicants' system, each of those host adapters comprises a shared resource that has equal access to both central processing/cache elements 130 and 140. Each host adapter may comprise one or more Fibre Channel ports, one or more FICON ports, one or more ESCON ports, or one or more SCSI ports.
[0018] Processor portion 130 includes processor 132 and cache 134. In certain embodiments, processor portion 130 further includes memory 133. In certain embodiments, memory device 133 comprises random access memory. In certain embodiments, memory device 133 comprises nonvolatile memory.
[0019] Processor portion 140 includes processor 142 and cache 144. In certain embodiments, processor portion 140 further includes memory 143. In certain embodiments, memory device 143 comprises random access memory. In certain embodiments, memory device 143 comprises nonvolatile memory.
[0020] I/O portion 160 comprises a plurality of device adapters, such as device adapters 165, 166, 167, and 168. I/O portion 160 further comprises nonvolatile storage ("NVS") 162 and battery backup 164 for NVS 162.

[0021] I/O portion 170 comprises a plurality of device adapters, such as device adapters 175, 176, 177, and 178. I/O portion 170 further comprises NVS 172 and battery backup 174 for NVS 172.
[0022] In certain embodiments of Applicants' system, one or more host adapters 101A, processor/cache portion 130, and one or more device adapters 161 are packaged together on a single card disposed in Applicants' information storage and retrieval system. Similarly, in certain embodiments, one or more host adapters 101B, processor/cache portion 140, and one or more device adapters 171 are disposed on another card disposed in Applicants' information storage and retrieval system. In these embodiments, Applicants' system 100 includes two cards interconnected with a plurality of data storage devices.
[0023] In the illustrated embodiment of FIG. 1, sixteen data storage devices are organized into two arrays, namely array 180 and array 190. The illustrated embodiment of FIG. 1 shows two storage device arrays. In other embodiments, Applicants' system includes a single storage device array. In still other embodiments, Applicants' information storage and retrieval system includes more than two storage device arrays. Each storage array appears to a host computer as one or more logical devices.
[0024] In certain embodiments, arrays 180 and 190 utilize a RAID protocol. In certain embodiments, arrays 180 and 190 comprise what is sometimes called a JBOD array, i.e. "Just a Bunch Of Disks", where the array is not configured according to RAID. As those skilled in the art will appreciate, a RAID (Redundant Array of Independent Disks) rank comprises independent disk drives configured in an array of disk drives to obtain performance, capacity, and/or reliability that exceeds that of a single large drive.
[0025] Applicants' invention includes a method to manage a data cache, such as data cache 134 (FIG. 1) and/or data cache 144 (FIG. 1). Based upon actual or anticipated host computer requests, Applicants' information storage and retrieval system 100 moves tracks from the data cache to one or more RAID ranks, and from the one or more RAID ranks to the data cache.
[0026] A "stage operation" comprises moving one or more tracks from a RAID rank to the cache in response to a host request. For certain read operations, system 100 will "prestage" information, i.e. anticipate a host request.
[0027] Applicants' information storage and retrieval system monitors previous access requests, and if more than six I/Os in sequence are detected, then Applicants' method triggers sequential staging. In sequential staging, when about the middle of a staging group is read, the next group starts to be staged, i.e. is "prestaged." This procedure delivers the maximum sequential throughput with no delays waiting for data to be staged from a disk.
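For illustration only, the trigger described in paragraph [0027] can be pictured with a short sketch. This is a minimal sketch, not Applicants' implementation; the names (SequentialDetector, record_access, maybe_prestage) are hypothetical, while the more-than-six-I/Os rule and the "middle of the staging group" rule are taken from the paragraph above.

```python
# Hypothetical sketch of a sequential-staging trigger (illustrative only).

class SequentialDetector:
    """Flag sequential staging after more than six I/Os in sequence."""

    def __init__(self, threshold: int = 6):
        self.threshold = threshold
        self.last_track = None
        self.run_length = 0

    def record_access(self, track_id: int) -> bool:
        """Return True once the access pattern qualifies as sequential."""
        if self.last_track is not None and track_id == self.last_track + 1:
            self.run_length += 1
        else:
            self.run_length = 1
        self.last_track = track_id
        return self.run_length > self.threshold


def maybe_prestage(group: range, position: int) -> bool:
    """Prestage the next staging group once about the middle of the
    current group has been read."""
    return position >= group.start + len(group) // 2


detector = SequentialDetector()
for t in range(100, 110):                 # ten strictly sequential track reads
    if detector.record_access(t):
        print(f"track {t}: sequential staging active")

print(maybe_prestage(range(100, 116), 108))   # -> True (past the group middle)
```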
[0028] Data written to Applicants' data storage and retrieval system by a host computer is first received by a host adapter, such as host adapter 102 (FIG. 1), and is transferred first to NVS, such as NVS 172 (FIG. 1). A copy of that data is held in the host adapter buffer. The host is notified that the I/O operation is complete as soon as the data is in NVS. The host adapter, once the NVS transfer is complete, then transfers the data to the cache. The data remains in the cache and NVS until it is "destaged." In certain embodiments, destaging is triggered by cache and NVS usage thresholds.
[0029] A destage operation includes moving tracks from cache to a RAID rank. In a synchronous destaging operation, information is destaged to one or more RAID ranks contemporaneously with transferring that information to the data cache.
[0030] In an LRU destage operation, cache space is released according to Least Recently Used algorithms. As those skilled in the art will appreciate, a Least Recently Used algorithm determines when the data tracks residing in the cache were last accessed. In certain embodiments, such an LRU algorithm includes assigning a date stamp to each track indicating when that track was last accessed. Using LRU destaging, tracks having the earliest date stamp are preferentially destaged.
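A minimal sketch of date-stamp-based LRU destaging, as described in paragraph [0030], follows. The class and method names are hypothetical, and a simple monotonically increasing counter stands in for the per-second date stamp; this is an illustration of the general technique, not Applicants' implementation.

```python
# Minimal sketch of LRU destaging driven by date stamps (illustrative only).
from itertools import count

class DateStampLRU:
    def __init__(self):
        self._clock = count()       # stands in for a date stamp incremented over time
        self.stamps = {}            # track_id -> date stamp of the last access

    def access(self, track_id):
        self.stamps[track_id] = next(self._clock)

    def pick_destage_victim(self):
        """The track with the earliest date stamp is destaged preferentially."""
        return min(self.stamps, key=self.stamps.get)

cache = DateStampLRU()
for t in ("A", "B", "C"):
    cache.access(t)
cache.access("A")                    # "A" becomes the most recently used track
print(cache.pick_destage_victim())   # -> "B"
```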
[0031] Tracks that have been read sequentially are generally demoted quickly to release the used cache space, because sequential data is rarely re-read within a short period of time. When destaging tracks, Applicants' information storage and retrieval system attempts to destage all the tracks that would make up a RAID stripe, thereby minimizing the RAID-related activities in the device adapter.
[0032] Tracks that are brought into the cache in response to a request to read those tracks comprise tracks that have a high reuse potential. On the other hand, tracks that are prestaged in anticipation of a host request comprise tracks that have a low reuse potential. In addition, tracks that are staged/prestaged in the cache for one or more copy services comprise tracks that have a low reuse potential.
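The classification in paragraph [0032] can be summarized in a short sketch. The function name and flags are hypothetical; the categories themselves come from the paragraph above.

```python
# Sketch of the reuse-potential classification (hypothetical helper).

def reuse_potential(explicit_read: bool, prestaged: bool,
                    sequential: bool, copy_service: bool) -> str:
    """Prestaged, sequential, or copy-service tracks have low reuse potential
    (first LRU list); tracks staged for an explicit read have high reuse
    potential (second LRU list)."""
    if prestaged or sequential or copy_service:
        return "low"
    if explicit_read:
        return "high"
    return "high"

print(reuse_potential(True, False, False, False))   # -> high
print(reuse_potential(False, True, False, False))   # -> low
```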
[0033] A publication entitled IBM TotalStorage Enterprise Storage Server Implementing ESS Copy Services with IBM eServer zSeries (hereinafter the "Red Book"), September 2003, describes various copy services, and is hereby incorporated by reference herein. Such copy services include, for example, peer-to-peer remote copy, sometimes referred to as PPRC (Red Book Chapters 2 and 4), Peer-to-Peer Remote Copy Extended Distance (Red Book Chapter 3), Extended Remote Copy (Red Book Chapter 5), Flash Copy (Red Book Chapters 6 and 7), and Concurrent Copy (Red Book Chapter 8).
[0034] Applicants' invention includes a method to manage a data cache. Referring now to FIG. 2, in step 210 Applicants' method provides a data cache comprising a plurality of tracks. A "page" or a "segment" comprises 4 kilobytes of data. Applicants' information storage and retrieval system, such as system 100 (FIG. 1), manages data in "tracks." A track comprises a set of 4 KB segments. In certain embodiments, a track may include as many as 16 consecutive segments. At any time, some or all of those 16 segments may be present in the cache.
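The cache geometry of paragraph [0034] can be modeled with a short sketch: a track is a set of 4 KB segments, at most 16 per track, and only some of a track's segments may be resident in the cache at any time. The class and field names are hypothetical.

```python
# Minimal model of the track/segment geometry described above (illustrative only).
from dataclasses import dataclass, field

SEGMENT_BYTES = 4 * 1024            # a "page" or "segment" is 4 kilobytes
MAX_SEGMENTS_PER_TRACK = 16

@dataclass
class Track:
    track_id: int
    cached_segments: set = field(default_factory=set)   # segment indices 0..15

    def cache_segment(self, index: int) -> None:
        if not 0 <= index < MAX_SEGMENTS_PER_TRACK:
            raise ValueError("segment index out of range")
        self.cached_segments.add(index)

    @property
    def cached_bytes(self) -> int:
        return len(self.cached_segments) * SEGMENT_BYTES

t = Track(track_id=7)
for i in (0, 1, 2):                  # only three of the sixteen segments are cached
    t.cache_segment(i)
print(len(t.cached_segments), t.cached_bytes)   # -> 3 12288
```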
[0035] In step 220, Applicants' method maintains a first LRU list, where that first LRU list includes tracks having low reuse potential, i.e. first tracks. By "tracks having a low reuse potential," Applicants mean tracks prestaged in anticipation of a host request, tracks that are known to be a sequential access, and tracks that are brought into the cache for copy services.
[0036] In step 230, Applicants' method associates a first sequence number with each of the first tracks. In certain embodiments, the first sequence number associated with a first track is based upon the later of the time that the first track was brought into the cache, or last accessed from the cache, such that tracks that were recently accessed have a larger sequence number than tracks that were not recently accessed. In certain embodiments, Applicants' method maintains a sequence number, i.e. the currentSequenceNbr, that is incremented every second. In other embodiments of Applicants' method, that currentSequenceNbr is incremented every millisecond.
[0037] The first track that was most recently used is assigned the largest first sequence number, mru_first_sequenceNbr, and the first track that was least recently used is assigned the smallest first sequence number, lru_first_sequenceNbr. In certain embodiments, when a first track is placed on the first LRU list, or accessed from the cache, that first track is associated with a sequence number determined by formula (1):

currentSequenceNbr - (mru_first_sequenceNbr - lru_first_sequenceNbr) / 2    (1)
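A brief sketch of this bookkeeping follows, assuming formula (1) as given above; the function name is hypothetical, and integer division stands in for the unspecified rounding. The effect the code exhibits is that a first track is stamped older than the current clock by half the span of the list's sequence numbers.

```python
# Sketch of sequence-number assignment for first tracks per formula (1)
# as stated above (hypothetical helper; rounding choice is an assumption).

def first_track_sequence_number(current_sequence_nbr: int,
                                mru_first_sequence_nbr: int,
                                lru_first_sequence_nbr: int) -> int:
    # Formula (1): stamp the track older than the current clock by half the
    # span of sequence numbers currently on the first LRU list.
    return current_sequence_nbr - (mru_first_sequence_nbr
                                   - lru_first_sequence_nbr) // 2

# Example: clock at 1000, MRU first track stamped 990, LRU first track 900.
print(first_track_sequence_number(1000, 990, 900))   # -> 955
```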
[0038] In step 240, Applicants' method defines a bottom portion of the first LRU list. In certain embodiments, this bottom portion comprises about two percent (2%) of the number of segments comprising the cache. First tracks not included within the bottom portion of the first LRU list are included within the top portion of the first LRU list. As those skilled in the art will appreciate, each first track in the bottom portion of the first LRU list has a sequence number less than the sequence number associated with each first track not in the bottom portion.
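One possible membership test for this bottom portion is sketched below. It is patterned after the comparison shown in FIG. 5; the function name, parameter names, and the example numbers are all illustrative assumptions rather than Applicants' implementation.

```python
# Sketch of a bottom-portion test for the first LRU list, patterned after
# the proportional comparison of FIG. 5 (illustrative only).

def in_bottom_portion(accessed_seq_nbr: int,
                      lru_seq_nbr: int,
                      mru_seq_nbr: int,
                      sampling_region_size: int,
                      list_size: int) -> bool:
    """True when the accessed track's sequence number lies within the bottom
    fraction (sampling_region_size / list_size) of the span of sequence
    numbers currently on the list."""
    if list_size == 0 or mru_seq_nbr == lru_seq_nbr:
        return False
    span = mru_seq_nbr - lru_seq_nbr
    return (accessed_seq_nbr - lru_seq_nbr) <= (sampling_region_size / list_size) * span

# Example: the list spans sequence numbers 100..600, the sampling region is
# 1000 segments (about 2% of a 50,000-segment cache), and the list holds
# 10,000 tracks.
print(in_bottom_portion(105, 100, 600, 1000, 10_000))   # -> True (near the LRU end)
print(in_bottom_portion(550, 100, 600, 1000, 10_000))   # -> False (near the MRU end)
```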
[0039] In step 250, Applicants' method maintains a second LRU list comprising tracks that have a high reuse potential, i.e. second tracks. By "tracks that have a high reuse potential," Applicants mean tracks that are not prestaged in anticipation of a host request, tracks that are not known to be a sequential access, and tracks that are not brought into the cache for copy services.
[0040] In step 260, Applicants' method associates a second sequence number with each of the second tracks. In certain embodiments, the second sequence number associated with a second track is based upon the later of the time that the second track was brought into the cache, or last accessed from the cache, such that tracks that were recently accessed have a larger sequence number than tracks that were not recently accessed.

[0041] The second track that was most recently used is assigned the largest second sequence number, mru_second_sequenceNbr, and the second track that was least recently used is assigned the smallest second sequence number, lru_second_sequenceNbr. In certain embodiments, when a second track is placed on the second LRU list, or accessed from the cache, that second track is associated with a sequence number equal to the currentSequenceNbr.
[0042] In step 270, Applicants' method defines a bottom portion of the second LRU list. In certain embodiments, this bottom portion comprises about two percent (2%) of the number of segments comprising the cache. Second tracks not included within the bottom portion of the second LRU list are included within the top portion of the second LRU list. As those skilled in the art will appreciate, each second track in the bottom portion of the second LRU list has a sequence number less than the sequence number associated with each second track not in the bottom portion.
[0043] Applicants' method transitions from step 270 to step 310 (FIG. 3). In step 310, Applicants' method sets a first LRU list target size. In certain embodiments, step 310 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1). Step 310 includes determining if the sum of the number of segments comprising the first LRU list and the second LRU list exceeds a predetermined number, i.e. a fixed threshold. In certain embodiments, that predetermined number is set to equal 90 percent of the maximum number of segments that the cache can hold.

[0044] Step 310 further includes determining if any tracks have been demoted from the cache. If Applicants' method determines in step 310 that the aggregate number of segments comprising the first LRU list and the second LRU list exceeds the predetermined number, and if Applicants' method further determines that no tracks have been demoted from the cache, then Applicants' method sets the first LRU list target size to equal the number of segments comprising the first LRU list.
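A compact sketch of the initialization condition of step 310, as described in paragraphs [0043] and [0044], is given below. The helper name and argument layout are hypothetical; the 90-percent figure reflects the embodiment mentioned above.

```python
# Sketch of the target-size initialization of step 310 (hypothetical helper).

def initial_first_target_size(first_list_segments: int,
                              second_list_segments: int,
                              cache_capacity_segments: int,
                              any_track_demoted: bool,
                              current_target: int,
                              fill_threshold: float = 0.90) -> int:
    """Once the two lists together exceed the fixed threshold (~90% of the
    cache) and no track has yet been demoted, seed the target with the first
    list's current size; otherwise leave the target unchanged."""
    threshold = fill_threshold * cache_capacity_segments
    occupied = first_list_segments + second_list_segments
    if occupied > threshold and not any_track_demoted:
        return first_list_segments
    return current_target

# Example: 30,000 + 17,000 segments in a 50,000-segment cache, nothing demoted.
print(initial_first_target_size(30_000, 17_000, 50_000, False, 0))   # -> 30000
```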
[0045] As is described in greater detail below, Applicants' method autonomically adjusts this first LRU list target size based upon actual utilization of the tracks in the cache. When a first track in the bottom portion of the first LRU list is accessed from the cache, Applicants' method increases the first LRU list target size. On the other hand, when a second track in the bottom portion of the second LRU list is accessed from the cache, Applicants' method decreases the first LRU list target size.
[0046] In step 320, Applicants' method establishes a direction parameter, and sets that direction parameter to 0. Step 320 may be performed any time prior to executing step 340 or step 360. In certain embodiments, step 320 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).
[0047] In step 325, a track is accessed from the cache. Applicants' method transitions from step 325 to step 330, wherein the method determines if the accessed track comprises a first track in the bottom portion of the first LRU list. In certain embodiments, step 330 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).
[0048] If Applicants' method determines in step 330 that the accessed track of step 325 comprises a first track in the bottom portion of the first LRU list, then the method transitions from step 330 to step 340, wherein the method sets the direction parameter to a value of +1. In certain embodiments, step 340 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1). Applicants' method transitions from step 340 to step 325 and continues as described herein.
[0049] If Applicants' method determines in step 330 that the accessed track of step 325 does not comprise a first track in the bottom portion of the first LRU list, then the method transitions from step 330 to step 350, wherein the method determines if the accessed track of step 325 comprises a second track in the bottom portion of the second LRU list. In certain embodiments, step 350 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).
[0050] If Applicants' method determines in step 350 that the accessed track of step 325 comprises a second track in the bottom portion of the second LRU list, then the method transitions from step 350 to step 360, wherein the method sets the direction parameter to a value of -1. In certain embodiments, step 360 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1). Applicants' method transitions from step 360 to step 325 and continues as described herein.
[0051] If Applicants' method determines in step 350 that the accessed track of step 325 does not comprise a second track in the bottom portion of the second LRU list, then the method transitions from step 350 to step 370, wherein the method leaves the direction parameter unchanged. Applicants' method transitions from step 370 to step 325 and continues as described herein.
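The direction-parameter logic of steps 325 through 370 can be condensed into the sketch below. It assumes that bottom-portion membership is determined elsewhere (for example, with a test like the one sketched after paragraph [0038]); the function name is hypothetical.

```python
# Sketch of the direction-parameter update of steps 325-370 (illustrative only).

def update_direction(direction: int,
                     in_bottom_of_first: bool,
                     in_bottom_of_second: bool) -> int:
    if in_bottom_of_first:        # step 340: hit near the LRU end of the first list
        return +1                 # the target for the first list will later grow
    if in_bottom_of_second:       # step 360: hit near the LRU end of the second list
        return -1                 # the target for the first list will later shrink
    return direction              # step 370: leave the direction parameter unchanged

direction = 0                     # step 320
direction = update_direction(direction,
                             in_bottom_of_first=True,
                             in_bottom_of_second=False)
print(direction)                  # -> 1
```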
[0052] Referring now to FIG. 4, in step 405 Applicants' method elects to demote one or more tracks from the cache. Applicants' method transitions from step 405 to step 410, wherein the method determines the actual size of the first LRU list. In certain embodiments, step 410 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).
[0053] Applicants' method transitions from step 410 to step 420, wherein the method determines if the actual size of the first LRU list is greater than the first LRU list target size. In certain embodiments, step 420 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).
[0054] If Applicants' method determines in step 420 that the actual size of the first LRU list is greater than the first LRU list target size, then the method transitions from step 420 to step 430, wherein the method demotes one or more first tracks from the bottom portion of the first LRU list. In certain embodiments, step 430 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1). Applicants' method transitions from step 430 to step 450.
[0055] If Applicants' method determines in step 420 that the actual size of the first LRU list is not greater than the first LRU list target size, then the method transitions from step 420 to step 440, wherein the method demotes one or more second tracks from the bottom portion of the second LRU list. In certain embodiments, step 440 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).
[0056] Applicants' method transitions from step 440 to step 450, wherein the method determines the number (n) of segments comprising the demoted one or more tracks of either step 430 or 440. In certain embodiments, step 450 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).
[0057] Applicants' method transitions from step 450 to step 460, wherein the method calculates a first LRU list target size adjustment, wherein that adjustment comprises the multiplication product of (n), i.e. the number of segments comprising the one or more demoted tracks, and the direction parameter set in step 340, or in step 360, or in step 370. In certain embodiments, step 460 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).
[0058] Applicants' method transitions from step 460 to step 470, wherein the method calculates an adjusted first LRU list target size by adding the first LRU list target size adjustment of step 460 to the existing first LRU list target size. In certain embodiments, step 470 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).
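The demotion path of FIG. 4 (steps 405 through 470) is sketched below to make the target-size arithmetic concrete. The sketch assumes each list is kept LRU-first and that each entry records its segment count; the data layout and names are illustrative assumptions, not Applicants' implementation, and a single track is demoted per call for simplicity.

```python
# Sketch of the demotion and target-size adjustment of steps 405-470
# (hypothetical data layout; illustrative only).
from collections import deque

def demote_and_adjust(first_list: deque, second_list: deque,
                      first_target_size: int, direction: int) -> int:
    """Demote one track and return the adjusted first-LRU-list target size."""
    first_actual = sum(segments for _, segments in first_list)    # step 410
    if first_actual > first_target_size and first_list:           # step 420
        _, n = first_list.popleft()                                # step 430
    elif second_list:
        _, n = second_list.popleft()                               # step 440
    else:
        return first_target_size
    adjustment = n * direction                                     # steps 450-460
    return first_target_size + adjustment                          # step 470

# Matches EXAMPLE 1 below: direction = +1 and a 10-segment track is demoted,
# so the target size grows by 10.
first = deque([("t1", 10), ("t2", 6)])        # LRU end at the left
second = deque([("t9", 4)])
print(demote_and_adjust(first, second, first_target_size=8, direction=+1))  # -> 18
```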
[0059] The following examples are presented to further illustrate to persons skilled in the art how to make and use the invention. These examples are not intended as a limitation, however, upon the scope of the invention, which is defined only by the appended claims.
EXAMPLE 1

[0060] As a first example, if Applicants' method determines in step 330 that the track accessed in step 325 comprises a first track in the bottom portion of the first LRU list, then Applicants' method sets the direction parameter to +1 in step 340. If Applicants' method demotes one or more tracks in step 430 or 440, and determines in step 450 that the demoted tracks comprise 10 segments, then in step 460 Applicants' method calculates a target size adjustment of +10. Applicants' method in step 470 increases the first LRU list target size by 10.
EXAMPLE 2

[0061] As a second example, if Applicants' method determines in step 350 that the track accessed in step 325 comprises a second track in the bottom portion of the second LRU list, then Applicants' method sets the direction parameter to -1 in step 360. If Applicants' method demotes one or more tracks in step 430 or 440, and determines in step 450 that the demoted tracks comprise 10 segments, then in step 460 Applicants' method calculates a target size adjustment of -10. Applicants' method in step 470 decreases the first LRU list target size by 10.
EXAMPLE 3

[0062] As a third example, if Applicants' method determines that the track accessed in step 325 comprises neither a first track in the bottom portion of the first LRU list nor a second track in the bottom portion of the second LRU list, then Applicants' method leaves the direction parameter unchanged in step 370. If Applicants' method demotes one or more tracks in step 430 or 440, and determines in step 450 that the demoted tracks comprise 10 segments, then in step 460 Applicants' method calculates a target size adjustment equal to 10 multiplied by the unchanged direction parameter.
