US007260679B2

Benhase et al.

(10) Patent No.: US 7,260,679 B2
(45) Date of Patent: Aug. 21, 2007
`
`(54) APPARATUS AND METHOD TO MANAGE A
`DATA CACHE USING A FIRST AND SECOND
`LEAST RECENTLY USED LIST
`
(75) Inventors: Michael T. Benhase, Tucson, AZ (US); Binny S. Gill, San Jose, CA (US); Thomas C. Jarvis, Tucson, AZ (US); Dharmendra S. Modha, San Jose, CA (US)
6,347,363 B1*  2/2002  Arimilli et al. ............ 711/150
6,457,102 B1*  9/2002  Lambright et al. .......... 711/129
6,701,393 B1*  3/2004  Kemeny et al. ............. 710/40
6,728,836 B1*  4/2004  Lambright et al. .......... 711/129
6,839,809 B1*  1/2005  Forster et al. ............ 711/134
6,898,672 B2*  5/2005  Lambright et al. .......... 711/129
6,996,676 B2*  2/2006  Megiddo et al. ............ 711/129
2006/0069871 A1*  3/2006  Gill et al. ............ 711/118
`OTHER PUBLICATIONS
`
(73) Assignee: International Business Machines Corporation, Armonk, NY (US)
`
IBM, "IBM TotalStorage Enterprise Storage Server: Implementing ESS Copy Services with IBM eServer zSeries", Redbooks, SG24-5680-05, Sep. 2003, Chapters 2-8.
`
( * ) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 385 days.
`
(21) Appl. No.: 10/964,474
`
(22) Filed: Oct. 12, 2004
`
(65) Prior Publication Data

US 2006/0080510 A1    Apr. 13, 2006
`
(51) Int. Cl.
G06F 12/12 (2006.01)
G06F 13/00 (2006.01)
(52) U.S. Cl. ..................... 711/113; 711/134; 711/136; 711/160
(58) Field of Classification Search .............. 711/133, 711/134, 136, 159, 160
See application file for complete search history.
(56) References Cited

U.S. PATENT DOCUMENTS
`
5,305,389 A   4/1994  Palmer
5,627,990 A   5/1997  Cord et al.
5,778,430 A   7/1998  Ish et al.
6,266,742 B1  7/2001  Challenger et al.
`
`* cited by examiner
Primary Examiner: Hong Kim
(74) Attorney, Agent, or Firm: Chandler & Udall, LLC; Dale F. Regelman
`
(57) ABSTRACT
`
A method is disclosed to manage a data cache. The method provides a data cache comprising a plurality of tracks, where each track comprises one or more segments. The method further maintains a first LRU list comprising one or more first tracks having a low reuse potential, maintains a second LRU list comprising one or more second tracks having a high reuse potential, and sets a target size for the first LRU list. The method then accesses a track, and determines if that accessed track comprises a first track. If the method determines that the accessed track comprises a first track, then the method increases the target size for said first LRU list. Alternatively, if the method determines that the accessed track comprises a second track, then the method decreases the target size for said first LRU list. The method demotes tracks from the first LRU list if the size of that list exceeds the target size; otherwise, the method evicts tracks from the second LRU list.
`
`42 Claims, 6 Drawing Sheets
`
`HPE, Exh. 1007, p. 1
`
`
`
U.S. Patent    Aug. 21, 2007    Sheet 1 of 6    US 7,260,679 B2

[FIG. 1: block diagram showing the components of Applicants' data storage and retrieval system]
`
`
`
`
`
U.S. Patent    Aug. 21, 2007    Sheet 2 of 6    US 7,260,679 B2

FIG. 2
210: PROVIDE DATA CACHE COMPRISING A PLURALITY OF TRACKS
220: MAINTAIN A FIRST LRU LIST COMPRISING FIRST TRACKS HAVING LOW REUSE POTENTIAL
230: ASSOCIATE A SEQUENCE NUMBER WITH EACH OF THE FIRST TRACKS
240: DEFINE A BOTTOM PORTION OF THE FIRST LRU LIST
250: MAINTAIN A SECOND LRU LIST COMPRISING SECOND TRACKS HAVING HIGH REUSE POTENTIAL
260: ASSOCIATE A SEQUENCE NUMBER WITH EACH OF THE SECOND TRACKS
270: DEFINE A BOTTOM PORTION OF THE SECOND LRU LIST
`
`
`
U.S. Patent    Aug. 21, 2007    Sheet 3 of 6    US 7,260,679 B2

FIG. 3
310: SET FIRST LRU LIST TARGET SIZE
320: ESTABLISH A DIRECTION PARAMETER HAVING A VALUE OF 0
325: ACCESS A TRACK RESIDENT IN THE DATA CACHE
330: IS THE ACCESSED TRACK IN THE BOTTOM PORTION OF THE FIRST LRU LIST? (yes: step 340; no: step 350)
340: SET DIRECTION PARAMETER TO +1 (return to step 325)
350: IS THE ACCESSED TRACK IN THE BOTTOM PORTION OF THE SECOND LRU LIST? (yes: step 360; no: step 370)
360: SET DIRECTION PARAMETER TO -1 (return to step 325)
370: LEAVE DIRECTION PARAMETER UNCHANGED (return to step 325)
`
`
`
U.S. Patent    Aug. 21, 2007    Sheet 4 of 6    US 7,260,679 B2

FIG. 4
405: ELECT TO DEMOTE ONE OR MORE TRACKS FROM THE CACHE
410: DETERMINE ACTUAL SIZE OF FIRST LRU LIST
420: IS ACTUAL SIZE OF FIRST LRU LIST GREATER THAN THE FIRST LRU LIST TARGET SIZE? (yes: step 430; no: step 440)
430: DEMOTE TRACK FROM BOTTOM PORTION OF FIRST LRU LIST
440: DEMOTE TRACK FROM BOTTOM PORTION OF SECOND LRU LIST
450: DETERMINE THE DEMOTED TRACK COMPRISES (n) SEGMENTS
460: SET TARGET SIZE ADJUSTMENT EQUAL TO (n) * (DIRECTION PARAMETER)
470: SET FIRST LRU LIST TARGET SIZE EQUAL TO THE SUM OF THE EXISTING TARGET SIZE PLUS THE TARGET SIZE ADJUSTMENT
`
`
`
U.S. Patent    Aug. 21, 2007    Sheet 5 of 6    US 7,260,679 B2

FIG. 5
510: ESTABLISH A FIRST_LIST_SIZE EQUAL TO THE NUMBER OF TRACKS ON THE FIRST LRU LIST
520: ESTABLISH A FIRST_SAMPLING_REGION_SIZE WHICH IS LESS THAN THE MAXIMUM NUMBER OF SEGMENTS THAT THE DATA CACHE CAN HOLD
530: IS THE ACCESSED TRACK A FIRST TRACK? (yes: step 540)
540: SET ACCESSED_FIRSTTRACK_SEQUENCENbr TO EQUAL THE SEQUENCE NUMBER ASSOCIATED WITH THE ACCESSED FIRST TRACK
550: IS (ACCESSED_FIRSTTRACK_SEQUENCENbr - lru_FIRST_SEQUENCENbr) LESS THAN OR EQUAL TO (FIRST_SAMPLING_REGION_SIZE / FIRST_LIST_SIZE) x (mru_FIRST_SEQUENCENbr - lru_FIRST_SEQUENCENbr)? (yes: step 560; no: step 570)
560: THE ACCESSED TRACK IS IN THE BOTTOM PORTION OF THE FIRST LRU LIST
570: THE ACCESSED TRACK IS NOT IN THE BOTTOM PORTION OF THE FIRST LRU LIST
`
`
`
U.S. Patent    Aug. 21, 2007    Sheet 6 of 6    US 7,260,679 B2

FIG. 6
610: ESTABLISH A SECOND_LIST_SIZE EQUAL TO THE NUMBER OF TRACKS ON THE SECOND LRU LIST
620: ESTABLISH A SECOND_SAMPLING_REGION_SIZE WHICH IS LESS THAN THE MAXIMUM NUMBER OF SEGMENTS THAT THE DATA CACHE CAN HOLD
630: IS THE ACCESSED TRACK A SECOND TRACK? (yes: step 640)
640: SET ACCESSED_SECONDTRACK_SEQUENCENbr TO EQUAL THE SEQUENCE NUMBER ASSOCIATED WITH THE ACCESSED SECOND TRACK
650: IS (ACCESSED_SECONDTRACK_SEQUENCENbr - lru_SECOND_SEQUENCENbr) LESS THAN OR EQUAL TO (SECOND_SAMPLING_REGION_SIZE / SECOND_LIST_SIZE) x (mru_SECOND_SEQUENCENbr - lru_SECOND_SEQUENCENbr)? (yes: step 660; no: step 670)
660: THE ACCESSED TRACK IS IN THE BOTTOM PORTION OF THE SECOND LRU LIST
670: THE ACCESSED TRACK IS NOT IN THE BOTTOM PORTION OF THE SECOND LRU LIST
`
`
`
`APPARATUS AND METHOD TO MANAGE A
`DATA CACHE USING A FIRST AND SECOND
`LEAST RECENTLY USED LIST
`
`FIELD OF THE INVENTION
`
This invention relates to an apparatus and method to manage a data cache.

BACKGROUND OF THE INVENTION

Data storage and retrieval systems are used to store information provided by one or more host computer systems. Such data storage and retrieval systems receive requests to write information to one or more secondary storage devices, and requests to retrieve information from those one or more secondary storage devices. Upon receipt of a write request, the system stores information received from a host computer in a data cache. In certain implementations, a copy of that information is also stored in a nonvolatile storage device. Upon receipt of a read request, the system recalls one or more tracks from the one or more secondary storage devices and moves those tracks to the data cache.

Thus, the system is continuously moving information to and from storage devices, and to and from the data cache. One or more device adapters interconnect the data cache and the information storage devices. What is needed is an apparatus and method to manage the tracks residing in the data cache such that tracks having a low reuse potential are preferentially demoted from the cache while tracks having a high reuse potential are preferentially kept in the cache.

SUMMARY OF THE INVENTION

Applicants' invention includes a method to manage a data cache. The method provides a data cache comprising a plurality of tracks, where each track comprises one or more segments. The method further maintains a first LRU list comprising one or more first tracks having a low reuse potential, maintains a second LRU list comprising one or more second tracks having a high reuse potential, and sets a target size for the first LRU list.

The method then accesses a track, and determines if that accessed track comprises a first track. If the method determines that the accessed track comprises a first track, then the method increases the target size for said first LRU list. Alternatively, if the method determines that the accessed track comprises a second track, then the method decreases the target size for said first LRU list.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention will be better understood from a reading of the following detailed description taken in conjunction with the drawings in which like reference designators are used to designate like elements, and in which:

FIG. 1 is a block diagram showing the components of Applicants' data storage and retrieval system;
FIG. 2 is a flow chart summarizing certain initial steps in Applicants' method;
FIG. 3 is a flow chart summarizing certain additional steps in Applicants' method;
FIG. 4 is a flow chart summarizing certain additional steps in Applicants' method;
FIG. 5 is a flow chart summarizing certain additional steps in Applicants' method; and
FIG. 6 is a flow chart summarizing certain additional steps in Applicants' method.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
`
`
The invention will be described as embodied in an information storage and retrieval system which includes two clusters, a plurality of host adapters, a plurality of device adapters, and a data cache. The following description of Applicants' method to manage data in a data cache is not meant, however, to limit Applicants' invention to data processing applications, as the invention herein can be applied to data caching in a wide variety of applications including, without limitation, storage systems, databases, web servers, middleware, processors, file systems, disk drives, RAID controllers, operating systems, and the like.

Referring now to FIG. 1, information storage and retrieval system 100 is capable of communication with host computer 390 via communication link 395. The illustrated embodiment of FIG. 1 shows a single host computer. In other embodiments, Applicants' information storage and retrieval system is capable of communicating with a plurality of host computers.

Host computer 390 comprises a computer system, such as a mainframe, personal computer, workstation, and combinations thereof, including an operating system such as Windows, AIX, Unix, MVS, LINUX, etc. (Windows is a registered trademark of Microsoft Corporation; AIX is a registered trademark and MVS is a trademark of IBM Corporation; UNIX is a registered trademark in the United States and other countries licensed exclusively through The Open Group; LINUX is a registered trademark owned by Linus Torvalds.) In certain embodiments, host computer 390 further includes a storage management program. The storage management program in the host computer 390 may include the functionality of storage management type programs known in the art that manage the transfer of data to a data storage and retrieval system, such as the IBM DFSMS implemented in the IBM MVS operating system.

In certain embodiments, Applicants' information storage and retrieval system 100 includes a plurality of host adapters 102-105, 107-110, 112-115, and 117-120, disposed in four host bays 101, 106, 111, and 116. Each host adapter is connected to both subsystems through one or more Common Platform Interconnect buses 121 and 150 such that each subsystem can handle I/O from any host adapter. Internal buses in each subsystem are connected via a Remote I/O bridge 155/165 between the processor portions 130/140 and I/O portions 160/170, respectively.

In other embodiments, Applicants' information storage and retrieval system includes fewer than 16 host adapters. Regardless of the number of host adapters disposed in any embodiments of Applicants' system, each of those host adapters comprises a shared resource that has equal access to both central processing/cache elements 130 and 140. Each host adapter may comprise one or more Fibre Channel ports, one or more FICON ports, one or more ESCON ports, or one or more SCSI ports.

Processor portion 130 includes processor 132 and cache 134. In certain embodiments, processor portion 130 further includes memory 133. In certain embodiments, memory
`
`
`
device 133 comprises random access memory. In certain embodiments, memory device 133 comprises non-volatile memory.

Processor portion 140 includes processor 142 and cache 144. In certain embodiments, processor portion 140 further includes memory 143. In certain embodiments, memory device 143 comprises random access memory. In certain embodiments, memory device 143 comprises non-volatile memory.

I/O portion 160 comprises a plurality of device adapters, such as device adapters 165, 166, 167, and 168. I/O portion 160 further comprises nonvolatile storage ("NVS") 162 and battery backup 164 for NVS 162.

I/O portion 170 comprises a plurality of device adapters, such as device adapters 175, 176, 177, and 178. I/O portion 170 further comprises NVS 172 and battery backup 174 for NVS 172.

In certain embodiments of Applicants' system, one or more host adapters 101A, processor/cache portion 130, and one or more device adapters 161, are packaged together on a single card disposed in Applicants' information storage and retrieval system. Similarly, in certain embodiments, one or more host adapters 101B, processor/cache portion 140, and one or more device adapters 171, are disposed on another card disposed in Applicants' information storage and retrieval system. In these embodiments, Applicants' system 100 includes two cards interconnected with a plurality of data storage devices.

In the illustrated embodiment of FIG. 1, sixteen data storage devices are organized into two arrays, namely array 180 and array 190. The illustrated embodiment of FIG. 1 shows two storage device arrays. In other embodiments, Applicants' system includes a single storage device array. In still other embodiments, Applicants' information storage and retrieval system includes more than two storage device arrays. Each storage array appears to a host computer as one or more logical devices.

In certain embodiments, arrays 180 and 190 utilize a RAID protocol. In certain embodiments, arrays 180 and 190 comprise what is sometimes called a JBOD array, i.e. "Just a Bunch Of Disks" where the array is not configured according to RAID. As those skilled in the art will appreciate, a RAID (Redundant Array of Independent Disks) rank comprises independent disk drives configured in an array of disk drives to obtain performance, capacity and/or reliability that exceeds that of a single large drive.
Applicants' invention includes a method to manage a data cache, such as data cache 134 (FIG. 1) and/or data cache 144 (FIG. 1). Based upon actual or anticipated host computer requests, Applicants' information storage and retrieval system 100 moves tracks from the data cache to one or more RAID ranks, and from the one or more RAID ranks to the data cache.

A "stage operation" comprises moving one or more tracks from a RAID rank to the cache in response to a host request. For certain read operations, system 100 will "prestage" information, i.e. anticipate a host request.

Applicants' information storage and retrieval system monitors previous access requests, and if more than six I/Os in sequence are detected, then Applicants' method triggers sequential staging. In sequential staging, when about the middle of a staging group is read, the next group starts to be staged, i.e. is "prestaged." This procedure delivers the maximum sequential throughput with no delays waiting for data to be staged from a disk.
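The sequential-staging trigger described above can be sketched as follows. This is an illustrative sketch only, not Applicants' implementation: the class name, the staging-group size, and the track-numbering scheme are all assumptions, while the "more than six sequential I/Os" threshold and the "prestage at about the middle of the group" rule come from the text.

```python
GROUP_SIZE = 8          # tracks per staging group (assumed value)
SEQ_THRESHOLD = 6       # text: more than six I/Os in sequence triggers staging

class SequentialDetector:
    """Hypothetical monitor for the sequential-staging behavior above."""

    def __init__(self):
        self.last_track = None
        self.run_length = 0
        self.sequential = False

    def record_access(self, track):
        """Count consecutive track accesses; enable sequential staging
        once more than SEQ_THRESHOLD I/Os in sequence are seen."""
        if self.last_track is not None and track == self.last_track + 1:
            self.run_length += 1
        else:
            self.run_length = 1
            self.sequential = False
        self.last_track = track
        if self.run_length > SEQ_THRESHOLD:
            self.sequential = True
        return self.sequential

    def should_prestage_next_group(self, track):
        """Prestage the next group when roughly the middle of the
        current staging group is read."""
        return self.sequential and track % GROUP_SIZE == GROUP_SIZE // 2
```

With ten consecutive accesses, the detector reports non-sequential for the first six and sequential thereafter, matching the more-than-six rule.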
Data written to Applicants' data storage and retrieval system by a host computer is first received by a host adapter, such as host adapter 102 (FIG. 1), and is transferred first to NVS, such as NVS 172 (FIG. 1). A copy of that data is held in the host adapter buffer. The host is notified that the I/O operation is complete as soon as the data is in NVS. The host adapter, once the NVS transfer is complete, then transfers the data to the cache. The data remains in the cache and NVS until it is "destaged." In certain embodiments, destaging is triggered by cache and NVS usage thresholds.

A destage operation includes moving tracks from cache to a RAID rank. In a synchronous destaging operation, information is destaged to one or more RAID ranks contemporaneously with transferring that information to the data cache.

In an LRU destage operation, cache space is released according to Least Recently Used algorithms. As those skilled in the art will appreciate, a Least Recently Used algorithm determines when the data tracks residing in the cache were last accessed. In certain embodiments, such an LRU algorithm includes assigning a date stamp to each track indicating when that track was last accessed. Using LRU destaging, tracks having the earliest date stamp are preferentially destaged.
Tracks that have been read sequentially are generally demoted quickly to release the used cache space because sequential data is rarely re-read within a short period of time. When destaging tracks, Applicants' information storage and retrieval system attempts to destage all the tracks that would make up a RAID stripe, thereby minimizing the RAID-related activities in the device adapter.

Tracks that are brought into the cache in response to a request to read those tracks comprise tracks that have a high reuse potential. On the other hand, tracks that are prestaged in anticipation of a host request comprise tracks that have a low reuse potential. In addition, tracks that are staged/prestaged in the cache for one or more copy services comprise tracks that have a low reuse potential.

A publication entitled IBM TotalStorage Enterprise Storage Server Implementing ESS Copy Services with IBM eServer zSeries (hereinafter the "Red Book"), September 2003, describes various copy services, and is hereby incorporated by reference herein. Such copy services include, for example, peer-to-peer remote copy, sometimes referred to as PPRC (Red Book Chapters 2 and 4), Peer-to-Peer Remote Copy Extended Distance (Red Book Chapter 3), Extended Remote Copy (Red Book Chapter 5), Flash Copy (Red Book Chapters 6 and 7), and Concurrent Copy (Red Book Chapter 8).
Applicants' invention includes a method to manage a data cache. Referring now to FIG. 2, in step 210 Applicants' method provides a data cache comprising a plurality of tracks. A "page" or a "segment" comprises 4 kilobytes of data. Applicants' information storage and retrieval system, such as system 100 (FIG. 1), manages data in "tracks." A track comprises a set of 4 KB segments. In certain embodiments, a track may include as many as 16 consecutive segments. At any time, some or all of those 16 segments may be present in the cache.
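The cache geometry just described can be modeled as follows. This is a minimal illustrative sketch, not part of Applicants' disclosure: the class and field names are assumptions, while the 4 KB segment size and the 16-segment maximum per track come directly from the text.

```python
SEGMENT_SIZE = 4 * 1024        # a "page" or "segment" is 4 KB of data
MAX_SEGMENTS_PER_TRACK = 16    # a track may hold up to 16 consecutive segments

class CachedTrack:
    """Hypothetical model of one track; only some of its segments
    may be resident in the cache at any time."""

    def __init__(self, track_id):
        self.track_id = track_id
        self.resident_segments = set()   # indices 0..15 currently in cache

    def stage_segment(self, index):
        """Bring one 4 KB segment of this track into the cache."""
        if not 0 <= index < MAX_SEGMENTS_PER_TRACK:
            raise ValueError("segment index out of range")
        self.resident_segments.add(index)

    @property
    def size_in_segments(self):
        return len(self.resident_segments)

    @property
    def size_in_bytes(self):
        return self.size_in_segments * SEGMENT_SIZE
```

Because residency is tracked per segment, a partially staged track correctly reports a size smaller than a full 64 KB track.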
In step 220, Applicants' method maintains a first LRU list, where that first LRU list includes tracks having low reuse potential, i.e. first tracks. By "tracks having a low reuse potential," Applicants mean tracks prestaged in anticipation of a host request, tracks that are known to be a sequential access, and tracks that are brought into the cache for copy services.
In step 230, Applicants' method associates a first sequence number with each of the first tracks. In certain embodiments, the first sequence number associated with a first track is based upon the later of the time that the first track was brought into the cache, or last accessed from the cache, such that tracks that were recently accessed have a larger sequence number than tracks that were not recently accessed. In certain embodiments, Applicants' method maintains a sequence number, i.e. the currentSequenceNbr, that is incremented every second. In other embodiments of Applicants' method, that currentSequenceNbr is incremented every millisecond.

The first track that was most recently used is assigned the largest first sequence number, mru_first_sequenceNbr, and the first track that was least recently used is assigned the smallest first sequence number, lru_first_sequenceNbr. In certain embodiments, when a first track is placed on the first LRU list, or accessed from the cache, that first track is associated with a sequence number determined by formula (1):
currentSequenceNbr - [mru_first_sequenceNbr - lru_first_sequenceNbr ...]   (1)
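The sequence-number bookkeeping above can be sketched as follows. Note the hedges: formula (1) is cut off in this copy, so the scaling applied to the [mru - lru] span below (OFFSET_FRACTION) is an assumption; the monotonic counter standing in for currentSequenceNbr, and all function names, are likewise illustrative. What is grounded in the text: first tracks are inserted below the MRU end of the first list, and second tracks simply take the current sequence number.

```python
import itertools

# Assumed scaling of [mru_first - lru_first] in truncated formula (1).
OFFSET_FRACTION = 1.0

class SequenceClock:
    """Stand-in for currentSequenceNbr, which the text says is
    incremented every second (or, in other embodiments, millisecond)."""

    def __init__(self):
        self._counter = itertools.count(1)

    def now(self):
        return next(self._counter)

def first_track_sequence_nbr(current, mru_first, lru_first):
    """Visible portion of formula (1): subtract (some fraction of)
    the first list's sequence span from the current sequence number,
    placing the first track below the MRU end of the list."""
    return current - int((mru_first - lru_first) * OFFSET_FRACTION)

def second_track_sequence_nbr(current):
    """Second tracks are associated with the currentSequenceNbr itself."""
    return current
```

The effect is that prestaged (first) tracks age out sooner than demand-read (second) tracks with the same access time.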
In step 240, Applicants' method defines a bottom portion of the first LRU list. In certain embodiments, this bottom portion comprises about two percent (2%) of the number of segments comprising the cache. First tracks not included within the bottom portion of the first LRU list are included within the top portion of the first LRU list. As those skilled in the art will appreciate, each first track in the bottom portion of the first LRU list has a sequence number less than the sequence number associated with each first track not in the bottom portion.

In step 250, Applicants' method maintains a second LRU list comprising tracks that have a high reuse potential, i.e. second tracks. By "tracks that have a high reuse potential," Applicants mean tracks that are not prestaged in anticipation of a host request, tracks that are not known to be a sequential access, and tracks that are not brought into the cache for copy services.

In step 260, Applicants' method associates a second sequence number with each of the second tracks. In certain embodiments, the second sequence number associated with a second track is based upon the later of the time that the second track was brought into the cache, or last accessed from the cache, such that tracks that were recently accessed have a larger sequence number than tracks that were not recently accessed.

The second track that was most recently used is assigned the largest second sequence number, mru_second_sequenceNbr, and the second track that was least recently used is assigned the smallest second sequence number, lru_second_sequenceNbr. In certain embodiments, when a second track is placed on the second LRU list, or accessed from the cache, that second track is associated with a sequence number equal to the currentSequenceNbr.

In step 270, Applicants' method defines a bottom portion of the second LRU list. In certain embodiments, this bottom portion comprises about two percent (2%) of the number of segments comprising the cache. Second tracks not included within the bottom portion of the second LRU list are included within the top portion of the second LRU list. As those skilled in the art will appreciate, each second track in the bottom portion of the second LRU list has a sequence number less than the sequence number associated with each second track not in the bottom portion.

Applicants' method transitions from step 270 to step 310 (FIG. 3). In step 310, Applicants' method sets a first LRU list target size. In certain embodiments, step 310 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1). Step 310 includes determining if the sum of the number of segments comprising the first LRU list and the second LRU list exceeds a predetermined number, i.e. exceeds a fixed_predetermined_number. In certain embodiments, that fixed_predetermined_number is set to equal 90 percent of the maximum number of segments that the cache can hold.

Step 310 further includes determining if any tracks have been demoted from the cache. If Applicants' method determines in step 310 that the aggregate number of segments comprising the first LRU list and the second LRU list exceeds the fixed_predetermined_number, and if Applicants' method further determines that no tracks have been demoted from the cache, then Applicants' method sets the first LRU list target size to equal the number of segments comprising the first LRU list.

As is described in greater detail below, Applicants' method autonomically adjusts this first LRU list target size based upon actual utilization of the tracks in the cache. When a first track in the bottom portion of the first LRU list is accessed from the cache, Applicants' method increases the first LRU list target size. On the other hand, when a second track in the bottom portion of the second LRU list is accessed from the cache, Applicants' method decreases the first LRU list target size.

In step 320, Applicants' method establishes a direction parameter, and sets that direction parameter to 0. Step 320 may be performed any time prior to executing step 340 or step 360. In certain embodiments, step 320 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).

In step 325, a track is accessed from the cache. Applicants' method transitions from step 325 to step 330 wherein the method determines if the accessed track comprises a first track in the bottom portion of the first LRU list. In certain embodiments, step 330 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).

If Applicants' method determines in step 330 that the accessed track of step 325 comprises a first track in the bottom portion of the first LRU list, then the method transitions from step 330 to step 340 wherein the method sets the direction parameter to a value of +1. In certain embodiments, step 340 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1). Applicants' method transitions from step 340 to step 325 and continues as described herein.

If Applicants' method determines in step 330 that the accessed track of step 325 does not comprise a first track in the bottom portion of the first LRU list, then the method transitions from step 330 to step 350 wherein the method determines if the accessed track of step 325 comprises a second track in the bottom portion of the second LRU list. In certain embodiments, step 350 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).

If Applicants' method determines in step 350 that the accessed track of step 325 comprises a second track in the bottom portion of the second LRU list, then the method transitions from step 350 to step 360 wherein the method sets the direction parameter to a value of -1. In certain embodiments, step 360 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).
`
`
`
`
Applicants' method transitions from step 360 to step 325 and continues as described herein.

If Applicants' method determines in step 350 that the accessed track of step 325 does not comprise a second track in the bottom portion of the second LRU list, then the method transitions from step 350 to step 370 wherein the method leaves the direction parameter unchanged. Applicants' method transitions from step 370 to step 325 and continues as described herein.
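The direction-parameter update of FIG. 3, together with the sequence-number test that FIG. 5 uses to decide bottom-portion membership, can be sketched as follows. The variable names mirror the figures; the function signatures and the use of plain integers for sequence numbers are illustrative assumptions.

```python
def in_bottom_portion(accessed_seq, lru_seq, mru_seq,
                      sampling_region_size, list_size):
    """FIG. 5 / FIG. 6 condition: the accessed track is in the bottom
    portion when its distance above the LRU end is no more than
    (sampling_region_size / list_size) of the list's sequence span."""
    if list_size == 0:
        return False
    span = mru_seq - lru_seq
    return (accessed_seq - lru_seq) <= (sampling_region_size / list_size) * span

def update_direction(direction, in_bottom_of_first, in_bottom_of_second):
    """FIG. 3 (steps 330-370): +1 when a bottom-of-first track is hit
    (the first list's target should grow), -1 for bottom-of-second
    (it should shrink), otherwise leave the parameter unchanged."""
    if in_bottom_of_first:
        return +1
    if in_bottom_of_second:
        return -1
    return direction
```

For example, with lru_seq = 10, mru_seq = 110, a sampling region of 2 and a list of 100 tracks, an access at sequence number 12 falls in the bottom portion while one at 13 does not.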
Referring now to FIG. 4, in step 405 Applicants' method elects to demote one or more tracks from the cache. Applicants' method transitions from step 405 to step 410 wherein the method determines the actual size of the first LRU list. In certain embodiments, step 410 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).

Applicants' method transitions from step 410 to step 420 wherein the method determines if the actual size of the first LRU list is greater than the first LRU list target size. In certain embodiments, step 420 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).

If Applicants' method determines in step 420 that the actual size of the first LRU list is greater than the first LRU list target size, then the method transitions from step 420 to step 430 wherein the method demotes one or more first tracks from the bottom portion of the first LRU list. In certain embodiments, step 430 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1). Applicants' method transitions from step 430 to step 450.

If Applicants' method determines in step 420 that the actual size of the first LRU list is not greater than the first LRU list target size, then the method transitions from step 420 to step 440 wherein the method demotes one or more second tracks from the bottom portion of the second LRU list. In certain embodiments, step 440 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).

Applicants' method transitions from step 440 to step 450, wherein the method determines the number (n) of segments comprising the demoted one or more tracks of either step 430 or 440. In certain embodiments, step 450 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).

Applicants' method transitions from step 450 to step 460 wherein the method calculates a first LRU list target size adjustment, wherein that adjustment comprises the multiplication product of (n), i.e. the number of segments comprising the one or more demoted tracks, and the direction parameter set in step 340, or in step 360, or in step 370. In certain embodiments, step 460 is performed by a processor, such as processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).

Applicants' method transitions from step 460 to step 470 wherein the method calculates an adjusted first LRU list target size by adding the first LRU target size adjustment of step 460 to the existing first LRU list target size. In certain embodiments, step 470 is performed by a processor, such as
`
`
processor 132 (FIG. 1), disposed in Applicants' information storage and retrieval system, such as system 100 (FIG. 1).

The following examples are presented to further illustrate to persons skilled in the art how to make and use the invention. These examples are not intended as a limitation, however, upon the scope of the invention, which is defined only by the appended claims.
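The demotion pass of FIG. 4 (steps 405 through 470) can be sketched as follows. This is an illustrative sketch under stated assumptions: the representation of each LRU list as a deque of (track_id, n_segments) pairs ordered LRU-first, and the function name, are not from the patent; the decision rule and the target-size adjustment are.

```python
from collections import deque

def demote_and_adjust(first_lru, second_lru, target_size, direction):
    """One demotion pass. first_lru and second_lru are deques ordered
    LRU-first; each entry is (track_id, n_segments). Returns the
    demoted entry and the adjusted first-list target size."""
    # Step 410: actual size of the first LRU list, in segments.
    actual_first = sum(n for _, n in first_lru)
    # Step 420: compare actual size against the target size.
    if actual_first > target_size:
        demoted = first_lru.popleft()    # step 430: demote from first list
    else:
        demoted = second_lru.popleft()   # step 440: demote from second list
    n = demoted[1]                       # step 450: (n) segments demoted
    adjustment = n * direction           # step 460: (n) * (direction parameter)
    return demoted, target_size + adjustment  # step 470: adjusted target
```

With direction = +1, demoting an oversized first list grows the target toward the observed demand for first tracks; with direction = -1 the target shrinks, shifting cache space toward second tracks.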
`
`EXAMPLE 1
`
As a first example, if Applicants' method determines in step 330 that the track accessed in step 325 comprises a first track in the bottom portion of the first LRU list, then Applicants' method sets the direction parameter to +1 in step 340. If Applicants' method demotes one or more tracks in step 430 or 440, and determines in step 450 that the demoted tracks