(12) Patent Application Publication    (10) Pub. No.: US 2005/0283682 A1
Odawara et al.                         (43) Pub. Date: Dec. 22, 2005
`
`
(54) METHOD FOR DATA PROTECTION IN DISK ARRAY SYSTEMS

(75) Inventors: Hiroaki Odawara, Sunnyvale, CA (US); Yuichi Yagawa, San Jose, CA (US); Shoji Kodama, San Jose, CA (US)

Correspondence Address:
TOWNSEND AND TOWNSEND AND CREW, LLP
TWO EMBARCADERO CENTER
EIGHTH FLOOR
SAN FRANCISCO, CA 94111-3834 (US)

(73) Assignee: Hitachi, Ltd., Tokyo (JP)

(21) Appl. No.: 10/871,128

(22) Filed: Jun. 18, 2004

Publication Classification

(51) Int. Cl. .................................................. G06F 11/00
(52) U.S. Cl. ..................................................... 714/42
`
`
`
`
(57) ABSTRACT
`
A method and a system for implementing the method are disclosed relating to archival storage of information in large numbers of disk units. The reliability of the stored information is checked periodically using data verification operations whose results are saved. These results establish the veracity of the data and enable compliance with various regulatory requirements. The techniques described enable the use of low-cost disk drive technology, yet provide high assurance of data veracity. In a typical system, management information storage is provided in which data entries are associated with each of the disk drives to provide information with respect to the condition of the data on that drive and its last verification. The data verification operations are performed on the data during time periods when I/O accesses are not required.
`
[Representative drawing: host computer, management server with management software, storage controller, and disk unit]
`
`
`
[Figure 1: block diagram of a typical storage system configuration, showing a host computer, interface 110, a storage controller, and disk unit 3]
`
`
`
[Figure 2: management table 50, with an entry for each logical/physical device pair. Fields shown: Data verification period, Current sleeping time, Data verification option, Unit for verification, Last checked address, Idle condition.
Current mode field:
0 = Normal mode (with power saving operation) & the logical/physical devices are in operation
1 = Normal mode (with power saving operation) & the logical/physical devices are out of operation (turned off)
2 = Normal mode (with power saving operation) & the logical/physical devices are in data verification mode
3 = Full operation mode (no power saving operation)
4 = Error handling mode
5-15 = reserved]
`
`
`
[Figure 3: flowchart of normal-mode operation for a mirrored pair. Recoverable elements: system initialization; set current mode to "0"; reset idle status; test "Power off criterion met?"; on "Yes", set current mode to "1", reset current sleeping time, and power off the pair (step 408); set current mode to "2" and complete a data verification unit of operation (step 412); on "No", update sleeping time (step 413)]
`
`
`
[Figure 4: flowchart of a data verification unit of operation. Read data from each drive of the pair and compare them, starting at the last checked address plus the unit for verification, until the whole unit-for-verification entity is checked; test "Error found?"; if no error, store the result to the check log and update the last checked address]
`
`
`
[Figure 5: time sequence diagram for a typical operation on a pair of disk drives in normal mode]
`
`
`
[Figure 6: verification log 51, with an entry for each logical/physical device pair (# N-1, # N, # N+1); each entry stores pairs of "Time Stamp #n" and "Check status at Time #n" (reference numerals 201-205)]
`
`
`
[Figure 7: time sequence diagram for the full operation mode]
`
`
`
[Figure 8: block diagram of a system configuration for another embodiment. Recoverable labels: host computer, management software 30 on a management server, shared memory, a storage controller with management table 50a, a second storage controller 2b with management table 50b, a console, and microprocessors (reference numerals 117, 120b, 123, 130, 131 also visible)]
`
`
`
[Figure 9: flowchart of operations within the storage controller (steps 601-607). Recoverable elements: system initialization; set current mode to "1" and reset current sleeping time; "Receive host I/O request?", on "Yes" complete the requested I/O operation; "Sleeping time = data verification period?", on "Yes" set current mode to "2" and complete a data verification unit of operation, on "No" update sleeping time]
`
`
`
[Figure 10: flowchart of additional operations within the storage controller (steps 701-710). Recoverable elements: system initialization; set current mode to "0"; "Receive host I/O request?", on "Yes" complete the requested I/O operation and reset idle status, on "No" update idle status; "Power off criterion met?", on "Yes" set current mode to "1" and power off the pair; while powered off, "Receive host I/O request?", on "Yes" set current mode to "0", power on the physical pair, and reset idle condition]
`
`
`
`
METHOD FOR DATA PROTECTION IN DISK ARRAY SYSTEMS
`
BACKGROUND OF THE INVENTION

[0001] This invention relates generally to storage systems, and in particular to the long-term reliable storage of verifiable data in such systems.
[0002] Large organizations throughout the world are now involved in millions of transactions every day, involving enormous amounts of text, video, graphical, and audio information that must be categorized, stored, accessed, and transferred. The volume of such information continues to grow rapidly. One technique for managing such massive amounts of information is the use of storage systems. Conventional storage systems can include large numbers of disk drives operating under various control mechanisms to record, back up, and enable reproduction of this enormous amount of data. This rapidly growing amount of data requires most companies to manage the data carefully with their information technology systems.
[0003] Recently, new standards have been promulgated by various governmental entities which require corporations and other entities to maintain data in a reliable manner for specified periods. Such regulations, for example the Sarbanes-Oxley Act and the SEC regulations, require public companies to preserve certain business information which can amount to hundreds of terabytes. As a result, such organizations seek technologies for managing data cost-effectively, by which infrequently accessed data is migrated to lower performance or less expensive storage systems. This factor, combined with the continuing reductions in manufacturing costs for hard disk drives, has resulted in disk drives replacing many magnetic tape and optical disk library functions to provide archival storage. As the cost per bit of data stored in hard disk drives continues to drop, such systems will be increasingly used for archival storage.
[0004] Traditional high performance disk-based storage systems for enterprise information technology are usually equipped with high performance, high reliability hard disk drives. These systems are coupled to servers or other computers using high speed interfaces such as Fibre Channel or SCSI, both of which are known standard protocols for information transfer. On the other hand, personal computers and inexpensive servers often use low performance, lower reliability disk drives with conventional low speed interfaces such as ATA or IDE. The lower reliability and performance of such hard disk drives allow them to be mass-produced at low prices. These low priced disk drives can often be used in storage system products for archival storage. Examples include the Clarion and Centera products from EMC, NearStore from Network Appliance, and BladeStore from StorageTek.
[0005] In archival storage, the archived data is accessed only intermittently, for example on the order of a few times per year. As a result, performance is not an issue in the usual situation, but reliability is still of utmost concern. In addition to the usual internal desires for the retention of information in a reliable manner, the data on these storage systems is often covered by governmental regulations which require that it not be lost or modified. In addition, the low frequency of access to the data allows system designers to design the system in a manner by which the disk drives are turned off when they are not accessed, thereby reducing power consumption. Unfortunately, keeping hard disk drives off for long periods of time can also cause corruption of the recording media and the read/write devices. In many such archival systems, intentional or even accidental modification of the data, for example by manual operator intervention or by software, is blocked using secure authentication mechanisms. To maintain the highest data reliability, any data corruption or sector failure on one of the hard disk drives needs to be recovered, or at least detected and reported.
[0006] Accordingly, there is a need for storage systems using disk arrays by which low reliability hard disk drives can be employed in a reliable way, yet be protected from data corruption or loss of data.
`
BRIEF SUMMARY OF THE INVENTION

[0007] In a typical implementation of the invention, a controller and a group of typically lower reliability hard disk drives are provided. Data stored on these disk drives is periodically retrieved and verified to assure reliability. A mechanism is provided to selectively turn off the hard disk drives when there is no access from the host computer connected to the storage system. The controller periodically turns on various ones of the disk drives when necessary and when it conducts a data verification procedure. The results of the verification procedure are stored in a nonvolatile, secure location so that the information can be accessed by anyone seeking assurance about the integrity of the archived data.
[0008] The archived data can be verified using a number of different techniques. In one technique all of the data in a particular disk drive is read sequentially. If any bad sectors are detected, the sector number is reported to the controller, and the controller then performs a data recovery process. The data recovery process can employ typical RAID mirroring, backup, and error correction technology. Such technology generally allows a backup copy, parity bits, or error correction codes to be read and used to correct bad data. This enables the system controller to detect accidental or spontaneous corruption in a bit-wise manner, normally not detectable by reading the hard disk drive data.
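By way of illustration only, and not as part of the disclosed controller, the following Python sketch shows how such a full sequential scan of a mirrored pair might be organized; the drive objects, the read_block method, the block size, and the recovery hook are hypothetical placeholders.

```python
# Hypothetical sketch of a full sequential verification pass over a mirrored
# pair (RAID 1). Drive objects, block size, and the recovery hook are assumed
# interfaces, not an actual controller API.

BLOCK_SIZE = 512  # bytes per sector (assumption)

def verify_full_scan(primary, mirror, total_sectors, recover):
    """Read every sector from both drives of the pair and compare them.

    `primary` and `mirror` are assumed to expose read_block(sector) -> bytes.
    `recover(sector)` is called for each unreadable or mismatching sector,
    standing in for the RAID recovery process described above.
    """
    bad_sectors = []
    for sector in range(total_sectors):
        try:
            a = primary.read_block(sector)
            b = mirror.read_block(sector)
        except IOError:
            bad_sectors.append(sector)   # report the sector, then try recovery
            recover(sector)
            continue
        if a != b:                       # bit-wise corruption detected
            bad_sectors.append(sector)
            recover(sector)
    return bad_sectors
```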
[0009] Alternatively, the data verification may employ multiple read sequences spread over a longer time period. For example, during a specified verification period, only a predetermined portion of the data on the hard disk drive will be read and verified. Such a verification operation is later repeated for every portion of the disk drive, thereby verifying the entire contents of the disk drive. This procedure is especially useful in newer, large capacity hard disk drives, often having storage capacities of hundreds of gigabytes, because it may take many hours (e.g., ten) to check all of the data in conjunction with the parity groups. It is also desirable to segment the data verification procedure because, from a mechanical viewpoint, it is preferable for ATA-type disk drives to be turned on frequently.
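A minimal sketch of this segmented approach follows, assuming a hypothetical verify_range helper and a table entry that records the last checked address and the unit for verification; these names are illustrative and do not come from the disclosure.

```python
# Sketch of segmented (divided) verification: each invocation checks only one
# "unit for verification" and remembers where it stopped, so the whole drive
# is covered over many verification periods. Names are illustrative only.

def verify_next_unit(entry, total_sectors, verify_range):
    """Verify one unit starting at entry['last_checked'], then advance it.

    `verify_range(start, end)` is an assumed helper that reads and compares
    the mirrored data in [start, end) and returns True when no error is found.
    """
    start = entry["last_checked"]
    end = min(start + entry["unit_for_verification"], total_sectors)
    ok = verify_range(start, end)
    # Wrap around to the beginning once the whole drive has been covered.
    entry["last_checked"] = end if end < total_sectors else 0
    return ok
```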
[0010] The techniques of this invention may be implemented using a variety of approaches. In one implementation a management table is provided for maintaining information about the contents of each disk drive. The management table can include information such as whether the hard disk drive is on, off, or in a data verification mode. The table also may store when the drive was last turned on, and for how long it has been off. Data may also be provided in the management table for indicating whether the data verification procedures are to be performed in a particular manner, and, if they have been performed in a segmented manner, providing an indication of the last sector or other address of the disk drive checked. Records may be maintained of parity groups and logical units, and of the time the disk drive has been on, or the time since data verification began.
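For concreteness, one way to model such a per-pair management table entry is sketched below as a Python dataclass; the field names follow the table of FIG. 2, while the class itself, its types, and its defaults are assumptions introduced for illustration.

```python
# Illustrative model of one management-table entry (cf. FIG. 2). Field names
# mirror the described table; the types and defaults are assumptions.
from dataclasses import dataclass, field
from typing import List

@dataclass
class PairEntry:
    current_mode: int = 0              # see the mode values discussed with FIG. 2
    data_verification_period: int = 0  # sleep duration after which verification triggers
    current_sleeping_time: int = 0     # valid only while powered off (mode 1)
    data_verification_option: int = 1  # 0 = skip, 1 = segmented verification
    unit_for_verification: int = 1     # e.g., cylinders checked per verification pass
    last_checked_address: int = 0      # where the previous pass stopped
    idle_condition: int = 0            # implementation-specific idle measure
    power_off_criterion: int = 0       # maximum idle time before power-down
    check_log: List[dict] = field(default_factory=list)
```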
[0011] Preferably, systems implementing the techniques of this invention provide data verification. In one implementation a microprocessor is provided in a storage controller coupled to the array of disks. The microprocessor performs the data verification, for example, by doing a comparison operation between each drive in a mirrored pair. Passes or failures of the comparison are marked as such. If desired, special purpose hardware for parity checking or error correction codes may also be included. It is also desirable to provide control features for the data verification operations, for example changing the settings or procedures for such operations. Such setting changes can be implemented using commands from a service processor, or commands issued from a terminal or keyboard by a maintenance engineer. Of course, commands may also be issued through a network coupled to the storage system.
[0012] It is also advantageous in systems implementing the techniques of this invention to store the data verification logs in a manner that controls access to them. As such, they may be encrypted or secured using a variety of known techniques. Such an approach helps assure that an audit of the system will demonstrate the integrity of the archived data.
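One of many known ways to secure such a log is to authenticate each entry with a keyed hash. The short sketch below, using Python's standard hmac module, is offered purely as an example of that general idea and is not the mechanism described or claimed here; the key handling and record layout are assumptions.

```python
# Example only: tamper-evident log entries using an HMAC over each record.
# Key management and record layout are assumptions, not part of this disclosure.
import hashlib
import hmac
import json
import time

SECRET_KEY = b"replace-with-a-protected-key"   # assumed to be held in secure storage

def make_log_entry(pair_id, status):
    record = {"pair": pair_id, "time": int(time.time()), "status": status}
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "hmac": tag}

def entry_is_authentic(entry):
    payload = json.dumps(entry["record"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, entry["hmac"])
```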
[0013] In other implementations of the invention, it is desirable to have a special mode of storage system operation in which all the disk drives are maintained with power on. When an audit of the archived data is performed, the search can then be done more quickly.
[0014] In a preferred embodiment for a storage system having a plurality of disk drives, each with stored data, a system for verifying the integrity of the stored data includes a management information storage which includes data entries therein associated with the disk drives and which indicate whether the data stored on the disk drives has been verified. The verification operations themselves are typically performed by a processor, usually a microprocessor, situated in the storage controller managing the operation of the disk array.
`
BRIEF DESCRIPTION OF THE DRAWINGS

[0015] FIG. 1 is a block diagram of a storage system illustrating a typical implementation of one embodiment of the invention;

[0016] FIG. 2 illustrates a management table for use in conjunction with FIG. 1;

[0017] FIG. 3 is a flowchart illustrating a first portion of the operation of the system shown in FIG. 1;

[0018] FIG. 4 is a flowchart illustrating additional operations for the system shown in FIG. 1;

[0019] FIG. 5 illustrates the time sequence for a typical operation on a pair of disk drives;

[0020] FIG. 6 illustrates a typical verification log;

[0021] FIG. 7 illustrates a typical time sequence for another operation mode;

[0022] FIG. 8 is a block diagram illustrating a system configuration for another embodiment of the invention;

[0023] FIG. 9 is a flowchart illustrating operations within the storage controller; and

[0024] FIG. 10 illustrates additional operations within the storage controller.
`
DETAILED DESCRIPTION OF THE INVENTION

[0025] FIG. 1 is a block diagram of a typical system configuration for a first embodiment of this invention. FIG. 1 illustrates the basic components of a typical system, including a host computer 1 coupled through a storage controller 2 to a disk unit 3. An interface 110, 117 couples the host computer to the storage controller, while another interface 130, 131, 140, 141 couples the storage controller to the disk unit. Preferably the disk unit will include a large number of disk drives, as represented by the cylindrical shapes in FIG. 1.
[0026] The storage controller 2 includes shared memory 4, a service processor 5, and processors 10 . . . 17. The processors 10 . . . 17 are preferably microprocessors, and are coupled to local memories 20 . . . 27 which store the program and/or data used by the microprocessors 10 . . . 17. In some implementations the local memory may be implemented as a ROM on the same chip as the microprocessor circuitry. The shared memory 4 is shared among all of the microprocessors via a signal line 120. (For simplicity in the diagram, the interconnections among components shown in FIG. 1 are illustrated as single lines. In an actual implementation, however, these single lines will usually be implemented as a bus with a plurality of lines for address, data, control, and other signals.)
[0027] Shared memory 4 includes a management table 50 which is described below. The microprocessors 10 . . . 17 are also connected, preferably using a local area network 121, to a service processor 5 which handles various operations for maintenance purposes in the system. This service processor is typically connected via interconnection 123 to a management server 7 which contains management software 30 for controlling the operations of the service processor 5. In addition, a console 6 is coupled to processor 5 via line 122 to enable manual operations to be performed on SVP 5.
[0028] The disk unit 3 includes many units for storing information. These are preferably hard disk drives or other well known storage apparatus. In the case of hard disk drives, the drives 60-67, 70-77, 80-87 and 90-97 are all interconnected via buses 130, 131, 140 and 141 to the microprocessors 10-17. In the typical implementation the disk drives are paired to provide RAID 1 functionality. For example, drives 80 and 90 provide two disk drives in a mirrored pair. The number of disk drives, microprocessors and particular RAID or other redundancy techniques selected can be altered for different implementations of the storage systems shown in FIG. 1.
[0029] The system illustrated in FIG. 1 has two major modes of operation: a "normal" mode and a "full operation" mode. The particular mode of operation is specified by a field in management table 50 (as shown in FIG. 2). Depending upon the particular implementation, the mode chosen may be based on mirrored pairs or on other physical/logical configurations, for example, a RAID 5 parity group, etc. In the normal mode of operation, the hard disk drives in a mirrored pair are off, and are turned on for data verification operations or input/output (I/O) operations. This form of operation will be described in detail below. In the full operation mode, the mirrored pair is always running and is never turned off. This mode of operation is also described in detail below.
[0030] Setting and changing the modes of operation may be implemented in different ways. In one implementation an operator uses console 6 to set or change mode settings by specifying the identification of the disk drive pair and the desired mode. In another implementation management software 30 sets or changes the mode settings using an application programming interface (API) with the SVP 5. In either case the specified mode setting is handled by the SVP 5 and communicated to the appropriate microprocessor 10-17, with the management table 50 also being updated at that time, usually by the microprocessor 10-17.
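Purely as an illustration of this control path, the following sketch shows a mode-setting request flowing from management software or a console to a service-processor-like object that updates the management table; the class and method names are hypothetical and are not drawn from this disclosure.

```python
# Hypothetical sketch of the mode-setting control path (console or management
# software -> service processor -> management table update). Not an actual API.

class ServiceProcessor:
    def __init__(self, management_table):
        self.table = management_table      # dict: pair_id -> per-pair entry dict

    def set_mode(self, pair_id, mode):
        """Validate and apply a mode change requested for one disk-drive pair."""
        if mode not in range(0, 5):        # modes 0-4 are defined; 5-15 are reserved
            raise ValueError(f"unsupported mode {mode}")
        entry = self.table[pair_id]
        entry["current_mode"] = mode       # in the real system a microprocessor
        return entry                       # would also act on the new mode

# Example use, e.g. from management software or a console command handler:
table = {100: {"current_mode": 0}}
svp = ServiceProcessor(table)
svp.set_mode(100, 3)                       # switch pair 100 to full operation mode
```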
[0031] FIG. 2 is a diagram illustrating a typical implementation of the management table 50 shown in FIG. 1. As mentioned above, each mirrored pair of disk drives in FIG. 1 has an entry in the table, and those entries have a common format. For example, the entry 52 for the Nth one of the pairs 100 typically includes the information shown within dashed line 52. The current mode field 53 identifies the current operational mode of the corresponding pair. Typical contents in a preferred embodiment for the "current mode" register or table entry are shown in the lower portion of FIG. 2. For example, a "0" in field 53 indicates that this pair is in the normal mode of operation and is implementing a power saving feature. In a similar manner, a "1" in that field indicates that the mirrored pair is in the normal mode of operation, but is turned off. A "2" indicates the normal mode of operation with a data checking or verification operation in progress; a "3" indicates full operational mode with no power saving implemented. A "4" indicates an error-handling mode. Of course, additional modes can be specified using additional data if desired.
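The mode values described above could be captured, for example, by the small Python enumeration below; the enumeration is an illustrative convenience only, not part of the disclosed table.

```python
# Illustrative encoding of the "current mode" field values described for FIG. 2.
from enum import IntEnum

class CurrentMode(IntEnum):
    NORMAL_RUNNING = 0        # normal mode (power saving), devices in operation
    NORMAL_POWERED_OFF = 1    # normal mode (power saving), devices turned off
    NORMAL_VERIFYING = 2      # normal mode (power saving), data verification running
    FULL_OPERATION = 3        # full operation mode, no power saving
    ERROR_HANDLING = 4        # error handling mode
    # values 5-15 are reserved in the described table
```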
[0032] Field 54 in the management table 50 shown in FIG. 2 identifies the time period after which data verification or checking is to be triggered. The "current sleeping time" field 55 holds the duration since the corresponding pair was powered off. (This field is valid only when the current mode field 53 is "1.") The "data verification option" field specifies whether data verification operations are to be performed. Preferably, a "0" in field 56 means data verification will not be performed, while a "1" means that data verification will be performed in an intermittent sequence for the corresponding disk pair. The "divided sequence" indicates the portion of the disk drive upon which data is to be verified during a given operation, for example, a cylinder. The specific unit of verification is encoded within field 57. In field 57 a "0" can be used to designate a logical cylinder, and a "1" may be used to designate eight cylinders. The specific units will depend upon the particular implementation chosen.
[0033] The "last checked address" field 58 is used to identify the address of the portion of the disk pair that was checked in the latest data verification operation. Each time a data verification operation is performed, this field is updated. The "idle condition" field 59a identifies the status of the pair while it is idle; in other words, it designates whether the pair is operating in normal mode but not processing I/O requests. As above, the particular usage of this field will depend on the particular implementation. In the preferred embodiment the final field of the management table 50, "power off criterion" 59b, shows the criterion by which the corresponding pair will be determined to be powered off. Although the usage of this field is dependent upon the particular implementation, typically it will be a measure of the maximum duration of idle time before power down is performed.
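As a small illustrative example, and assuming for simplicity that both the idle condition and the power off criterion are tracked as seconds of idle time, the criterion check might look as follows:

```python
# Illustrative check of the power-off criterion against the tracked idle time.
# Treating both fields as seconds of idle time is an assumption of this sketch.

def power_off_criterion_met(entry, idle_seconds):
    """Return True when the pair has been idle long enough to power it down."""
    entry["idle_condition"] = idle_seconds
    return idle_seconds >= entry["power_off_criterion"]

# Example: with a 600-second criterion, 750 idle seconds triggers power-down.
entry = {"idle_condition": 0, "power_off_criterion": 600}
assert power_off_criterion_met(entry, 750) is True
```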
[0034] FIG. 3 is a flowchart illustrating a preferred method of operation of the system depicted in FIG. 1. This flowchart illustrates the operation of a mirrored pair 100 in normal mode. After the system is turned on, it is initialized at step 401, and at that time or shortly thereafter, the current mode field 53 is set to "0" at step 402 by microprocessor 17. The idle status field 59a is also reset, as shown by step 403. The system then moves to step 404 where it awaits host I/O requests. If such a request is received, microprocessor 17 processes the request with disk drive pair 100, as shown at step 407. When the processing is completed, system status reverts back to step 403.
[0035] If no I/O request is received, then the processor 17 updates the idle status 59a, as shown by step 405, and checks whether the current status 59a conforms with the criterion 59b at step 406. If the power off criterion is not met, the idling process of steps 404, 405, and 406 continues to repeat. When the power off criterion 59b is met, the current mode field 53 is set to "1", the current sleeping time is reset, and the pair is turned off, as shown by step 408.
[0036] Events that occur when I/O requests are made to disk drive pairs which are off are shown in the lower portion of FIG. 3. In this situation an I/O request from host computer 1 to the pair 100 is detected at step 409. Upon detection the processor 17 sets the current mode field to "0", has the pair turned on, and resets the idle status. Operation then transitions to step 407 to complete the I/O operation.
[0037] In the usual case, no I/O operation will be received from the host, and operation will transition to step 410. At this step processor 17 checks whether the sleeping time has reached the preset data verification period. If the result is "no," then the processor updates the sleeping time 55 and repeats the iteration of steps 409 and 410. On the other hand, if the answer is "yes," then the processor sets the current mode to "2" and proceeds with the data verification process 412. The details of the verification process are described below.
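The normal-mode control loop just described (steps 402 through 412 of FIG. 3) could be sketched roughly as follows; the helper operations (receive_host_io, process_io, power_off_pair, power_on_pair, run_verification_unit) are hypothetical placeholders for controller functions, and the unit time steps are a simplification.

```python
# Rough sketch of the normal-mode loop of FIG. 3 for one mirrored pair.
# `entry` is a dict with the management-table fields; `ops` supplies the
# assumed controller operations named in the lead-in.

def normal_mode_loop(entry, ops):
    entry["current_mode"] = 0                     # step 402
    entry["idle_condition"] = 0                   # step 403
    while True:
        if entry["current_mode"] == 0:            # pair powered on, awaiting I/O
            request = ops.receive_host_io()       # step 404
            if request is not None:
                ops.process_io(request)           # step 407
                entry["idle_condition"] = 0       # back to step 403
            else:
                entry["idle_condition"] += 1      # step 405 (one unit of idle time)
                if entry["idle_condition"] >= entry["power_off_criterion"]:   # step 406
                    entry["current_mode"] = 1     # step 408
                    entry["current_sleeping_time"] = 0
                    ops.power_off_pair()
        else:                                     # pair powered off (mode 1)
            request = ops.receive_host_io()       # step 409
            if request is not None:
                entry["current_mode"] = 0
                ops.power_on_pair()
                entry["idle_condition"] = 0
                ops.process_io(request)           # step 407
            elif entry["current_sleeping_time"] >= entry["data_verification_period"]:  # 410
                entry["current_mode"] = 2         # step 412
                ops.power_on_pair()
                ops.run_verification_unit(entry)
                entry["current_mode"] = 0         # return to step 402/403
                entry["idle_condition"] = 0
            else:
                entry["current_sleeping_time"] += 1   # step 411/413: update sleeping time
```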
[0038] FIG. 4 is a flowchart illustrating in detail the operations carried out at step 412 in FIG. 3. As shown at step 501, after the current mode field 53 is set to "2," the pair is turned on and the data verification option field 56 is examined at step 502. If field 56 is "0," no action is taken and the process repeats from step 402 (as shown in step 503). If the data verification field detected at step 502 is not "0," then field 57 is fetched. As shown at steps 504 and 505, if the field is "1," then the "unit for verification" is retrieved; if the field is "2," then the "last checked address" field 58 is reset, as shown at step 505. Control then moves to step 506. In each of these cases, the processor 17 repeatedly reads the data from each of the pair of drives 100 and compares them with each other, as shown at step 506, until the process is complete. Completion of the process will depend upon the "unit for verification" field 57 and the "last checked address" field 58. If any error is detected at step 507, the processor sets the "current mode" field 53 to "4" and starts an error routine at step 508. On the other hand, if no errors are detected, then at step 509 the processor updates the last-checked address field 58 and stores the results in the check log 51 (see FIG. 1). The verification log is shown in FIG. 6 and is discussed below. At step 510 a determination is made as to whether all units have been checked. If they have, then the last-checked address field 58 is reset, as shown at step 511, and the process returns to step 402 in FIG. 3. If not all of the units have been checked, process flow moves to step 512 where a determination is made of whether other units need to be checked. In performing this determination, the data verification option field is checked, and if it is found to be "1," the process is repeated from step 506. If that field is "2," the process is repeated beginning at step 402 in FIG. 3.
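A condensed sketch of this per-unit verification step is given below; the read_and_compare helper, the log object, and the field names are assumptions introduced for illustration and are not the claimed procedure.

```python
# Sketch of the per-unit data verification of FIG. 4 (roughly steps 502-511).
# `read_and_compare(start, length)` is an assumed helper returning True when
# the two drives of the pair match over that range; `log` is any list-like log.

def verify_unit_of_operation(entry, total_sectors, read_and_compare, log):
    if entry["data_verification_option"] == 0:      # steps 502/503: verification disabled
        return "skipped"
    start = entry["last_checked_address"]
    length = entry["unit_for_verification"]
    ok = read_and_compare(start, length)            # step 506: read both drives, compare
    if not ok:                                      # steps 507/508: enter error handling
        entry["current_mode"] = 4
        return "error"
    entry["last_checked_address"] = start + length  # step 509: advance progress pointer
    log.append({"pair": entry.get("pair_id"),
                "last_checked": entry["last_checked_address"],
                "status": "ok"})
    if entry["last_checked_address"] >= total_sectors:   # steps 510/511: wrap around
        entry["last_checked_address"] = 0
    return "ok"
```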
[0039] If, while the data verification process shown in block 412 is being performed, a new I/O request is received from the host computer, the processing of this I/O request is given priority. The data verification operations are then performed when the processor 17 and/or the mirrored pair 100 are not involved in host I/O processing.
[0040] FIG. 5 is a diagram illustrating the time sequence for a typical operation on a mirrored pair in the normal mode. Time is indicated as passing in the diagram as operations move from left to right. Once a data verification operation 300 is complete, there may be an idle period 321. After a certain period of time, when the power off criterion is satisfied, the mirrored pair will be turned off until the data verification period 310 has elapsed. After this period has passed, the mirrored pair is turned on, and data verification 301 for the next unit is started. As before, once this process is complete, there may be an idle period 322 followed by a sleep 311.
[0041] If an I/O request from the host computer to the mirrored pair is received before the data verification period expires, for example as shown at 302, then to avoid delay the mirrored pair is powered on and the I/O request processed. Once this is complete, another idle period 323 begins, followed by an inactive period 312. As before, if no I/O request occurs during data verification period 312, the process for the next unit 303 is then performed, again followed by an idle period 324 and a sleep period 313. As illustrated near the right-hand side of the diagram, if an I/O request is received during the data verification operations, the data verification operations are performed in the background. This requires a longer period than data verification operations performed in the foreground, such as operations 300, 301 and 303.
[0042] FIG. 6 is a diagram illustrating the log 51 originally depicted in FIG. 1. The log is typically maintained within the service processor 5 and stores the results of the data verification operations. In the depicted embodiment, log 51 has entries corresponding to the verification results for each of the corresponding mirrored pairs. The particular format, of course, will depend upon the particular implementation, and formats other than the one depicted in FIG. 6 can readily be employed. The implementation shown in FIG. 6 is a simple log format in which results for all pairs are stored in chronological order, together with a time stamp, irrespective of the ID of the particular pair. In this format, it is necessary that the pair ID be associated with the verification results.
[0043] In a typical example such as entry 200, pairs of data "time stamp" and "check status" are stored as illustrated. Once the log is filled, any desired algorithm may be used to replace entries to store new information, for example, by deleting the oldest log entry present in the register. For implementations in which verification is divided, whether based on cylinders, sectors, or some other basis, the "check status" field will typically include the "last checked address" 58 in addition to the results.
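The chronological, oldest-entry-replacement behavior described here maps naturally onto a bounded double-ended queue, as in the short illustrative sketch below; the capacity and the field names are assumptions.

```python
# Illustrative bounded verification log: newest entries displace the oldest
# once capacity is reached, matching the "delete the oldest entry" policy.
from collections import deque
import time

class VerificationLog:
    def __init__(self, capacity=1024):             # capacity is an assumption
        self.entries = deque(maxlen=capacity)

    def record(self, pair_id, check_status, last_checked_address=None):
        self.entries.append({
            "time_stamp": int(time.time()),
            "pair_id": pair_id,                     # needed because entries are chronological
            "check_status": check_status,
            "last_checked_address": last_checked_address,  # used with divided verification
        })

log = VerificationLog()
log.record(pair_id=100, check_status="ok", last_checked_address=4096)
```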
[0044] FIG. 7 is a timing diagram similar to FIG. 5. FIG. 7, however, illustrates the system in full operation mode. The full operation mode, as mentioned above, is most useful when quick searching over a large amount of archived data is required. The full operation mode avoids power on and power off sequences, which require substantial time and have a significant impact on disk performance and access time. If the system is employed to archive information for the purposes of regulatory compliance, auditing of the stored records is a typical situation in which full operation mode is useful. As shown by FIG. 7, the sleep mode is eliminated. In addition, the figure illustrates typically large amounts of I/O operations. In the illustrated situation I/O operations 330, 332, 334 and 336 are processed with idle periods 331, 333 and 335 intervening. Note that no data verification operations are performed in this mode.
[0045] FIG. 8 is a block diagram illustrating another embodiment of the data protection system for disk arrays. The primary difference between this configuration and the configuration depicted in FIG. 1 is that the storage controller 2a is connected via line 118 to another storage controller 2b. (In the implementation of FIG. 1, the storage controller was connected directly to the disk units without an intervening storage controller.) The advantage of the configuration shown in FIG. 8 compared to that of FIG. 1 is that the system features which are diff