UNIFIED PATENTS EXHIBIT 1004
PAGE 1

US006701410B2

(12) United States Patent
Matsunami et al.

(10) Patent No.: US 6,701,410 B2
(45) Date of Patent: Mar. 2, 2004

(54) STORAGE SYSTEM INCLUDING A SWITCH

(75) Inventors: Naoto Matsunami, Sagamihara (JP); Takashi Oeda, Sagamihara (JP); Akira Yamamoto, Sagamihara (JP); Yasuyuki Mimatsu, Fujisawa (JP); Masahiko Sato, Odawara (JP)

(73) Assignee: Hitachi, Ltd., Tokyo (JP)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.

(21) Appl. No.: 10/095,578

(22) Filed: Mar. 13, 2002

(65) Prior Publication Data
US 2002/0095549 A1, Jul. 18, 2002

Related U.S. Application Data

(63) Continuation of application No. 09/468,327, filed on Dec. 21, 1999.

(30) Foreign Application Priority Data
Dec. 22, 1998 (JP) ........ 10-364079

(51) Int. Cl.7 ........ G06F 12/00
(52) U.S. Cl. ........ 711/114; 711/130; 711/161; 711/162
(58) Field of Search ........ 711/114, 130, 161, 162; 710/316, 317, 104, 8, 9, 10

(56) References Cited

U.S. PATENT DOCUMENTS

5,140,592 A    8/1992    Idleman et al. ........ 714/5
5,237,658 A    8/1993    Walker et al. ........ 395/200
5,423,046 A    6/1995    Nunnelley et al. ........ 395/750

(List continued on next page.)

FOREIGN PATENT DOCUMENTS

EP    980 041      2/2000    ........ G06F/11/20
JP    08/328760   12/1996    ........ G06F/03/06
JP    09/198308    7/1997    ........ G06F/12/08
JP    06-309186   11/1999    ........ G06F/11/20

OTHER PUBLICATIONS

Chiang et al., "Implementation of STARNET: A WDM Computer Communications Network," IEEE, pp. 824-839, Jun. 1996.
D. Patterson et al., "A Case for Redundant Arrays of Inexpensive Disks (RAID)," Proc. ACM SIGMOD, Jun. 1988, pp. 109-116.
"Serial SCSI Finally Arrives on the Market," NIKKEI ELECTRONICS, No. 639, 1995, p. 79.

* cited by examiner

Primary Examiner—Matthew Kim
Assistant Examiner—Stephen Elmore
(74) Attorney, Agent, or Firm—Mattingly, Stanger & Malur, P.C.

(57) ABSTRACT

A disk storage system containing a storage device having a record medium for holding the data; a plurality of storage sub-systems having a controller for controlling the storage device; a first interface node coupled to a computer using the data stored in the plurality of storage sub-systems; a plurality of second interface nodes connected to the storage sub-systems; and a switch connecting the first interface node and the plurality of second interface nodes to perform frame transfer between them based on node address information added to the frame. The first interface node has a configuration table to store structural information for the memory storage system and, in response to the frame sent from the computer, analyzes the applicable frame, converts information relating to the transfer destination of that frame based on structural information held in the configuration table, and transfers that frame to the switch.

29 Claims, 22 Drawing Sheets

[FIG. 1 thumbnail: diskarray subsets #0 through #3, diskarray configuration manager, management console]
`

US 6,701,410 B2
Page 2

U.S. PATENT DOCUMENTS

5,457,703 A   10/1995    Kakuta et al. ........ 714/766
5,574,950 A   11/1996    Hathorn et al. ........ 710/8
5,581,735 A   12/1996    Kajitani et al. ........ 711/169
5,606,359 A    2/1997    Yonden et al. ........ 725/8
5,729,763 A    5/1998    Leshem ........ 710/38
5,752,256 A    5/1998    Fujii et al. ........ 711/114
5,835,694 A * 11/1998    Hodges ........ 711/114
5,974,503 A   10/1999    Venkatesh et al. ........ 711/114
6,098,119 A *  8/2000    Surugucchi et al. ........ 710/10
6,105,122 A *  8/2000    Muller et al. ........ 345/472
6,138,176 A   10/2000    McDonald et al. ........ 710/6
6,148,349 A * 11/2000    Chow et al. ........ 709/214
6,173,374 B1 * 1/2001    Heil et al. ........ 711/148
6,247,077 B1 * 6/2001    Muller et al. ........ 710/74
6,253,283 B1   6/2001    Yamamoto ........ 711/114
6,263,374 B1   7/2001    Olnowich et al. ........ 709/253
6,289,376 B1   9/2001    Taylor et al. ........ 709/219
6,493,750 B1  12/2002    Mathew et al. ........ 709/220

* cited by examiner
U.S. Patent          Mar. 2, 2004          Sheet 1 of 22          US 6,701,410 B2

FIG. 1 [block diagram: diskarray subsets #0 through #3, diskarray configuration manager, management console]

U.S. Patent          Mar. 2, 2004          Sheet 2 of 22          US 6,701,410 B2

FIG. 2 [block diagram of a diskarray subset: diskarray I/F, cache memory/shared memory, disk I/F, diskarray subset configuration manager, diskarray system configuration manager 70]

U.S. Patent          Mar. 2, 2004          Sheet 3 of 22          US 6,701,410 B2

FIG. 3 [block diagram of the diskarray switch]

U.S. Patent          Mar. 2, 2004          Sheet 4 of 22          US 6,701,410 B2

FIG. 4 [block diagram of the crossbar switch 201: switching ports (SWP) 2010]

FIG. 5 [block diagram of the host I/F node 203: IC (Interface Controller), SC (Switching Controller), ET (Exchange Table) 2026, DCT (Diskarray Config. Table) 2027, SPG (Switching Packet Generator) 2024, SP (Searching Processor) 2021, crossbar switch 201]

U.S. Patent          Mar. 2, 2004          Sheet 5 of 22          US 6,701,410 B2

FIG. 6A [diskarray system configuration table]

U.S. Patent          Mar. 2, 2004          Sheet 6 of 22          US 6,701,410 B2

FIG. 6B [diskarray subset configuration tables (Subset #0 through Subset #3): RAID Group Configuration Table (stripe size, disks) and LU Configuration Table (RAID group, alternate port)]

U.S. Patent          Mar. 2, 2004          Sheet 7 of 22          US 6,701,410 B2

FIG. 7 [fiber channel frame 40: SOF 400 (4 bytes), Frame Header 401 (24 bytes), Frame Payload 402 (0-2112 bytes), CRC 403 (4 bytes), EOF 404 (4 bytes)]

FIG. 8 [frame header 401 fields, including SEQ_ID, DF_CTL, SEQ_CNT]

FIG. 9 [frame payload 402: LUN (High), LUN (Low), CNTL, CDB (word0 through word3), Data Length]
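The frame layout in FIG. 7 can be sketched as a simple splitter. The region sizes (SOF 4 bytes, header 24 bytes, payload 0 to 2112 bytes, CRC 4 bytes, EOF 4 bytes) come from the figure; the function name and returned dictionary layout are illustrative assumptions, not part of the patent:

```python
# Hypothetical sketch of the FIG. 7 frame regions.
SOF_LEN, HEADER_LEN, CRC_LEN, EOF_LEN = 4, 24, 4, 4
MAX_PAYLOAD = 2112  # payload is 0-2112 bytes per FIG. 7

def split_frame(raw: bytes) -> dict:
    """Split a raw fiber-channel frame into the FIG. 7 regions."""
    overhead = SOF_LEN + HEADER_LEN + CRC_LEN + EOF_LEN
    if not overhead <= len(raw) <= overhead + MAX_PAYLOAD:
        raise ValueError("frame length out of range")
    return {
        "sof": raw[:SOF_LEN],
        "header": raw[SOF_LEN:SOF_LEN + HEADER_LEN],
        "payload": raw[SOF_LEN + HEADER_LEN:len(raw) - CRC_LEN - EOF_LEN],
        "crc": raw[len(raw) - CRC_LEN - EOF_LEN:len(raw) - EOF_LEN],
        "eof": raw[len(raw) - EOF_LEN:],
    }
```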

U.S. Patent          Mar. 2, 2004          Sheet 8 of 22          US 6,701,410 B2

FIG. 10 [frame sequence during a read: Host, host I/F, diskarray switch, diskarray I/F, diskarray subset; FCP_CMD, FCP_XFER_RDY, FCP_DATA, FCP_RSP]

FIG. 11 [mapping of a read request from Host #2 across diskarray subsets #0 through #3 (n0 + n1 + n2 + n3) down to Disk0 through Disk3]

U.S. Patent          Mar. 2, 2004          Sheet 9 of 22          US 6,701,410 B2

FIG. 12 [S packet 60: expansion header 601 (transfer original node No., transfer destination node No., transfer length) followed by the frame 40]
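The S packet of FIG. 12 (an expansion header carrying the transfer original node No., transfer destination node No. and transfer length, prepended to the frame) can be sketched as follows; the 2-byte field widths are an assumption, since the text does not give them:

```python
import struct

# Hypothetical encoding of the FIG. 12 expansion header 601.
HDR = struct.Struct(">HHH")  # source node, destination node, transfer length

def wrap(frame: bytes, src: int, dst: int) -> bytes:
    """SPG side: add the expansion header to a frame."""
    return HDR.pack(src, dst, len(frame)) + frame

def unwrap(spacket: bytes) -> tuple:
    """Receiving node: remove the expansion header and recreate the frame."""
    src, dst, length = HDR.unpack_from(spacket)
    return src, dst, spacket[HDR.size:HDR.size + length]
```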

U.S. Patent          Mar. 2, 2004          Sheet 10 of 22          US 6,701,410 B2

FIG. 13A [command frame receive processing]
20001: IC receives FCP_CMD from host.
20002: SC stores FCP_CMD in FB. Executes CRC check.
20003: SC analyzes Frame Header.
SP registers Exchange information in ET.
20005: SC analyzes Frame Payload (LUN, CDB).
20006: SP searches DCT, acquires matching diskarray subset No. and LUN, calculates LBA, and reports to SC.
20007: SC converts the Frame Header and Frame Payload based on conversion information acquired from SP.
20008: SPG generates an SPacket by adding an expansion header to the converted command frame and sends it to the crossbar switch.

FIG. 13B [data transfer setup end frame / data frame receive processing]
20011: SPG receives SPacket, removes expansion header and recreates frame.
20012: SP searches ET and acquires Exchange information.
(decision) Is it FCP_XFER_RDY?
SP updates Exchange information (RX_ID).
20015: SC converts Frame Header based on conversion information acquired from SP.
20016: IC sends FCP_XFER_RDY or FCP_DATA to host.

FIG. 13C [status frame receive processing]
20021: SPG receives SPacket, removes expansion header and recreates frame.
20022: SP searches ET and acquires Exchange information.
20023: SC converts Frame Header based on conversion information acquired from SP.
20024: IC sends FCP_RSP to host.
20025: SP deletes the Exchange information of this command frame from the ET.
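The FIG. 13A command-frame path (register Exchange information, search the DCT, convert the frame, forward it toward the crossbar switch) can be sketched as below; the table layouts and key names are hypothetical stand-ins for the DCT and ET, not structures from the patent:

```python
# Hypothetical DCT: host LUN -> (diskarray subset No., subset LUN, LBA offset)
DCT = {
    0: (0, 0, 0),
    1: (1, 0, 10_000),
}
ET = {}  # Exchange Table: exchange id -> conversion info

def process_command_frame(exchange_id: int, host_lun: int, host_lba: int) -> dict:
    """Sketch of steps 20004-20008 for one FCP_CMD frame."""
    subset_no, subset_lun, base = DCT[host_lun]              # SP searches DCT
    ET[exchange_id] = {"subset": subset_no, "host_lun": host_lun}  # register in ET
    converted = {"lun": subset_lun, "lba": host_lba - base}  # SC converts frame
    return {"dst_node": subset_no, "frame": converted}       # SPG builds SPacket
```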

U.S. Patent          Mar. 2, 2004          Sheet 11 of 22          US 6,701,410 B2

FIG. 14 [cluster-connected diskarray system: hosts 30, diskarray switches #0 through #3 joined by inter-cluster I/F 2040, diskarray subsets 10]

U.S. Patent          Mar. 2, 2004          Sheet 12 of 22          US 6,701,410 B2

FIG. 15 [computer system of the second embodiment: diskarray system 1 with diskarray subsets #0 through #3]

U.S. Patent          Mar. 2, 2004          Sheet 13 of 22          US 6,701,410 B2

FIG. 16 [IC (Interface Controller) 2023: FibreChannel protocol controller, BUF (buffer), PEP (Protocol Exchanging Processor), SPC (SCSI Protocol Controller)]

U.S. Patent          Mar. 2, 2004          Sheet 14 of 22          US 6,701,410 B2

FIG. 17 [computer system of the fifth embodiment: diskarray system with diskarray subsets #0 through #3]

U.S. Patent          Mar. 2, 2004          Sheet 15 of 22          US 6,701,410 B2

FIG. 18 [screen configuration view showing the logical connection structure]

U.S. Patent          Mar. 2, 2004          Sheet 16 of 22          US 6,701,410 B2

FIG. 19 [frame sequence in the sixth embodiment: mirrored write to main and secondary diskarray subsets (FCP_CMD, FCP_XFER_RDY, FCP_DATA, FCP_RSP)]

U.S. Patent          Mar. 2, 2004          Sheet 17 of 22          US 6,701,410 B2

FIG. 20A [command frame receive processing]
21001: IC receives FCP_CMD from host.
21002: SC stores FCP_CMD in FB. Executes CRC check.
21003: SC analyzes Frame Header.
21004: SP registers Exchange information in ET.
21005: SC analyzes Frame Payload (LUN, CDB).
21006: SP searches DCT, specifies a mirror-LU, acquires matching diskarray subset No. and LUN, calculates LBA, and reports to SC.
21007: SC makes a duplicate of FCP_CMD.
21008: SC converts FCP_CMD for the two LU (main/secondary).
21009: SPG generates SP packets for, and loads, each of the two frames and sends them to the matching diskarray subsets. END.

FIG. 20B [data transfer setup end frame / data frame receive processing]
- SPG receives SPacket, removes expansion header and recreates frame.
- SP searches ET and acquires Exchange information.
- SP updates Exchange information (RX_ID).
- (decision) SP receives FCP_XFER_RDY from both main and secondary LU?
21015: SC converts Frame Header for the frame from the main LU based on conversion information acquired from SP.
- IC sends FCP_XFER_RDY to host.

U.S. Patent          Mar. 2, 2004          Sheet 18 of 22          US 6,701,410 B2

FIG. 20C [data frame receive processing]
21031: IC receives FCP_DATA from host.
21032: SC stores FCP_DATA in FB. Executes CRC check.
21033: SC analyzes Frame Header.
21034: SP searches ET and acquires Exchange information.
21035: SC makes a duplicate of FCP_DATA.
21036: SC converts each FCP_DATA frame header for the two LU (main/secondary).
21037: SPG generates SP packets for, and loads, each of the two frames and sends them to the matching diskarray subset.

FIG. 20D [status frame receive processing]
- SPG receives SPacket, removes expansion header and recreates frame.
- SP searches ET and acquires Exchange information.
- (decision) Received status from both main/secondary LU?
- SC converts Frame Header of the frame from the main LU based on conversion information acquired from SP.
- Deletes frame from secondary LU.
- IC sends FCP_RSP to host.
- SP deletes the Exchange information of this command from ET. END.

U.S. Patent          Mar. 2, 2004          Sheet 19 of 22          US 6,701,410 B2

FIG. 21 [address spatial diagram of the diskarray system for the seventh embodiment: address spaces of the diskarray subsets combined into one striped LU]

U.S. Patent          Mar. 2, 2004          Sheet 20 of 22          US 6,701,410 B2

FIG. 22 [host I/F node processing for a striped LU]
- SC stores the frame in FB. Executes CRC check.
- SC analyzes Frame Payload (LUN, CDB).
- SP searches DCT, specifies a striping-LU, acquires matching diskarray subset No. and LUN, calculates LBA, and reports to SC.
- SC converts the Frame Header and Frame Payload based on conversion information acquired from SP.
- SP converts Exchange information and stores it.
- SPG generates an SPacket added with an expansion header to the converted command frame and sends it to the crossbar switch. END.
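The striped-LU address calculation implied by FIG. 22 can be sketched as below; the stripe size and subset count are hypothetical parameters, not values from the patent:

```python
# Hypothetical striping parameters for the sketch.
STRIPE_BLOCKS = 64  # blocks per stripe
N_SUBSETS = 4       # diskarray subsets the LU is striped across

def map_striped_lba(host_lba: int) -> tuple:
    """Map a host LBA on the combined LU to (subset No., LBA in that subset)."""
    stripe = host_lba // STRIPE_BLOCKS
    subset_no = stripe % N_SUBSETS
    subset_lba = (stripe // N_SUBSETS) * STRIPE_BLOCKS + host_lba % STRIPE_BLOCKS
    return subset_no, subset_lba
```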

U.S. Patent          Mar. 2, 2004          Sheet 21 of 22          US 6,701,410 B2

FIG. 23 [disaster recovery system of the eighth embodiment]

U.S. Patent          Mar. 2, 2004          Sheet 22 of 22          US 6,701,410 B2

FIG. 24 [alternative path setup: diskarray system, diskarray subset, diskarray I/F 21 where a problem occurs]

US 6,701,410 B2

STORAGE SYSTEM INCLUDING A SWITCH

This is a continuation application of U.S. Ser. No. 09/468,327, filed on Dec. 21, 1999.
BACKGROUND OF THE INVENTION

This invention relates to a disk control system for controlling a plurality of disk devices and relates in particular to a method for improving the high speed operation of the disk control system, achieving a lower cost and improving the cost performance.

A diskarray system for controlling a plurality of disk devices is utilized as a storage system in computers. A diskarray system is for instance disclosed in "A Case for Redundant Arrays of Inexpensive Disks (RAID)"; In Proc. ACM SIGMOD, June 1988 (Issued by Cal. State Univ. Berkeley). This diskarray operates a plurality of disk systems in parallel and is a technique that achieves high speed operation compared to storage systems utilizing disks as single devices.

A method using the fabric of a fiber channel is a technique for mutually connecting a plurality of hosts with a plurality of diskarray systems. A computer system using this technique is disclosed for instance in "Serial SCSI Finally Arrives on the Market" of Nikkei Electronics, p. 79, Jul. 3, 1995 (No. 639) as shown in FIG. 3. In the computer system disclosed here, a plurality of host computers (hereafter simply called hosts) and a plurality of diskarray systems are respectively connected to a fabric device by way of fiber channels. The fabric device is a switch for the fiber channels and performs transfer path connections between the desired devices. The fabric device is transparent to (or passes) "frame" transfers, which are packets on the fiber channel. The host and diskarray system communicate between two points without recognizing the fabric device.

SUMMARY OF THE INVENTION
In diskarray systems of the conventional art, when the number of disk devices was increased in order to increase the storage capacity and achieving a controller having high performance matching the number of disk units was attempted, the internal controller buses were found to have only limited performance and likewise, the processor performing transfer control was also found to have only limited performance. In order to deal with these problems, the internal buses were expanded and the number of processors was increased. However, attempting to solve the problem in this manner made the controller structure more complex due to the control required for a greater number of buses and caused increased overhead and complicated software control due to non-exclusive control of data shared between processors, etc. The rise in cost consequently became extremely high and performance reached its limits so that cost performance was unsatisfactory. Though the cost of this kind could be justified in terms of performance in a large scale system, in systems not of such a large scale the cost did not match performance, expandability was limited, and the development period and development costs increased.

The overall system storage capacity and performance can be increased by connecting a plurality of diskarray systems in parallel with a fabric device. However, in this method, there is absolutely no connection between the diskarray systems, and access concentrated on a particular diskarray system cannot be distributed among the other devices so that high performance cannot be achieved in actual operation. Also, the capacity of a logical disk device (hereafter logic unit) as seen from the host is limited to the capacity of one diskarray system so that a high capacity logic unit cannot be achieved.
In an attempt to improve diskarray system reliability, a diskarray system can be comprised of a mirror structure where, in two diskarray systems, the host unit has a mirroring function. However, this method requires overhead due to control required of the mirroring by the host and also has the problem that performance is limited. This method also increases the supervision load on the system administrator since many diskarray systems are present inside the system. The maintenance costs thus increase since a large number of maintenance personnel must be hired and maintenance fees must be paid for each unit. The plurality of diskarray systems and fabric devices are further all autonomous devices so that the settings must be made by different methods according to the respective device, creating the problem that operating costs increase along with a large increase in operating time and system administrator training time, etc.

In order to resolve these problems with the related art, this invention has the object of providing a disk storage system capable of being structured according to the scale and requirements of the computer system, and a disk storage system that responds easily to needs for high reliability and future expansion.
The disk storage system of this invention contains a storage device having a record medium for holding the data, a plurality of storage sub-systems having a controller for controlling the storage device, a first interface node coupled to a computer using the data stored in the plurality of storage sub-systems, a plurality of second interface nodes connected to any or one of the storage sub-systems, and a switch connecting a first interface node and a plurality of second interface nodes to perform frame transfer between the first interface node and the plurality of second interface nodes based on node address information added to the frame.

The first interface node preferably has a configuration table to store structural information for the memory storage system and a processing unit that, in response to the frame sent from the computer, analyzes the applicable frame, converts information relating to the transfer destination of that frame based on structural information held in the configuration table, and transfers that frame to the switch. Further, when transmitting a frame, the first interface node adds to that frame the node address information about the node that must receive the frame. A second interface node then removes the node address information from the frame that was received, recreates the frame and transfers that frame to the desired storage sub-system.
In the embodiment of this invention, the disk storage system has a managing processor connecting to the switch. The managing processor sets the structural information in the configuration table of each node according to the operator's instructions. Information for limiting access from the computer is contained in this structural information.
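The access-limiting structural information described above can be sketched as a table consulted by the first interface node before forwarding a frame; the table layout and all names are hypothetical illustrations, not structures from the patent:

```python
# Hypothetical structural information with access-limit data, as would be
# set in each node's configuration table by the managing processor.
config_table = {
    # LU name -> set of host IDs permitted to access it
    "LU0": {"host0", "host1"},
    "LU1": {"host2"},
}

def frame_permitted(host_id: str, lu: str) -> bool:
    """First interface node check: reject frames from hosts not granted the LU."""
    return host_id in config_table.get(lu, set())
```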
In another embodiment of this invention, the first interface node replies to the command frame sent from the computer instructing the writing of data, makes copies of that command frame and the following data frames, adds different node address information to each frame so the received frame and the copied command frames will be sent to the different respective nodes, and sends these frames to the switch.
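The mirrored-write behavior described above (copy the command frame, then tag the original and the copy with different destination node addresses) can be sketched as follows; the node numbers and frame fields are hypothetical:

```python
import copy

def mirror_frames(frame: dict, main_node: int, secondary_node: int) -> list:
    """Duplicate a write frame and address each copy to a different node."""
    primary = dict(frame, dst_node=main_node)
    secondary = dict(copy.deepcopy(frame), dst_node=secondary_node)
    return [primary, secondary]  # both are then sent to the switch
```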
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram showing the structure of the computer system of the first embodiment of this invention.
US 6,701,410 B2

FIG. 2 is a block diagram of the diskarray subset of the first embodiment.

FIG. 3 is a block diagram of the diskarray switch of the first embodiment.

FIG. 4 is a block diagram of the crossbar switch of the diskarray switch of the first embodiment.

FIG. 5 is a block diagram of the host I/F node for the diskarray switch of the first embodiment.

FIG. 6A is a sample diskarray system configuration table.

FIG. 6B is a sample diskarray subset configuration table.

FIG. 7 is a block diagram of the frame of the fiber channel.

FIG. 8 is a block diagram of the frame header of the fiber channel.

FIG. 9 is a block diagram of the frame payload of the fiber channel.

FIG. 10 is a model view showing the sequence of frames sent by way of the fiber channel during a read operation from the host.

FIG. 11 is a model view showing the interactive relationship of the host-LU, the LU for each diskarray subset, as well as each diskarray unit.

FIG. 12 is a block diagram of the S packet.

FIGS. 13A through 13C are flowcharts of the processing in the host I/F node during write processing.

FIG. 14 is a block diagram showing a plurality of diskarray switches in a cluster-connected diskarray system.

FIG. 15 is a block diagram of the computer system of the second embodiment of this invention.

FIG. 16 is a block diagram of the diskarray switch IC of the fourth embodiment of this invention.

FIG. 17 is a block diagram of the computer system of the fifth embodiment of this invention.

FIG. 18 is a screen configuration view showing a typical display of the logic connection structure.

FIG. 19 is a model diagram showing the frame sequence in the sixth embodiment of this invention.

FIGS. 20A through 20D are flowcharts showing the processing on the host I/F node during the mirroring write processing in the sixth embodiment of this invention.

FIG. 21 is an address spatial diagram of the diskarray system for the seventh embodiment of this invention.

FIG. 22 is a flowchart showing the processing in the host I/F node of the seventh embodiment of this invention.

FIG. 23 is a block diagram of the disaster recovery system of the eighth embodiment of this invention.

FIG. 24 is a descriptive view of the alternative path setup.
DESCRIPTION OF THE PREFERRED EMBODIMENTS

(First Embodiment)

FIG. 1 is a block diagram showing the structure of the computer system of the first embodiment of this invention. In the figure, reference numeral 1 denotes a diskarray system, and 30 is the (host) computer connected to the diskarray system. The diskarray system 1 contains a diskarray subset 10, a diskarray switch 20 and a diskarray system configuration manager 70 for handling the configuration of the overall diskarray system. The diskarray system 1 further has a communication interface (communication I/F) 80 between the diskarray switch 20 and the diskarray system configuration manager 70, and also between the diskarray subset 10 and the diskarray system configuration manager 70. A host 30 and the diskarray system 1 are connected by a host interface (host I/F) 31. The host I/F 31 is connected to the diskarray switches 20 of the diskarray system 1. The diskarray switch 20 and the diskarray subset 10 inside the diskarray system 1 are connected by the diskarray interface (diskarray I/F) 21.
The hosts 30 and the diskarray subsets 10 are shown as four units each; however, this number is optional and is not limited. The hosts 30 and the diskarray subsets 10 may also be provided in different numbers of units. The diskarray switches 20 in this embodiment are duplexed as shown in the drawing. Each host 30 and each diskarray subset 10 are connected to both of the duplexed diskarray switches 20 by the respective host I/F 31 and a diskarray I/F 21. Thus even if one of the diskarray switches 20, the host I/F 31 or the diskarray I/F 21 is broken, the other diskarray switch 20, host I/F 31 or diskarray I/F 21 can be utilized to allow access from the host 30 to the diskarray system 1, and a high degree of availability can be achieved. However, this kind of duplication or duplexing is not always necessary and is selectable according to the level of reliability required by the system.
FIG. 2 is a block diagram of a diskarray subset 10 of the first embodiment. The reference numeral 101 denotes the host adapter for interpreting the commands from the host system (host 30), executing the cache hit-miss decision and controlling the data transfer between the host system and the cache. The reference numeral 102 denotes the cache memory/shared memory that comprises the cache memory for performing high speed disk data access and a shared memory for storing data shared by the host adapters 101 and the lower adapters 103. The reference numeral 104 denotes a plurality of disk units stored inside the diskarray subset 10. Reference numeral 103 is the lower adapter for controlling a disk unit 104 and controlling the transfer of data between the disk unit 104 and the caches. Reference numeral 106 is the diskarray subset configuration manager to perform communications between the diskarray system configuration manager 70 and the overall diskarray system 1, and also manage the structural parameter settings and reporting of trouble information, etc. The host adapter 101, the cache memory/shared memory 102, and the lower adapter 103 are respectively duplexed here. The reason for duplexing is to attain a high degree of utilization, just the same as with the diskarray switch 20, and is not always required. Each disk unit 104 is also controllable from any of the duplexed lower adapters 103. In this embodiment, the cache and shared memories jointly utilize the same memory means in view of the need for low costs; however, the caches and shared memories can of course be isolated from each other.
The host adapter 101 comprises a host MPU 1010 to execute control of the host adapter 101, a diskarray I/F controller 1011 to control the connecting I/F with the host system side, which is the diskarray I/F 21 to the diskarray switches 20, and a host bus 1012 to perform communications and data transfer between the cache memory/shared memory 102, the host MPU 1010 and the diskarray I/F controller 1011. The figure shows one diskarray I/F controller 1011 for each host adapter 101; however, a plurality of diskarray I/F controllers 1011 can also be provided for each host adapter.
The lower adapter 103 contains a lower MPU 1030 to execute control of the lower adapter 103, a disk I/F controller 1031 to control the disks 104 and their interface which is the disk I/F, and a lower bus 1032 to perform communications and data transfer between the cache memory/shared memory 102, the lower MPU 1030 and the disk I/F controller 1031. The figure shows four disk I/F controllers 1031
`

US 6,701,410 B2

for each lower adapter 103; however, the number of disk I/F controllers is optional and can be changed according to the diskarray configuration and the number of disks that are connected.
FIG. 3 is a block diagram of the diskarray switch 20 of the first embodiment. The diskarray switch 20 contains a Managing Processor (MP) 200, which is a processor for performing management and control of the entire diskarray switch, a crossbar switch 201 comprising n×n mutual switch paths, a diskarray I/F node 202 formed for each diskarray I/F 21, a host I/F node 203 formed for each host I/F 31, and a communication controller 204 for performing communications with the diskarray system configuration manager 70. The reference numeral 2020 denotes a path for connecting the diskarray I/F node 202 with the crossbar switch 201, a path 2030 connects the host I/F node 203 and the crossbar switch 201, a path 2040 connects with the other diskarray switch 20 and other I/F for forming clusters, and a path 2050 connects the MP 200 with the crossbar switch 201.
FIG. 4 is a block diagram showing the structure of the crossbar switch 201. A port 2010 is a switching port (SWP) for connecting the paths 2020, 2030, 2050 and the cluster I/F 2040 to the crossbar switch 201. The switching ports 2010 all have the same structure and perform switching control of the transfer paths from a particular SWP to the other SWPs. The figure shows one SWP; however, identical transfer paths exist between all the SWPs.

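The n×n property described above, namely that every SWP has a transfer path to every other SWP, can be modeled as follows. This is our sketch under those stated assumptions, not the patent's implementation:

```python
# Toy model of the crossbar switch 201 (our construction): an n x n
# crossbar provides a direct transfer path between any pair of
# switching ports (SWP) 2010, so any attached node can reach any other.

class CrossbarSwitch:
    def __init__(self, n_ports):
        self.n_ports = n_ports

    def transfer(self, src_port, dst_port, packet):
        # Any (src, dst) pair within range is directly connected.
        if not (0 <= src_port < self.n_ports and 0 <= dst_port < self.n_ports):
            raise ValueError("no such switching port")
        return (dst_port, packet)

xbar = CrossbarSwitch(n_ports=8)
assert xbar.transfer(0, 5, "frame-packet") == (5, "frame-packet")
```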
FIG. 5 is a block diagram showing the structure of the host I/F node 203. In this embodiment, use of a fiber channel is assumed for both the diskarray I/F 21 and the host I/F 31 in order to provide a specific description. The host I/F 31 and the diskarray I/F 21 can of course be implemented with interfaces other than fiber channels. By utilizing an identical interface, the host I/F node 203 and the diskarray I/F node 202 can both have the same structure. In this embodiment, the diskarray I/F node 202 has the same structure as the host I/F node 203 shown in the figure. Hereafter, the host I/F node 203 will be described as an example. A Searching Processor (SP) 2021 searches for the node to which a fiber channel frame (hereafter simply called a frame) is to be connected; an Interface Controller (IC) 2023 transmits and receives the frames with the host 30 (or with the diskarray subset 10 in the case of the diskarray I/F node 202); a Switching Controller (SC) 2022 performs conversion, based on the results found by the SP2021, of frames received by the IC2023; a Switching Packet Generator (SPG) 2024 packetizes the frame converted by the SC2022 into a configuration that can pass through the crossbar switch 201 for transfer to other nodes; a Frame Buffer (FB) 2025 temporarily stores the received frames; an Exchange Table (ET) 2026 supervises use of exchange numbers for identifying a plurality of frame strings corresponding to a disk access request command (hereafter simply called a command) from one host; and a Diskarray Configuration Table
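The frame-handling sequence through a host I/F node described above (SP lookup, SC conversion, SPG packetizing) can be sketched as a simple pipeline. Every function and field name below is illustrative only and does not appear in the patent:

```python
# Illustrative pipeline for a frame received by a host I/F node 203.
# Stage names follow the patent's components; the code is our sketch.

def searching_processor(frame, routing_table):
    # SP2021: search for the node the frame should be connected to.
    return routing_table[frame["dest_id"]]

def switching_controller(frame, node):
    # SC2022: convert the received frame based on the SP's result.
    return {**frame, "node": node}

def switching_packet_generator(frame):
    # SPG2024: packetize the converted frame into a form that can
    # pass through the crossbar switch 201 to other nodes.
    return ("XBAR_PKT", frame["node"], frame)

def handle_frame(frame, routing_table):
    node = searching_processor(frame, routing_table)
    converted = switching_controller(frame, node)
    return switching_packet_generator(converted)

pkt = handle_frame({"dest_id": "LU0", "payload": b"read"}, {"LU0": 3})
assert pkt[0] == "XBAR_PKT" and pkt[1] == 3
```

The Frame Buffer (FB) 2025 and Exchange Table (ET) 2026 are omitted from this sketch; they would respectively hold frames in flight and track exchange numbers across the multiple frames of one command.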
