`
Exhibit A

NETAPP ET AL. EXHIBIT 1012
`
`Intellectual Ventures I, LLC et al v. NetApp, Inc., Case No. 1:16-cv-10868-IT
`
`
`
`U.S. Patent No. 6,516,442
`
`Accused Products: NetApp MetroCluster.
`Claims
`Exemplary Evidence of Infringement
1. A shared-memory multi-processor system comprising:

The preamble is not limiting. To the extent the preamble is deemed limiting, the Accused Products are a shared-memory multi-processor system.
`
“shared memory”: The Accused Products share memory via, for example, fiber channel (“FC”) or Ethernet connections through a shared storage fabric. For example, “NetApp MetroCluster is designed for organizations that require continuous protection of their storage infrastructure and mission-critical business applications.”1 This is achieved by creating different clusters at different sites, and connecting the clusters “by two separate networks that provide the replication transport. The cluster peering network is an IP network that is used to replicate cluster configuration information between the sites. The shared storage fabric is an FC connection and is used for storage and NVRAM synchronous replication between the two clusters.”1 The fabric connection between the clusters ensures that “All storage is visible to all controllers through the shared storage fabric.”2
`
“multiprocessor”: The Accused Products comprise one or more multi-core processors, each of which is inherently multiprocessing. For example, each “storage controller contains one or more multi-core CPUs. These physical CPU cores are the primary compute resource available to Data ONTAP for processing work.”3 For example, the NetApp FAS8000 Series, NetApp’s latest enterprise platform for shared infrastructure, includes at least three models (FAS8020, FAS8040, and FAS8060), all of which have multi-core CPUs. “The 3U form factor FAS8020 (codenamed: "Buell") is targeted towards mid-size enterprise customers with mixed workloads. Each Processor Control Module (PCM) includes a single-socket, 2.0 GHz Intel E5-2620 “Sandy Bridge-EP” processor with 6 cores (12 per HA pair) ...”4 “Each FAS8040 Processor Control Module (PCM) includes a single-socket, 2.1 GHz Intel E5-2658 “Sandy Bridge-EP” processor with 8 cores (16 per HA pair) .... Each FAS8060 PCM includes dual-socket, 2.1 GHz Intel E5-2658 “Sandy Bridge-EP” processors with a total of 16 cores (32 per HA pair) ...”5
`
`
`
`
`1 IV_NETAPP_000261 at 7 (emphasis added).
`2 IV_NETAPP_000261 at 7 (emphasis added).
`3 IV_NETAPP_000431 at 1 (emphasis added).
`4 IV_NETAPP_000439 at 1 (emphasis added).
`
`
`
`1
`
`NETAPP ET AL. EXHIBIT 1012
`
`Page 1 of 16
`
`
`
`
`
`Intellectual Ventures I, LLC et al v. NetApp, Inc., Case No. 1:16-cv-10868-IT
`
`
As shown in Figure 1, NetApp MetroCluster is a multiple-core (“multiprocessor”), global-memory (“shared memory”) system.

Figure 16

[a] a switch fabric configured to switch packets containing data;

The Accused Products comprise a switch fabric configured to switch packets containing data.
`
`
`
`
`
`5 IV_NETAPP_000439 at 2 (emphasis added).
`6 IV_NETAPP_000321 at 13.
`
`
`2
`
`NETAPP ET AL. EXHIBIT 1012
`
`Page 2 of 16
`
`
`
`
`
`
`“switch fabric”: The exemplary NetApp fabric MetroCluster configuration incorporates hardware
`components that establish a switch fabric configured to switch packets containing data. A sample
`configuration is shown in Figure 1 above and includes, for example, fiber channel switches and their
associated cabling.7 Switches and connection controllers in the Accused Products must be “supplied by NetApp.”8

“switch”: NetApp’s accused products include switches used to establish fabrics. For example, “[f]abric MetroCluster implements two fabrics (one for redundancy) across sites. Each fabric consists of two switches (one on each site), so therefore four switches per MetroCluster configuration.”9

“configured to switch packets containing data”: The MetroCluster fabric thus formed is configured to switch packets containing data between the storage controllers and disk shelves at one site and those at the other site of the cluster. “The two clusters and sites are connected by two separate networks that provide the
`replication transport. The cluster peering network is an IP network that is used to replicate cluster
`configuration information between the sites. The shared storage fabric is an FC connection and is used
`for storage and NVRAM synchronous replication between the two clusters. All storage is visible to all
`controllers through the shared storage fabric.”10
[b] a plurality of channels configured to transfer the packets;

The Accused Products comprise a plurality of channels configured to transfer the packets.
`
For example, “Data ONTAP interacts with other physical hardware such as Ethernet ports, FC ports, disks, and NVRAM,”11 all of which are channels configured to transfer packets. Also, as shown in Figure 1, a MetroCluster consists of onboard fiber ports, inter-switch links, cluster interconnects, shelf interconnects, and other interconnects, which act as channels configured to transfer packets.

As shown in Figure 2 below, in “the connectivity between Data ONTAP systems and disks, the HBA ports 1a through 1d are used for connectivity with disks through the FC-to-SAS bridges”12
`
`
`
`7 IV_NETAPP_000321 at 11 (emphasis added).
`8 IV_NETAPP_000321 at 62.
9 IV_NETAPP_000321 at 28 (emphasis added).
`10 IV_NETAPP_000261 at 7.
`11 IV_NETAPP_000431 at 1.
`12 IV_NETAPP_000079 at 1.
`
`
`3
`
`NETAPP ET AL. EXHIBIT 1012
`
`Page 3 of 16
`
`
`
`
`
`Intellectual Ventures I, LLC et al v. NetApp, Inc., Case No. 1:16-cv-10868-IT
`
`
Figure 213

[c] a plurality of switch interfaces configured to exchange the packets with the switch fabric, exchange the packets over the channels, and perform error correction of the data in the packets exchanged over the channels;

The Accused Products comprise a plurality of switch interfaces configured to exchange the packets with the switch fabric, exchange the packets over the channels, and perform error correction of the data in the packets exchanged over the channels.
`
“plurality of switch interfaces configured to exchange the packets with the switch fabric, exchange the packets over the channels”: NetApp MetroCluster includes dedicated switches for exchanging packets over channels. Controllers and storage connect to switches directly via, for example, fiber channel. “Fabric MetroCluster implements two fabrics (one for redundancy) across sites. Each fabric consists of two switches (one on each site), so therefore four switches per MetroCluster configuration. … The controllers and storage connect to the switches directly (controllers do not directly attach to storage as in configurations other than MetroCluster), and the switches cannot be shared by traffic other than MetroCluster.”14
`
`
`13 IV_NETAPP_000079 at 1.
`
`
`4
`
`NETAPP ET AL. EXHIBIT 1012
`
`Page 4 of 16
`
`
`
`
`
`Intellectual Ventures I, LLC et al v. NetApp, Inc., Case No. 1:16-cv-10868-IT
`
`
`
`“perform error correction of the data in the packets exchanged over the channels”: On information and
`belief, error correction is implemented via the switches incorporated into the Accused Products, such as
`those in the NetApp MetroCluster configuration.15
`
[d] a plurality of microprocessor interfaces configured to exchange the data with a plurality of microprocessors, exchange the packets with the switch interfaces over the channels, and perform error correction of the data in the packets exchanged over the channels; and

The Accused Products comprise a plurality of microprocessor interfaces configured to exchange the data with a plurality of microprocessors, exchange the packets with the switch interfaces over the channels, and perform error correction of the data in the packets exchanged over the channels.
`
`
“a plurality of microprocessor interfaces configured to exchange the data with a plurality of microprocessors”: For example, “Each storage controller contains one or more multi-core CPUs,”16 which are microprocessors containing a plurality of processing cores. “These physical CPU cores are the primary compute resource available to Data ONTAP for processing work.”17 These microprocessors are associated with microprocessor interfaces configured to exchange data between the microprocessors themselves and the switch interfaces over the channels. For example, every storage controller has a converged network adapter (“CNA”) for connecting to FC (Fiber Channel)-based storage area networks (“SANs”) and Ethernet-based local area networks (“LANs”). “If you are using FCoE [Fiber Channel-over-Ethernet] on your Ethernet infrastructure, FCoE must be configured at the switch level before your FC service can run over the existing Ethernet infrastructure … You must install a Unified Target Adapter (UTA) on your storage system and a converged network adapter (CNA) on your host. These adapters are required for running FCoE traffic over your Ethernet network.”18
`
`
`14 IV_NETAPP_000321 at 28 (emphases added).
15 See, e.g., IV_NETAPP_001210. NetApp provides a group of diagnostics that test the FC functionality of the converged network adapters (“CNAs”) in the system, such as “Internal loopback tests,” tests of frame CRC and length errors in firmware, and tests of data integrity in the host. This indicates that the CNAs incorporated in the Accused Products perform CRC and other data-integrity checks, and these diagnostics verify that those checks function as intended.
`16 IV_NETAPP_000431 at 1 (emphasis added).
`17 IV_NETAPP_000431 at 1 (emphasis added).
`
`
`5
`
`NETAPP ET AL. EXHIBIT 1012
`
`Page 5 of 16
`
`
`
`
`
`Intellectual Ventures I, LLC et al v. NetApp, Inc., Case No. 1:16-cv-10868-IT
`
`
`
“perform error correction of the data in the packets exchanged over the channels”: On information and belief, error correction is implemented via the CNAs incorporated into the Accused Products.19
`
[e] a memory interface configured to exchange the data with a memory device, exchange the packets with the switch interfaces over the channels, and perform error correction of the data in the packets exchanged over the channels.

The Accused Products comprise a memory interface configured to exchange the data with a memory device, exchange the packets with the switch interfaces over the channels, and perform error correction of the data in the packets exchanged over the channels.

`“a memory interface configured to exchange the data with a memory device, exchange the packets with
`the switch interfaces over the channels”: For example, NetApp offers data storage systems with native
`FCoE support, “FCoE - combining the Fibre Channel protocol and an enhanced 10-Gigabit Ethernet
`physical transport - expands options for SAN connectivity and networking.” “NetApp Unified Connect
`supports FC, FCoE, iSCSI, NFS, and CIFS protocols concurrently over a shared network port using the
`NetApp unified target adapter.”20 These adapters interface with the memory, and are configured to
`exchange packets between the memory device and the switch interface.
`
“perform error correction of the data in the packets exchanged over the channels”: For example, as defined in the Fibre Channel Backbone-5 (FC-BB-5) standard, NetApp implements various procedures for correcting errors and for repairing or otherwise handling corrupted data.
`“5.6.4 Procedures for error detection recovery
`5.6.4.1 Procedures for handling invalid FC frames
`Data corruption is detected at two different levels, TCP checksum and FC frame encapsulation
`errors. Data corruption detected at the TCP level shall be recovered via TCP data recovery
`mechanisms. The recovery for FC frame errors is described below. The TCP and FC frame recovery
`operations are performed independently.
`Fibre Channel frame errors and the expected resolution of those errors are described in RFC 3821
`and summarized below:
`
`
`18 IV_NETAPP_000405 at 1 (emphasis added).
19 See, e.g., IV_NETAPP_001202. NetApp provides commands to monitor flow control on the physical interfaces of Clustered Data ONTAP 8.x, which is part of the Accused Products. Such commands return a CRC error count, which indicates that NetApp nodes implement cyclic redundancy checks.
`20 IV_NETAPP_000419 at 1 (emphasis added).
`
`
`6
`
`NETAPP ET AL. EXHIBIT 1012
`
`Page 6 of 16
`
`
`
`
`
`Intellectual Ventures I, LLC et al v. NetApp, Inc., Case No. 1:16-cv-10868-IT
`
`
`NOTE 10 – The behavior given below is that of the FCIP Entity.
`a) all incoming frames on the FC receiver port are verified for correct header, proper format, valid
`length and valid CRC. A frame having an incorrect header or CRC shall be discarded or processed
`in accordance with the rules for the particular type of FC_Port;
b) all frames transmitted by the encapsulated frame transmitter are valid FC encapsulations of valid
`FC frames with correct TCP check sums on the correct TCP/IP connection;
`c) the FC frames contained in incoming encapsulated frames on the encapsulated frame receiver
`port are verified for a valid header, proper content, proper SOF and EOF values, and valid
`length. FC frames that are not valid according to those checks are managed according to the
`following rules:
`A) the frame may be discarded; or
`B) the frame may be transmitted in whole or in part by the FC transmitter port and ended with
`an EOF indicating that the content of the frame is invalid; and
`d) if there is any discrepancy between statements in this subclause and RFC 3821, then RFC
`3821 shall prevail.
`5.6.4.2 Procedures for error recovery
`The FC Entity shall recover from events that the FCIP Entity is unable to handle, such as:
`a) loss of synchronization with FCIP frame headers from the encapsulated frame receiver portal
`requiring resetting the TCP connection; and
`b) recovering from FCIP frames that are discarded as a result of synchronization problems.”21
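
For illustration only, the receive-side disposition rules quoted in subclause 5.6.4.1 can be sketched in Python. This is a schematic sketch under an assumed, simplified frame representation (all field and function names here are hypothetical), not the standard's or NetApp's implementation:

from dataclasses import dataclass

@dataclass
class ReceivedFrame:
    header_ok: bool     # header fields are correct and self-consistent
    crc_ok: bool        # frame CRC matches the frame contents
    sof_eof_ok: bool    # SOF and EOF delimiter values are proper
    length_ok: bool     # frame length is within the valid range

def dispose(frame: ReceivedFrame) -> str:
    # Rule (a): a frame with an incorrect header or CRC is discarded
    # (or processed per the rules for the particular type of FC_Port).
    if not frame.header_ok or not frame.crc_ok:
        return "discard"
    # Rules (c)(A)/(c)(B): an otherwise-invalid encapsulated frame may be
    # discarded, or forwarded with an EOF marking its content invalid.
    if not frame.sof_eof_ok or not frame.length_ok:
        return "discard or forward with invalid EOF"
    return "deliver"

print(dispose(ReceivedFrame(True, True, True, True)))    # deliver
print(dispose(ReceivedFrame(True, False, True, True)))   # discard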
`
2. The shared-memory multi-processor system of claim 1 wherein the interfaces are configured to add error correction codes to the packets being transferred over the channels to check the error correction codes in the packets being received over the channels and to transfer a retry request if one of the packets being received has an error.

The Accused Products comprise the shared-memory multi-processor system of claim 1 wherein the interfaces are configured to add error correction codes to the packets being transferred over the channels, to check the error correction codes in the packets being received over the channels, and to transfer a retry request if one of the packets being received has an error.
`
“the interfaces are configured to add error correction codes to the packets being transferred over the channels to check the error correction codes in the packets being received over the channels”: For example, the Accused Products implement a Cyclic Redundancy Check (“CRC”) over the switches to check the integrity of all packets being transferred over the channels. CRC is an error-detecting code wherein a “check value” is appended to each packet. This “check value” is based, for example, on a mathematical operation (e.g., polynomial division) performed on the packet’s contents, and is added to the packet as an “error correction code.” When the packet is later retrieved, the same mathematical operation is performed on the packet data, and the result is compared to the appended “check value” to determine if there has been any change (“error”) in the contents.
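
For illustration only, the check-value mechanism described above can be sketched in Python using CRC-32; the polynomial choice and all function names here are assumptions for illustration, not NetApp's implementation:

import binascii
import struct

def add_check_value(payload: bytes) -> bytes:
    # Compute a CRC-32 "check value" (polynomial division over GF(2))
    # and append it to the packet, as the paragraph above describes.
    crc = binascii.crc32(payload) & 0xFFFFFFFF
    return payload + struct.pack(">I", crc)

def packet_has_error(packet: bytes) -> bool:
    # Recompute the CRC over the received contents and compare it with
    # the appended check value; a mismatch means the contents changed.
    payload, appended = packet[:-4], struct.unpack(">I", packet[-4:])[0]
    return (binascii.crc32(payload) & 0xFFFFFFFF) != appended

sent = add_check_value(b"example frame data")
assert not packet_has_error(sent)            # intact packet: CRC matches
corrupted = b"X" + sent[1:]                  # flip the first byte in transit
assert packet_has_error(corrupted)           # receiver would request a retry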
`
`
`
`21 IV_NETAPP_000081 at 69 (emphasis added).
`
`
`7
`
`NETAPP ET AL. EXHIBIT 1012
`
`Page 7 of 16
`
`
`
`
`
`Intellectual Ventures I, LLC et al v. NetApp, Inc., Case No. 1:16-cv-10868-IT
`
`
`
As defined in the Fibre Channel Backbone-5 (FC-BB-5) standard, NetApp implements CRC.
`“The recovery for FC frame errors is described below. The TCP and FC frame recovery
`operations are performed independently.
`Fibre Channel frame errors and the expected resolution of those errors are described in RFC 3821
`and summarized below:
`NOTE 10 – The behavior given below is that of the FCIP Entity.
`a) all incoming frames on the FC receiver port are verified for correct header, proper format, valid
`length and valid CRC. A frame having an incorrect header or CRC shall be discarded or processed
`in accordance with the rules for the particular type of FC_Port;”22
`
`“transfer a retry request if one of the packets being received has an error”: For example, ONTAP,
`NetApp’s data management software, transfers a retry request in response to an erroneous packet. For
`example, “CRC errors exist in the data payload of frames that circulate through a Fibre Channel-
`Arbitrated Loop (FC-AL). The errors are detected by devices inside the loop when Data ONTAP writes
`to a disk-for example, a disk or ESH module. The error-detecting device generally is not responsible for
`the errors. When CRC errors are detected, Data ONTAP retransmits the affected data. Usually, these
`errors are transient, so a retransmission will clear the problem. Data ONTAP attempts to repath and
`retry the I/O operation three times. If three retransmissions are unsuccessful, Data ONTAP will fail the
`disk.”23
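
For illustration only, the quoted retry-and-fail behavior can be sketched as a simple control loop; write_io is a hypothetical callback that returns True when the write's CRC verifies, and this sketch is not NetApp code:

MAX_RETRIES = 3  # "Data ONTAP attempts to repath and retry the I/O operation three times"

def write_with_retry(write_io) -> bool:
    # Transient CRC errors are usually cleared by retransmission.
    if write_io():
        return True
    for _ in range(MAX_RETRIES):   # repath and retry up to three times
        if write_io():
            return True
    return False                   # three retransmissions failed: fail the disk

# Usage: a transport with one transient CRC error, cleared on retry.
attempts = iter([False, True])
print(write_with_retry(lambda: next(attempts)))   # True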
8. The shared-memory multi-processor system of claim 1 further comprising a bus interface configured to exchange the data with a bus, exchange the packets with the switch interfaces over the channels, and perform error correction of the data in the packets exchanged over the channels.

The Accused Products comprise the shared-memory multi-processor system of claim 1 further comprising a bus interface configured to exchange the data with a bus, exchange the packets with the switch interfaces over the channels, and perform error correction of the data in the packets exchanged over the channels.

“a bus interface configured to exchange the data with a bus, exchange the packets with the switch interfaces over the channels”: For example, FAS8000 series storage systems feature PCIe expansion slots, which constitute a bus and a bus interface permitting exchange of packets with the switch interfaces over the channels. “The FAS8000 features a multiprocessor Intel® chip set and leverages high performance memory modules, NVRAM to accelerate and optimize writes, and an I/O-tuned PCIe gen3 architecture that maximizes application throughput.”24
`
`
`
`22 IV_NETAPP_000081 at 69 (emphasis added).
`23 IV_NETAPP_000437 at 1 (emphasis added).
`
`
`8
`
`NETAPP ET AL. EXHIBIT 1012
`
`Page 8 of 16
`
`
`
`Intellectual Ventures I, LLC et al v. NetApp, Inc., Case No. 1:16-cv-10868-IT
`
`
`
`
`
`“perform error correction of the data in the packets exchanged over the channels”: For example,
`ONTAP, NetApp’s data management software, performs error correction of the data in the packets
`exchanged over the channels. As an example, “CRC errors exist in the data payload of frames that
`circulate through a Fibre Channel-Arbitrated Loop (FC-AL). The errors are detected by devices inside
`the loop when Data ONTAP writes to a disk-for example, a disk or ESH module.”25
11. The shared-memory multi-processor system of claim 1 further comprising the microprocessors and the memory device.

The Accused Products comprise the shared-memory multi-processor system of claim 1 further comprising the microprocessors and the memory device.
`
“microprocessors”: The Accused Products contain multi-core CPUs (“microprocessors”) that are the primary compute resource. For example, and as described above in Exemplary Evidence of
`Infringement of Claim 1[d], “Each storage controller contains one or more multi-core CPUs. These
`physical CPU cores are the primary compute resource available to Data ONTAP for processing
`work.”26
`
“memory device”: The Accused Products are storage systems that comprise at least one memory device. Different storage systems have different maximum disk drive capacities, as shown in Table 1 below.
`
`
`Table 127
12. The shared-memory multi-processor system of claim 1 wherein the channels each comprise a point-to-point connection between a pair of the interfaces.

The Accused Products comprise the shared-memory multi-processor system of claim 1 wherein the channels each comprise a point-to-point connection between a pair of the interfaces.
`24 IV_NETAPP_000400 at 1 (emphasis added).
`25 IV_NETAPP_000437 at 1 (emphasis added).
`26 IV_NETAPP_000431 at 1 (emphasis added).
`27 IV_NETAPP_000408 at 2.
`
`
`9
`
`NETAPP ET AL. EXHIBIT 1012
`
`Page 9 of 16
`
`
`
`Intellectual Ventures I, LLC et al v. NetApp, Inc., Case No. 1:16-cv-10868-IT
`
`
`
`
“the channels each comprise a point-to-point connection between a pair of the interfaces”: As seen in Tables 2 and 3 below, exemplary onboard I/O modules support point-to-point Fiber Channel connections.
`
`Table 228
`
`
`
`
`28 IV_NETAPP_000408 at 2 & 3.
`
`
`10
`
`NETAPP ET AL. EXHIBIT 1012
`
`Page 10 of 16
`
`
`
`
`
`
`
Table 329

24. A method of operating a shared-memory multi-processor system, the method comprising:

The preamble is not limiting. To the extent the preamble is limiting, the Accused Products perform a method of operating a shared-memory multi-processor system.
`
“shared memory”: The Accused Products share memory via, for example, fiber channel (“FC”) or Ethernet connections through a shared storage fabric. For example, “NetApp MetroCluster is designed for organizations that require continuous protection of their storage infrastructure and mission-critical business applications.”30 This is achieved by creating different clusters at different sites, and connecting the clusters “by two separate networks that provide the replication transport. The cluster peering network is an IP network that is used to replicate cluster configuration information between the sites. The shared storage fabric is an FC connection and is used for storage and NVRAM synchronous replication between the two clusters.”30 The fabric connection between the clusters ensures that “All storage is visible to all controllers through the shared storage fabric.”31
`
`
`29 IV_NETAPP_000408 at 5 & 6.
`30 IV_NETAPP_000261 at 7 (emphasis added).
`31 IV_NETAPP_000261 at 7 (emphasis added).
`
`
`11
`
`NETAPP ET AL. EXHIBIT 1012
`
`Page 11 of 16
`
`
`
`
`
`Intellectual Ventures I, LLC et al v. NetApp, Inc., Case No. 1:16-cv-10868-IT
`
`
`
“multiprocessor”: The Accused Products comprise one or more multi-core processors, each of which is inherently multiprocessing. For example, each “storage controller contains one or more multi-core CPUs. These physical CPU cores are the primary compute resource available to Data ONTAP for processing work.”32 For example, the NetApp FAS8000 Series, NetApp’s latest enterprise platform for shared infrastructure, includes at least three models (FAS8020, FAS8040, and FAS8060), all of which have multi-core CPUs. “The 3U form factor FAS8020 (codenamed: "Buell") is targeted towards mid-size enterprise customers with mixed workloads. Each Processor Control Module (PCM) includes a single-socket, 2.0 GHz Intel E5-2620 “Sandy Bridge-EP” processor with 6 cores (12 per HA pair) ...”33 “Each FAS8040 Processor Control Module (PCM) includes a single-socket, 2.1 GHz Intel E5-2658 “Sandy Bridge-EP” processor with 8 cores (16 per HA pair) .... Each FAS8060 PCM includes dual-socket, 2.1 GHz Intel E5-2658 “Sandy Bridge-EP” processors with a total of 16 cores (32 per HA pair) ...”34
`
`As shown in Figure 1 in Exemplary Evidence of Infringement of Claim 1, NetApp MetroCluster is a
`multiple core (“multiprocessor”) global memory (“shared memory”) system.
`
[a] exchanging data between a plurality of microprocessors and a plurality of microprocessor interfaces;

The Accused Products exchange data between a plurality of microprocessors and a plurality of microprocessor interfaces.
`
For example, “[e]ach storage controller contains one or more multi-core CPUs,”35 which are microprocessors containing a plurality of processing cores. “These physical CPU cores are the primary compute resource available to Data ONTAP for processing work.”36 These microprocessors are associated with microprocessor interfaces configured to exchange data between the microprocessors themselves and the switch interfaces over the channels. For example, every storage controller has a converged network adapter (“CNA”) for connecting to FC (Fiber Channel)-based storage area networks (“SANs”) and Ethernet-based local area networks (“LANs”). “If you are using FCoE [Fiber Channel-over-Ethernet] on your Ethernet infrastructure, FCoE must be configured at the switch level before your FC service can run over the existing Ethernet infrastructure … You must install a Unified Target Adapter (UTA) on your storage system and a converged network adapter (CNA) on your host. These adapters are required for running FCoE traffic over your Ethernet network.”37
`
`
`
`32 IV_NETAPP_000431 at 1 (emphasis added).
`33 IV_NETAPP_000439 at 1 (emphasis added).
`34 IV_NETAPP_000439 at 2 (emphasis added).
`35 IV_NETAPP_000431 at 1 (emphasis added).
`36 IV_NETAPP_000431 at 1 (emphasis added).
`
`
`12
`
`NETAPP ET AL. EXHIBIT 1012
`
`Page 12 of 16
`
`
`
`
`
`Intellectual Ventures I, LLC et al v. NetApp, Inc., Case No. 1:16-cv-10868-IT
`
`
`
[b] exchanging packets containing the data between the microprocessor interfaces and a plurality of switch interfaces over channels;
`
`The Accused Products exchange packets containing the data between the microprocessor interfaces and
`a plurality of switch interfaces over channels.
`
`“exchange packets containing the data between the microprocessor interfaces and a plurality of switch
`interfaces”: As discussed in Exemplary Evidence of Infringement of Claim 1, the Accused Products
`utilize, for example, fiber channel (FC) controllers that exchange packets containing data between the
`microprocessor and memory interfaces, and the plurality of switch interfaces over the channels.
`
`“plurality of switch interfaces”: NetApp MetroCluster includes dedicated switches for exchanging
`packets over channels. Controllers and storage connect to switches directly via, for example, fiber
`channel. “Fabric MetroCluster implements two fabrics (one for redundancy) across sites. Each fabric
`consists of two switches (one on each site), so therefore four switches per MetroCluster configuration.
`… The controllers and storage connect to the switches directly (controllers do not directly attach to
`storage as in configurations other than MetroCluster), and the switches cannot be shared by traffic other
`than MetroCluster.”38
[c] exchanging the packets between the switch interfaces through a switch fabric;

The Accused Products exchange the packets between the switch interfaces through a switch fabric.
`
For example, the Accused Products include dedicated switches for exchanging packets over channels. Controllers and storage connect to switches directly via, for example, fiber channel. “Fabric MetroCluster implements two fabrics (one for redundancy) across sites. Each fabric consists of two switches (one on each site), so therefore four switches per MetroCluster configuration. … The controllers and storage connect to the switches directly (controllers do not directly attach to storage as in configurations other than MetroCluster), and the switches cannot be shared by traffic other than MetroCluster.”39
`
`37 IV_NETAPP_000405 at 1 (emphasis added).
`38 IV_NETAPP_000321 at 28 (emphases added).
`
`
`13
`
`NETAPP ET AL. EXHIBIT 1012
`
`Page 13 of 16
`
`
`
`
`
`Intellectual Ventures I, LLC et al v. NetApp, Inc., Case No. 1:16-cv-10868-IT
`
`
[d] exchanging the packets between the switch interfaces and a memory interface over the channels;
`The Accused Products exchange the packets between the switch interfaces and a memory interface over
`the channels.
`
`For example, NetApp offers data storage systems with native FCoE support, “FCoE - combining the
`Fibre Channel protocol and an enhanced 10-Gigabit Ethernet physical transport - expands options for
`SAN connectivity and networking.” “NetApp Unified Connect supports FC, FCoE, iSCSI, NFS, and
`CIFS protocols concurrently over a shared network port using the NetApp unified target adapter.”40
`These adapters interface with the memory, and are configured to exchange packets between the
`memory device and the switch interface.
[e] exchanging the data between the memory interface and a memory device; and

The Accused Products exchange the data between the memory interface and a memory device.
`
`“memory interface”: As discussed in Exemplary Evidence of Infringement of claim 1[e], the Accused
`Products have a memory interface that exchanges data between itself and a memory device. The
`Accused Products use, for example, RAID controllers that exchange data between memory devices
`such as hard disks or flash drives, and NetApp unified target adapters, as shown in Table 4 below.
`
`
`39 IV_NETAPP_000321 at 28 (emphases added).
`40 IV_NETAPP_000419 at 1 (emphasis added).
`
`
`14
`
`
`
`NETAPP ET AL. EXHIBIT 1012
`
`Page 14 of 16
`
`
`
`
`
`Intellectual Ventures I, LLC et al v. NetApp, Inc., Case No. 1:16-cv-10868-IT
`
`
`Table 441
`
`41 IV_NETAPP_000408 at 5.
`
`
“memory device”: The Accused Products are storage systems that comprise at least one memory device. Different storage systems have different maximum disk drive capacities, as shown in Exemplary Evidence of Infringement of claim 11 at Table 1.
`
[f] in the interfaces, performing error correction of the data in the packets exchanged over the channels.

The Accused Products perform error correction of the data in the packets exchanged over the channels. As discussed extensively in Exemplary Evidence of Infringement of Claim 1, error correction occurs at each of the infringing interfaces that the Accused Products comprise.
`
25. The method of claim 24 wherein performing error correction of the data in the packets exchanged over the channels comprises: adding error correction codes to the packets being transferred over the channels; checking the error correction codes in the packets being received over the channels; and transferring a retry request if one of the packets being received has an error.

The Accused Products perform the method of claim 24 wherein performing error correction of the data in the packets exchanged over the channels comprises: adding error correction codes to the packets being transferred over the channels; checking the error correction codes in the packets being received over the channels; and transferring a retry request if one of the packets being received has an error. See, e.g., Exemplary Evidence of Infringement of claim 2.
`
31. The method of claim 24 further comprising: exchanging the packets between the switch interfaces and a bus interface over the channels; and exchanging the data between the bus interface and a bus.

The Accused Products perform the method of claim 24 further comprising: exchanging the packets between the switch interfaces and a bus interface over the channels; and exchanging the data between the bus interface and a bus. See, e.g., Exemplary Evidence of Infringement of claim 8.
`
34. The method of claim 24 wherein the channels each comprise a point-to-point connection between a pair of the interfaces.

The Accused Products perform the method of claim 24 wherein the channels each comprise a point-to-point connection between a pair of the interfaces. See, e.g., Exemplary Evidence of Infringement of claim 12.