`
(19) European Patent Office
     Office européen des brevets

(11) EP 0 416 732 B1

(12) EUROPEAN PATENT SPECIFICATION

(45) Date of publication and mention
of the grant of the patent:
30.12.1998 Bulletin 1998/53
`
`(21) Application number: 90308007.5
`
`(22) Date of filing: 20.07.1990
`
(51) Int. Cl.6: G06F 1/24, G06F 11/00, G06F 11/14
`
`(54) Targeted resets in a data processor
`
Gezielte Rücksetzungen in einem Datenprozessor
`
Remises à zéro sélectives dans un processeur de données
`
`(84) Designated Contracting States:
`AT BE CH DE DK ES FR GB GR IT LI LU NL SE
`
`(30) Priority: 01.08.1989 US 388087
`
`(43) Date of publication of application:
`13.03.1991 Bulletin 1991/11
`
`(73) Proprietor: DIGITAL EQUIPMENT CORPORATION
`Maynard, MA 01754 (US)
`
`(72) Inventors:
`• Bruckert, William
`Northboro, Massachusetts 01532 (US)
`• Kovalcin, David
`Grafton, Massachusetts 01519 (US)
`• Bissett, Thomas D.
`Derry, New Hampshire 03038 (US)
`
`• Munzer, John
`Brookline, Massachusetts 02146 (US)
`• Norcross, Mitchell
`Nashua, New Hampshire 03062 (US)
`
`(74) Representative: Goodman, Christopher et al
`Eric Potter Clarkson,
`Park View House,
`58 The Ropewalk
`Nottingham NG1 5DD (GB)
`
(56) References cited:
EP-A- 0 077 154
EP-A- 0 306 244
US-A- 4 580 232
US-A- 4 757 442
`
• IBM TECHNICAL DISCLOSURE BULLETIN, vol. 29, no. 8, January 1987, New York, US, pages 3562-3563, 'PROGRAMMABLE SYSTEM AND POWER'
`
`Note: Within nine months from the publication of the mention of the grant of the European patent, any person may give
`notice to the European Patent Office of opposition to the European patent granted. Notice of opposition shall be filed in
`a written reasoned statement. It shall not be deemed to have been filed until the opposition fee has been paid. (Art.
`99(1) European Patent Convention).
`
`Printed by Jouve, 75001 PARIS (FR)
`
`
`Description
`
`I. BACKGROUND OF THE INVENTION
`
`The present invention relates to the field of resetting
`a data processor and, more particularly, to the field of
`managing different classes of resets in a data processor.
`All data processing systems need the capability of
`resetting under certain conditions, such as during power
`up or when certain errors occur. Without resets there
`would be no way to set the data processing system into
`a known state either to begin initialization routines or to
`begin error recovery routines.
`The problem with resets, however, is that they have
wide-ranging effects. In general, resets disrupt the nor-
`mal flow of instruction execution and may cause a loss
`of data or information. Sometimes such drastic action is
`required to prevent more serious problems, but often the
`effect of the resets is worse than the condition which
`caused the resets.
`Another problem with resets in conventional ma-
`chines is that they are not localized. In other words, an
`entire data processing system is reset when only a por-
`tion needs to be. This is particularly a problem in sys-
`tems employing multiple processors such as for fault-
`tolerant applications. In such systems, an error in one
`of the processors can propagate to the other processors
`and bring the entire system to a halt. If the originating
`processor was in error in generating resets, then the ef-
`fect is to cause an unnecessary halt in execution.
`It would therefore be advantageous to design a sys-
`tem in which the resets are matched to the conditions
`which generated the reset.
`It would also be advantageous for such a system to
`have several classes of resets with different effects.
`It would be additionally advantageous if, in a multi-
`ple processor data processing system, the resets in one
`of the processors did not automatically propagate to the
`other processors.
`Additional advantages of this invention will be set
`forth in part in the description which follows and in part
`will be obvious from that description or may be learned
by practising the invention. The advantages may be realized by the methods and apparatus particularly pointed out in the appended claims.
US Patent 4,580,232 to Dungan et al. teaches designation of one of the processors in a system as a master processor which, in the event of a software crash, resets the other processors. A reset signal is automatically conveyed to the other processors by the master processor to restore the other processors back to normal operation.
`IBM Technical Disclosure Bulletin, Vol. 29, No. 8,
`January 1987 teaches a technique that allows a pro-
`gram in the processor to invoke resets equivalent to sys-
`tem reset and power-on reset for unattended environ-
`ments.
European published Patent Application No. 0 306 244 A2 teaches a fault tolerant computer system with
`fault isolation and repair. Error checking devices detect
`presence of errors in the CPU. Error storage devices
`are coupled to transaction data storage devices and er-
`ror checking devices for stopping storage of additional
`messages in transaction data storage devices in the
`event of detected errors.
`
`II. SUMMARY OF THE INVENTION
`
`The present invention, in its broad form, resides in
`a method and system of resetting a data processing sys-
`tem without altering the sequence of instructions of the
steps executed by the data processing system, as recited in claims 1 and 11 respectively.
`
`III. BRIEF DESCRIPTION OF THE DRAWINGS
`
`5
`
`10
`
`15
`
`The accompanying drawings, which are incorporat-
`20 ed in and which constitute a part of this specification il-
`lustrate one embodiment of the invention and, together
`with the description of the invention, explain the princi-
`ples of the invention.
`
Fig. 1 is a block diagram of a preferred embodiment of a fault tolerant computer system which practices the present invention;
Fig. 2 is an illustration of the physical hardware containing the fault tolerant computer system in Fig. 1;
Fig. 3 is a block diagram of the CPU module shown in the fault tolerant computer system shown in Fig. 1;
Fig. 4 is a block diagram of an interconnected CPU module and I/O module for the computer system shown in Fig. 1;
Fig. 5 is a block diagram of a memory module for the fault tolerant computer system shown in Fig. 1;
Fig. 6 is a detailed diagram of the elements of the control logic in the memory module shown in Fig. 5;
Fig. 7 is a block diagram of portions of the primary memory controller of the CPU module shown in Fig. 3;
Fig. 8 is a block diagram of the DMA engine in the primary memory controller of the CPU module of Fig. 3;
Fig. 9 is a diagram of error processing circuitry in the primary memory controller of the CPU module of Fig. 3;
Fig. 10 is a drawing of some of the registers of the cross-link in the CPU module shown in Fig. 3;
Fig. 11 is a block diagram of the elements which route control signals in the cross-links of the CPU module shown in Fig. 3;
Fig. 12 is a block diagram of the elements which route data and address signals in the primary cross-link of the CPU module shown in Fig. 3;
Fig. 13 is a state diagram showing the states for the cross-link of the CPU module shown in Fig. 3;
`
`25
`
`30
`
`35
`
`40
`
`45
`
`50
`
`55
`
`2
`
`Sonos Ex. 1009, p. 2
` Sonos v. Google
` IPR2021-00964
`
`
`
`3
`
`EP 0 416 732 B1
`
`4
`
Fig. 14 is a block diagram of the timing system for the fault tolerant computer system of Fig. 1;
Fig. 15 is a timing diagram for the clock signals generated by the timing system in Fig. 14;
Fig. 16 is a detailed diagram of a phase detector for the timing system shown in Fig. 14;
Fig. 17 is a block diagram of an I/O module for the computer system of Fig. 1;
Fig. 18 is a block diagram of the firewall element in the I/O module shown in Fig. 17;
Fig. 19 is a detailed diagram of the elements of the cross-link pathway for the computer system of Fig. 1;
Figs. 20A-20E are data flow diagrams for the computer system in Fig. 1;
Fig. 21 is a block diagram of zone 20 showing the routing of reset signals;
Fig. 22 is a block diagram of the components involved in resets in the CPU module shown in Fig. 3; and
Fig. 23 is a diagram of clock reset circuitry.
`
`IV. DESCRIPTION OF THE PREFERRED
`EMBODIMENT
`
`Reference will now be made in detail to a presently
`preferred embodiment of the invention, an example of
`which is illustrated in the accompanying drawings.
`
`A. SYSTEM DESCRIPTION
`
`Fig. 1 is a block diagram of a fault tolerant computer
`system 10 in accordance with the present invention.
`Fault tolerant computer system 10 includes duplicate
`systems, called zones. In the normal mode, the two
`zones 11 and 11' operate simultaneously. The duplica-
`tion ensures that there is no single point of failure and
`that a single error or fault in one of the zones 11 or 11'
`will not disable computer system 10. Furthermore, all
`such faults can be corrected by disabling or ignoring the
`device or element which caused the fault. Zones 11 and
`11' are shown in Fig. 1 as respectively including dupli-
`cate processing systems 20 and 20'. The duality, how-
`ever, goes beyond the processing system.
`Fig. 2 contains an illustration of the physical hard-
`ware of fault tolerant computer system 10 and graphi-
`cally illustrates the duplication of the systems. Each
`zone 11 and 11' is housed in a different cabinet 12 and
`12', respectively. Cabinet 12 includes battery 13, power
`regulator 14, cooling fans 16, and AC input 17. Cabinet
`12' includes separate elements corresponding to ele-
`ments 13, 14, 16 and 17 of cabinet 12.
`As explained in greater detail below, processing
`systems 20 and 20' include several modules intercon-
`nected by backplanes. If a module contains a fault or
`error, that module may be removed and replaced with-
`out disabling computing system 10. This is because
`processing systems 20 and 20' are physically separate,
`
`20
`
`15
`
have separate backplanes into which the modules are plugged, and can operate independently of each other. Thus modules can be removed from and plugged into the backplane of one processing system while the other processing system continues to operate.
In the preferred embodiment, the duplicate processing systems 20 and 20' are identical and contain identical modules. Thus, only processing system 20 will be described completely, with the understanding that processing system 20' operates equivalently.
`Processing system 20 includes CPU module 30
`which is shown in greater detail in Figs. 3 and 4. CPU
`module 30 is interconnected with CPU module 30' in
`processing system 20' by a cross-link pathway 25 which
`is described in greater detail below. Cross-link pathway
`25 provides data transmission paths between process-
`ing systems 20 and 20' and carries timing signals to en-
`sure that processing systems 20 and 20' operate syn-
`chronously.
Processing system 20 also includes I/O modules 100, 110, and 120. I/O modules 100, 110, 120, 100', 110' and 120' are independent devices. I/O module 100 is shown in greater detail in Figs. 1, 4, and 17. Although multiple I/O modules are shown, duplication of such modules is not a requirement of the system. Without such duplication, however, some degree of fault tolerance will be lost.
Each of the I/O modules 100, 110 and 120 is connected to CPU module 30 by dual rail module interconnects 130 and 132. Module interconnects 130 and 132 serve as the I/O interconnect and are routed across the backplane for processing system 20. For purposes of this application, the data pathway including CPU 40, memory controller 70, cross-link 90 and module interconnect 130 is considered as one rail, and the data pathway including CPU 50, memory controller 75, cross-link 95, and module interconnect 132 is considered as another rail. During proper operation, the data on both rails is the same.
`
`B. FAULT TOLERANT SYSTEM PHILOSOPHY
`
`Fault tolerant computer system 10 does not have a
`single point of failure because each element is duplicat-
`ed. Processing systems 20 and 20' are each a fail stop
`processing system which means that those systems can
`detect faults or errors in the subsystems and prevent
`uncontrolled propagation of such faults and errors to
`other subsystems, but they have a single point of failure
`because the elements in each processing system are
`not duplicated.
`The two fail stop processing systems 20 and 20' are
`interconnected by certain elements operating in a de-
`fined manner to form a fail safe system. In the fail safe
`system embodied as fault tolerant computer system 10,
`the entire computer system can continue processing
`even if one of the fail stop processing systems 20 and
`20' is faulting.
`
`40
`
`45
`
`50
`
`55
`
`3
`
`Sonos Ex. 1009, p. 3
` Sonos v. Google
` IPR2021-00964
`
`
`
`5
`
`EP 0 416 732 B1
`
`6
`
`The two fail stop processing systems 20 and 20' are
`considered to operate in lockstep synchronism because
`CPUs 40, 50, 40' and 50' operate in such synchronism.
`There are three significant exceptions. The first is at in-
`itialization when a bootstrapping technique brings both
`processors into synchronism. The second exception is
`when the processing systems 20 and 20' operate inde-
`pendently (asynchronously) on two different workloads.
`The third exception occurs when certain errors arise in
processing systems 20 and 20'. In this last exception, the CPU and memory elements in one of the processing systems are disabled, thereby ending synchronous operation.
`When the system is running in lockstep I/O, only
`one I/O device is being accessed at any one time. All
`four CPUs 40, 50, 40' and 50', however, would receive
`the same data from that I/O device at substantially the
`same time. In the following discussion, it will be under-
`stood that lockstep synchronization of processing sys-
`tems means that only one I/O module is being accessed.
`The synchronism of duplicate processing systems
`20 and 20' is implemented by treating each system as
`a deterministic machine which, starting in the same
`known state and upon receipt of the same inputs, will
`always enter the same machine states and produce the
`same results in the absence of error. Processing sys-
`tems 20 and 20' are configured identically, receive the
`same inputs, and therefore pass through the same
`states. Thus, as long as both processors operate syn-
`chronously, they should produce the same results and
`enter the same state. If the processing systems are not
`in the same state or produce different results, it is as-
`sumed that one of the processing systems 20 and 20'
`has faulted. The source of the fault must then be isolated
`in order to take corrective action, such as disabling the
`faulting module.
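
The fault-detection principle described above can be sketched in a few lines of C. This is an illustrative model only, not the patent's hardware: the zone structure, the step() update, and the names are hypothetical stand-ins for two deterministic machines checked against each other.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical model of a zone as a deterministic machine: the same
 * state and the same input always produce the same next state. */
typedef struct {
    uint32_t state;
} zone_t;

/* One lockstep step; any pure function of (state, input) would do. */
static uint32_t step(zone_t *z, uint32_t input)
{
    z->state = z->state * 1664525u + input;  /* arbitrary deterministic update */
    return z->state;
}

/* Drive both zones with the same input stream; in the absence of
 * error they must produce identical results and identical states,
 * so any divergence is treated as a fault in one of the zones. */
static bool zones_in_lockstep(zone_t *a, zone_t *b,
                              const uint32_t *inputs, int n)
{
    for (int i = 0; i < n; i++) {
        if (step(a, inputs[i]) != step(b, inputs[i]) || a->state != b->state)
            return false;  /* fault detected: isolate before output propagates */
    }
    return true;
}
```
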
`Error detection generally involves overhead in the
`form of additional processing time or logic. To minimize
`such overhead, a system should check for errors as in-
`frequently as possible consistent with fault tolerant op-
`eration. At the very least, error checking must occur be-
`fore data is outputted from CPU modules 30 and 30'.
`Otherwise, internal processing errors may cause im-
`proper operation in external systems, like a nuclear re-
`actor, which is the condition that fault tolerant systems
`are designed to prevent.
`There are reasons for additional error checking. For
`example, to isolate faults or errors it is desirable to check
`the data received by CPU modules 30 and 30' prior to
`storage or use. Otherwise, when erroneous stored data
`is later accessed and additional errors result, it becomes
`difficult or impossible to find the original source of errors,
`especially when the erroneous data has been stored for
`some time. The passage of time as well as subsequent
`processing of the erroneous data may destroy any trail
`back to the source of the error.
`"Error latency," which refers to the amount of time
`an error is stored prior to detection, may cause later
`
`5
`
`problems as well. For example, a seldom-used routine
`may uncover a latent error when the computer system
`is already operating with diminished capacity due to a
`previous error. When the computer system has dimin-
`ished capacity, the latent error may cause the system to
`crash.
Furthermore, it is desirable in the dual rail systems of processing systems 20 and 20' to check for errors prior to transferring data to single rail systems, such as a shared resource like memory. This is because there are no longer two independent sources of data after such transfers, and if any error in the single rail system is later detected, then error tracing becomes difficult if not impossible. The preferred method of error handling is set forth in Application No. 90308000.0, filed this same date, entitled "Software Error Handling" and published as EP-0415545.
`
`15
`
`C MODULE DESCRIPTION
`
`20
`
`1. CPU Module
`
The elements of CPU module 30 which appear in Fig. 1 are shown in greater detail in Figs. 3 and 4. Fig. 3 is a block diagram of the CPU module, and Fig. 4 shows block diagrams of CPU module 30 and I/O module 100 as well as their interconnections. Only CPU module 30 will be described since the operation of and the elements included in CPU modules 30 and 30' are generally the same.
CPU module 30 contains dual CPUs 40 and 50. CPUs 40 and 50 can be standard central processing units known to persons of ordinary skill. In the preferred embodiment, CPUs 40 and 50 are VAX microprocessors manufactured by Digital Equipment Corporation, the assignee of this application.

Associated with CPUs 40 and 50 are cache memories 42 and 52, respectively, which are standard cache RAMs of sufficient memory size for the CPUs. In the preferred embodiment, the cache RAM is 4K x 64 bits. It is not necessary for the present invention to have a cache RAM, however.
`
`40
`
`2. Memory Module
`
Preferably, CPUs 40 and 50 can share up to four
`memory modules 60. Fig. 5 is a block diagram of one
`memory module 60 shown connected to CPU module
`30.
`
`During memory transfer cycles, status register
`transfer cycles, and EEPROM transfer cycles, each
`memory module 60 transfers data to and from primary
`memory controller 70 via a bidirectional data bus 85.
`Each memory module 60 also receives address, control,
`timing, and ECC signals from memory controllers 70
`and 75 via buses 80 and 82, respectively. The address
`signals on buses 80 and 82 include board, bank, and
`row and column address signals that identify the mem-
`
`45
`
`50
`
`55
`
`4
`
`Sonos Ex. 1009, p. 4
` Sonos v. Google
` IPR2021-00964
`
`
`
`7
`
`EP 0 416 732 B1
`
`8
`
`ory board, bank, and row and column address involved
`in the data transfer.
`As shown in Fig. 5, each memory module 60 in-
`cludes a memory array 600. Each memory array 600 is
`a standard RAM in which the DRAMs are organized into
`eight banks of memory. In the preferred embodiment,
`fast page mode type DRAMs are used.
Memory module 60 also includes control logic 610, data transceivers/registers 620, memory drivers 630, and an EEPROM 640. Data transceivers/registers 620 provide a data buffer and data interface for transferring data between memory array 600 and the bidirectional data lines of data bus 85. Memory drivers 630 distribute row and column address signals and control signals from control logic 610 to each bank in memory array 600 to enable transfer of a longword of data and its corresponding ECC signals to or from the memory bank selected by the memory board and bank address signals.
EEPROM 640, which can be any type of NVRAM (nonvolatile RAM), stores memory error data for off-line repair and configuration data, such as module size. When the memory module is removed after a fault, stored data is extracted from EEPROM 640 to determine the cause of the fault. EEPROM 640 is addressed via row address lines from drivers 630 and by EEPROM control signals from control logic 610. EEPROM 640 transfers eight bits of data to and from a thirty-two bit internal memory data bus 645.
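
Because EEPROM 640 is an eight-bit device behind a thirty-two bit bus, a longword necessarily moves as four byte-wide transfers. The C sketch below shows one plausible packing; the byte order and the read_byte callback are assumptions for illustration, as the description does not specify them.

```c
#include <stdint.h>

/* Assemble one 32-bit longword for internal bus 645 from four
 * successive byte-wide EEPROM accesses (byte order assumed). */
static uint32_t eeprom_read_longword(uint8_t (*read_byte)(unsigned addr),
                                     unsigned addr)
{
    uint32_t longword = 0;
    for (unsigned i = 0; i < 4; i++)
        longword |= (uint32_t)read_byte(addr + i) << (8 * i);
    return longword;
}
```
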
`Control logic 610 routes address signals to the ele-
`ments of memory module 60 and generates internal tim-
`ing and control signals. As shown in greater detail in Fig.
`6, control logic 610 includes a primary/mirror designator
`circuit 612.
`Primary/mirror designator circuit 612 receives two
`sets of memory board address, bank address, row and
`column address, cycle type, and cycle timing signals
`from memory controllers 70 and 75 on buses 80 and 82,
`and also transfers two sets of ECC signals to or from
`the memory controllers on buses 80 and 82. Transceiv-
`ers/registers in designator 612 provide a buffer and in-
`terface for transferring these signals to and from mem-
`ory buses 80 and 82. A primary/mirror multiplexer bit
`stored in status registers 618 indicates which one of
`memory controllers 70 and 75 is designated as the pri-
`mary memory controller and which is designated as the
`mirror memory controller, and a primary/mirror multi-
`plexer signal is provided from status registers 618 to
`designator 612.
Primary/mirror designator 612 provides two sets of signals for distribution in control logic 610. One set of signals includes designated primary memory board address, bank address, row and column address, cycle type, cycle timing, and ECC signals. The other set of signals includes designated mirror memory board address, bank address, row and column address, cycle type, cycle timing, and ECC signals. The primary/mirror multiplexer signal is used by designator 612 to select whether the signals on buses 80 and 82 will be respectively routed to the lines for carrying designated primary signals and to the lines for carrying designated mirror signals, or vice-versa.
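
Functionally, the designation amounts to a two-way swap under control of the multiplexer bit. The C sketch below is a minimal illustration; the struct contents and the bit's polarity are assumptions, not the circuit.

```c
#include <stdbool.h>

/* One controller's signal set as received on bus 80 or bus 82. */
typedef struct {
    unsigned board_addr, bank_addr, row_col_addr;
    unsigned cycle_type, cycle_timing, ecc;
} signal_set_t;

/* Route the bus 80 and bus 82 signal sets to the designated primary
 * and designated mirror lines, or vice-versa, according to the
 * primary/mirror multiplexer bit from status registers 618. */
static void designate(bool bus80_is_primary,
                      const signal_set_t *bus80, const signal_set_t *bus82,
                      signal_set_t *primary, signal_set_t *mirror)
{
    if (bus80_is_primary) {   /* memory controller 70 designated primary */
        *primary = *bus80;
        *mirror  = *bus82;
    } else {                  /* memory controller 75 designated primary */
        *primary = *bus82;
        *mirror  = *bus80;
    }
}
```
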
A number of time division multiplexed bidirectional lines are included in buses 80 and 82. At certain times after the beginning of memory transfer cycles, status register transfer cycles, and EEPROM transfer cycles, ECC signals corresponding to data on data bus 85 are placed on these time division multiplexed bidirectional lines. If the transfer cycle is a write cycle, memory module 60 receives data and ECC signals from the memory controllers. If the transfer cycle is a read cycle, memory module 60 transmits data and ECC signals to the memory controllers. At other times during transfer cycles, address, control, and timing signals are received by memory module 60 on the time division multiplexed bidirectional lines. Preferably, at the beginning of memory transfer cycles, status register transfer cycles, and EEPROM transfer cycles, memory controllers 70 and 75 transmit memory board address, bank address, and cycle type signals on these timeshared lines to each memory module 60.
Preferably, row address signals and column address signals are multiplexed on the same row and column address lines during transfer cycles. First, a row address is provided to memory module 60 by the memory controllers, followed by a column address about sixty nanoseconds later.
A sequencer 616 receives as inputs a system clock signal and a reset signal from CPU module 30, and receives the designated primary cycle timing, designated primary cycle type, designated mirror cycle timing, and designated mirror cycle type signals from the transceivers/registers in designator 612.
Sequencer 616 is a ring counter with associated steering logic that generates and distributes a number of control and sequence timing signals for the memory module that are needed in order to execute the various types of cycles. The control and sequence timing signals are generated from the system clock signals, the designated primary cycle timing signals, and the designated primary cycle type signals.
Sequencer 616 also generates a duplicate set of sequence timing signals from the system clock signals, the designated mirror cycle timing signals, and the designated mirror cycle type signals. These duplicate sequence timing signals are used for error checking. For data transfers of multiple longwords to and from memory module 60 in a fast page mode, each set of column addresses starting with the first set is followed by the next column address 120 nanoseconds later, and each longword of data is moved across bus 85 120 nanoseconds after the previous longword of data.

Sequencer 616 also generates tx/rx register control signals. The tx/rx register control signals are provided to control the operation of data transceivers/registers 620 and the transceivers/registers in designator 612. The direction of data flow is determined by the steering
logic in sequencer 616, which responds to the designated primary cycle type signals by generating tx/rx control and sequence timing signals to indicate whether and when data and ECC signals should be written into or read from the transceivers/registers in memory module 60. Thus, during memory write cycles, status register write cycles, and EEPROM write cycles, data and ECC signals will be latched into the transceivers/registers from buses 80, 82, and 85, while during memory read cycles, status register read cycles, and EEPROM read cycles, data and ECC signals will be latched into the transceivers/registers from memory array 600, status registers 618, or EEPROM 640 for output to CPU module 30.

Sequencer 616 also generates EEPROM control signals to control the operation of EEPROM 640.

The timing relationships that exist in memory module 60 are specified with reference to the rise time of the system clock signal, which has a period of thirty nanoseconds. All status register read and write cycles, and all memory read and write cycles of a single longword, are performed in ten system clock periods, i.e., 300 nanoseconds. Memory read and write transfer cycles may consist of multi-longword transfers. For each additional longword that is transferred, the memory transfer cycle is extended for four additional system clock periods. Memory refresh cycles and EEPROM write cycles require at least twelve system clock periods to execute, and EEPROM read cycles require at least twenty system clock periods.
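
These figures reduce to simple arithmetic on the thirty-nanosecond clock. The following C sketch (function names are illustrative, not part of the design) computes the length of a memory transfer cycle from the number of longwords moved:

```c
/* System clock period stated above: thirty nanoseconds. */
#define SYSTEM_CLOCK_NS 30

/* A memory transfer of n longwords (n >= 1) takes ten clock periods
 * for the first longword plus four periods per additional longword. */
static int transfer_clock_periods(int n_longwords)
{
    return 10 + 4 * (n_longwords - 1);
}

static int transfer_duration_ns(int n_longwords)
{
    return transfer_clock_periods(n_longwords) * SYSTEM_CLOCK_NS;
}

/* transfer_duration_ns(1) == 300 ns, as stated above;
 * transfer_duration_ns(3) == 540 ns. The four-period extension is
 * consistent with the 120 ns column address spacing noted earlier
 * (4 x 30 ns). */
```
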
`The designated primary cycle timing signal causes
`sequencer 616 to start generating the sequence timing
`and control signals that enable the memory module se-
`lected by the memory board address signals to imple-
`ment a requested cycle. The transition of the designated
`primary cycle timing signal to an active state marks the
`start of the cycle. The return of the designated primary
`cycle timing signal to an inactive state marks the end of
`the cycle.
`The sequence timing signals generated by se-
`quencer 616 are associated with the different states en-
`tered by the sequencer as a cycle requested by CPU
`module 30 is executed. In order to specify the timing re-
`lationship among these different states (and the timing
`relationship among sequence timing signals corre-
`sponding to each of these states), the discrete states
`that may be entered by sequencer 616 are identified as
`states SEQ IDLE and SEQ 1 to SEQ 19. Each state lasts
`for a single system clock period (thirty nanoseconds).
`Entry by sequencer 616 into each different state is trig-
`gered by the leading edge of the system clock signal.
`The leading edges of the system clock signal that cause
`sequencer 616 to enter states SEQ IDLE and SEQ 1 to
`SEQ 19 are referred to as transitions T IDLE and T1 to
`T19 to relate them to the sequencer states, i.e., TN is
`the system clock signal leading edge that causes se-
`quencer 616 to enter state SEQ N.
At times when CPU module 30 is not directing memory module 60 to execute a cycle, the designated primary cycle timing signal is not asserted, and the sequencer remains in state SEQ IDLE. The sequencer is started (enters state SEQ 1) in response to assertion by memory controller 70 of the cycle timing signal on bus 80, provided control logic 610 and sequencer 616 are located in the memory module selected by memory board address signals also transmitted from memory controller 70 on bus 80. The rising edge of the first system clock signal following assertion of the designated primary cycle active signal corresponds to transition T1.

As indicated previously, in the case of transfers of a single longword to or from memory array 600, the cycle is performed in ten system clock periods. The sequencer proceeds from SEQ IDLE, to states SEQ 1 through SEQ 9, and returns to SEQ IDLE.

Memory read and write cycles may be extended, however, to transfer additional longwords. Memory array 600 preferably uses "fast page mode" DRAMs. During multi-longword reads and writes, transfers of data to and from the memory array after transfer of the first longword are accomplished by repeatedly updating the column address and regenerating a CAS (column address strobe) signal.

During multi-longword transfer cycles, these updates of the column address can be implemented because sequencer 616 repeatedly loops from states SEQ 4 through SEQ 7 until all of the longwords are transferred. For example, if three longwords are being read from or written into memory array 600, the sequencer enters states SEQ IDLE, SEQ 1, SEQ 2, SEQ 3, SEQ 4, SEQ 5, SEQ 6, SEQ 7, SEQ 4, SEQ 5, SEQ 6, SEQ 7, SEQ 4, SEQ 5, SEQ 6, SEQ 7, SEQ 8, SEQ 9, and SEQ IDLE.
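
The looping behaviour can be reproduced with a short C sketch. It is a model of the state progression only (sequencer 616 is a ring counter with steering logic, not software); print_sequence is an illustrative name.

```c
#include <stdio.h>

/* Print the sequencer state progression for a transfer of n
 * longwords: SEQ 1..3 once, SEQ 4..7 once per longword, then
 * SEQ 8..9 and back to SEQ IDLE. */
static void print_sequence(int n_longwords)
{
    printf("SEQ IDLE");
    for (int s = 1; s <= 3; s++)
        printf(", SEQ %d", s);
    for (int word = 0; word < n_longwords; word++)
        for (int s = 4; s <= 7; s++)
            printf(", SEQ %d", s);        /* one loop per longword */
    for (int s = 8; s <= 9; s++)
        printf(", SEQ %d", s);
    printf(", SEQ IDLE\n");
}

/* print_sequence(3) reproduces the three-longword example given
 * above, with the SEQ 4..SEQ 7 loop traversed three times. */
```
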
`
`20
`
`25
`
`30
`
`35
`
`SS
`
`6
`
`
`puts of drivers 630 are applied to the address inputs of
`the DRAMs in memory array 600, and also are returned
`to control logic 610 for comparison with the designated
`mirror row and column address signals to check for er-
`rors. During status register transfer cycles and EEP-
`ROM transfer cycles, column address signals are not
`needed to select a particular storage location.
`During a memory transfer cycle, row address sig-
`nals are the first signals presented on the timeshared
`row and column address lines of buses 80 and 82. Dur-
`ing state SEQ IDLE, row address signals are transmitted
`by the memory controllers on the row and column ad-
`dress lines, and the row address is stable from at least
`fifteen nanoseconds before the T1 transition until ten na-
`noseconds after the T1 transition. Next, column address
`signals are transmitted by the memory controllers on the
`row and column address lines, and the column address
`is stable from at least ten nanoseconds before the T3
`transition until fifteen nanoseconds after the T4 transi-
`tion. In the case of multi-longword transfers during mem-
`ory transfer cycles, subsequent column address signals
`are then transmitted on the row and column address
`lines, and these subsequent column addresses are sta-
`ble from ten nanoseconds before the T6 transition until
`fifteen nanoseconds after the T7 transition.
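
The stability windows just described can be collected into a small table. The C sketch below records them as data; the struct and field names are illustrative only.

```c
/* Setup/hold windows for the time-multiplexed row and column
 * address lines, in nanoseconds, relative to the named transitions. */
typedef struct {
    const char *signal;
    const char *start_ref;  /* stable at least setup_ns before this */
    const char *end_ref;    /* stable at least hold_ns after this   */
    int setup_ns;
    int hold_ns;
} addr_window_t;

static const addr_window_t addr_windows[] = {
    { "row address",               "T1", "T1", 15, 10 },
    { "column address",            "T3", "T4", 10, 15 },
    { "subsequent column address", "T6", "T7", 10, 15 },
};
```
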
`Generator/checker 617 receives the two sets of se-
`quence timing signals generated by sequencer 616. In
`addition, the designated primary cycle type and bank
`address signals and the designated mirror cycle type
`and ban