Approved for use through 09/30/2010. OMB 0651-0032
U.S. Patent and Trademark Office; U.S. DEPARTMENT OF COMMERCE
Under the Paperwork Reduction Act of 1995, no persons are required to respond to a collection of information unless it displays a valid OMB control number.
`
UTILITY PATENT APPLICATION TRANSMITTAL
(Only for new nonprovisional applications under 37 CFR 1.53(b))
Express Mail Label No.: ______

APPLICATION ELEMENTS
See MPEP chapter 600 concerning utility patent application contents.

1. [ ] Fee Transmittal Form (e.g., PTO/SB/17)
2. [ ] Applicant claims small entity status. See 37 CFR 1.27.
3. [ ] Specification [Total Pages: 43]
       Both the claims and abstract must start on a new page.
       (For information on the preferred arrangement, see MPEP 608.01(a))
4. [ ] Drawing(s) (35 U.S.C. 113) [Total Sheets: 33]
5. Oath or Declaration [Total Sheets: 4]
   a. [ ] Newly executed (original or copy)
   b. [ ] A copy from a prior application (37 CFR 1.63(d)) (for continuation/divisional with Box 18 completed)
       i. [ ] DELETION OF INVENTOR(S): Signed statement attached deleting inventor(s) named in the prior application, see 37 CFR 1.63(d)(2) and 1.33(b).
6. [ ] Application Data Sheet. See 37 CFR 1.76
7. [ ] CD-ROM or CD-R in duplicate, large table or Computer Program (Appendix)
   [ ] Landscape Table on CD
8. Nucleotide and/or Amino Acid Sequence Submission (if applicable, items a.-c. are required)
   a. [ ] Computer Readable Form (CRF)
   b. Specification Sequence Listing on:
       i. [ ] CD-ROM or CD-R (2 copies); or
       ii. [ ] Paper
   c. [ ] Statements verifying identity of above copies

ADDRESS TO:
Commissioner for Patents
P.O. Box 1450
Alexandria, VA 22313-1450

ACCOMPANYING APPLICATION PARTS
9. [ ] Assignment Papers (cover sheet & document(s))
       Name of Assignee: ______
10. [ ] 37 CFR 3.73(b) Statement (when there is an assignee) / Power of Attorney
11. [ ] English Translation Document (if applicable)
12. [ ] Information Disclosure Statement (PTO/SB/08 or PTO-1449)
       [ ] Copies of citations attached
- [ ] Preliminary Amendment
- [ ] Return Receipt Postcard (MPEP 503) (Should be specifically itemized)
- [ ] Certified Copy of Priority Document(s) (if foreign priority is claimed)
- [ ] Nonpublication Request under 35 U.S.C. 122(b)(2)(B)(i).

18. If a CONTINUING APPLICATION, check appropriate box, and supply the requisite information below and in the first sentence of the specification following the title, or in an Application Data Sheet under 37 CFR 1.76:
    [ ] Continuation    [ ] Divisional    [ ] Continuation-in-part (CIP)    of prior application No.: ______
    Prior application information: Examiner: ______    Art Unit: ______

19. CORRESPONDENCE ADDRESS
    Applicant must attach form PTO/SB/35 or equivalent.
    [ ] The address associated with Customer Number: ______    OR    [ ] Correspondence address below

    Name: Ashok Tankha
    Address: 36 Greenleigh Drive
    Telephone: 356-266-5145    Email: ash@ipprocurement.com
    Registration No. (Attorney/Agent): 33802

This collection of information is required by 37 CFR 1.53(b). The information is required to obtain or retain a benefit by the public which is to file (and by the USPTO to process) an application. Confidentiality is governed by 35 U.S.C. 122 and 37 CFR 1.11 and 1.14. This collection is estimated to take 12 minutes to complete, including gathering, preparing, and submitting the completed application form to the USPTO. Time will vary depending upon the individual case. Any comments on the amount of time you require to complete this form and/or suggestions for reducing this burden should be sent to the Chief Information Officer, U.S. Patent and Trademark Office, U.S. Department of Commerce, P.O. Box 1450, Alexandria, VA 22313-1450. DO NOT SEND FEES OR COMPLETED FORMS TO THIS ADDRESS. SEND TO: Commissioner for Patents, P.O. Box 1450, Alexandria, VA 22313-1450.
If you need assistance in completing the form, call 1-800-PTO-9199 and select option 2.
SONOS EXHIBIT 1018
Page 1 of 200
`
`
`
`
MICROPHONE ARRAY SYSTEM

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of provisional patent application number 61/403,952 titled "Microphone array design and implementation for telecommunications and handheld devices", filed on September 24, 2010 in the United States Patent and Trademark Office.

[0002] The specification of the above referenced patent application is incorporated herein by reference in its entirety.
`
BACKGROUND

[0003] Microphones constitute an important element in today's speech acquisition devices. Currently, most of the hands-free speech acquisition devices, for example, mobile devices, lapels, headsets, etc., convert sound into electrical signals by using a microphone embedded within the speech acquisition device. However, the paradigm of a single microphone often does not work effectively because the microphone picks up many ambient noise signals in addition to the desired sound, specifically when the distance between a user and the microphone is more than a few inches. Therefore, there is a need for a microphone system that operates under a variety of different ambient noise conditions and that places fewer constraints on the user with respect to the microphone, thereby eliminating the need to wear the microphone or be in close proximity to the microphone.
`
[0004] To mitigate the drawbacks of the single microphone system, there is a need for a microphone array that achieves directional gain in a preferred spatial direction while suppressing ambient noise from other directions. Conventional microphone arrays include arrays that are typically developed for applications such as radar and sonar, but are generally not suitable for hands-free or handheld speech acquisition devices. The
`
`
`
`
main reason is that the desired sound signal has an extremely wide bandwidth relative to its center frequency, thereby rendering the narrowband techniques employed in conventional microphone arrays unsuitable. In order to cater to such broadband speech applications, the array size needs to be vastly increased, making conventional microphone arrays large and bulky and precluding them from broader applications, for example, in mobile and handheld communication devices. There is a need for a microphone array system that provides an effective response over a wide spectrum of frequencies while being unobtrusive in terms of size.
`
[0005] Hence, there is a long-felt but unresolved need for a broadband microphone array and broadband beamforming system that enhances the acoustics of a desired sound signal while suppressing ambient noise signals.

SUMMARY OF THE INVENTION

[0006] This summary is provided to introduce a selection of concepts in a simplified form that are further described in the detailed description of the invention. This summary is not intended to identify key or essential inventive concepts of the claimed subject matter, nor is it intended for determining the scope of the claimed subject matter.
`
[0007] The method and system disclosed herein address the above stated need for enhancing acoustics of a target sound signal received from a target sound source, while suppressing ambient noise signals. As used herein, the term "target sound signal" refers to a sound signal from a desired or target sound source, for example, a person's speech that needs to be enhanced. A microphone array system comprising an array of sound sensors positioned in an arbitrary configuration, a sound source localization unit, an adaptive beamforming unit, and a noise reduction unit is provided. The sound source localization unit, the adaptive beamforming unit, and the noise reduction unit are in operative communication with the array of sound sensors. The array of sound sensors is, for example, a linear array of sound sensors, a circular array of sound sensors, or an arbitrarily distributed coplanar array of sound sensors. The array of sound sensors, herein
`
`
`
`
referred to as a "microphone array", receives sound signals from multiple disparate sound sources. The method disclosed herein can be applied on a microphone array with an arbitrary number of sound sensors having, for example, an arbitrary two dimensional (2D) configuration. The sound signals received by the sound sensors in the microphone array comprise the target sound signal from the target sound source among the disparate sound sources, and ambient noise signals.
`
[0008] The sound source localization unit estimates a spatial location of the target sound signal from the received sound signals, for example, using a steered response power-phase transform. The adaptive beamforming unit performs adaptive beamforming for steering a directivity pattern of the microphone array in a direction of the spatial location of the target sound signal. The adaptive beamforming unit thereby enhances the target sound signal from the target sound source and partially suppresses the ambient noise signals. The noise reduction unit suppresses the ambient noise signals for further enhancing the target sound signal received from the target sound source.
`
[0009] In an embodiment where the target sound source that emits the target sound signal is in a two dimensional plane, a delay between each of the sound sensors and an origin of the microphone array is determined as a function of the distance between each of the sound sensors and the origin, a predefined angle between each of the sound sensors and a reference axis, and an azimuth angle between the reference axis and the target sound signal. In another embodiment where the target sound source that emits the target sound signal is in a three dimensional plane, the delay between each of the sound sensors and the origin of the microphone array is determined as a function of the distance between each of the sound sensors and the origin, a predefined angle between each of the sound sensors and a first reference axis, an elevation angle between a second reference axis and the target sound signal, and an azimuth angle between the first reference axis and the target sound signal. This method of determining the delay enables beamforming for arbitrary numbers of sound sensors and multiple arbitrary microphone array configurations. The delay is determined, for example, in terms of number of samples.
`
`
`
`
Once the delay is determined, the microphone array can be aligned to enhance the target sound signal from a specific direction.
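The delay computation described above can be sketched as follows. This is a minimal illustration assuming a far-field source and a cosine-projection delay model; the exact trigonometric form for a given geometry follows the tables of FIGS. 6A-6B and 7B, and the sampling rate and speed of sound below are assumed values, not taken from the specification.

```python
import math

def delay_in_samples(d_n, phi_n, theta, psi=None, fs=16000, c=343.0):
    """Delay between sound sensor n and the array origin, in samples.

    d_n:   distance from sensor n to the origin (meters)
    phi_n: predefined angle of sensor n from the reference axis (radians)
    theta: azimuth angle of the target sound signal (radians)
    psi:   elevation angle for the 3D case (radians), or None for 2D

    The cosine-projection form below is an illustrative assumption for a
    far-field planar-wavefront model, not the patent's exact expression.
    """
    tau = d_n * math.cos(theta - phi_n) / c   # delay in seconds (2D case)
    if psi is not None:
        tau *= math.cos(psi)                  # scale by elevation in the 3D case
    return tau * fs                           # convert seconds to samples
```

Rounding the result gives the integer sample delay used to align the sensor outputs before summation.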
`
`[0010] The adaptive beamforming unit comprises a fixed beamformer, a blocking
`
`matrix, and an adaptivefilter. The fixed beamformersteers the directivity pattern of the
`
`microphonearray in the direction of the spatial location of the target sound signal from
`
`the target sound source for enhancing the target sound signal, when the target sound
`
`source is in motion. The blocking matrix feeds the ambientnoise signals to the adaptive
`
`filter by blocking the target sound signal from the target sound source. The adaptivefilter
`
`adaptively filters the ambient noise signals in response to detecting the presence or
`
`absence of the target sound signal in the sound signals received from the disparate sound
`
`sources. The fixed beamformer performs fixed beamforming, for example, byfiltering
`
`and summing output sound signals from the sound sensors.
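The fixed beamformer / blocking matrix / adaptive filter arrangement described above is the classic generalized sidelobe canceller structure. A minimal two-sensor time-domain sketch follows; the sum and difference stages and the NLMS adaptation rule are illustrative assumptions, as a real system would use more sensors, steering delays, and sub-band processing:

```python
import numpy as np

def gsc_two_sensor(x0, x1, mu=0.1, L=16):
    """Minimal generalized sidelobe canceller for two time-aligned sensor
    signals. Fixed beamformer: average (passes the target); blocking
    matrix: difference (cancels the target, passes noise); adaptive
    filter: NLMS, which subtracts the noise estimate from the fixed
    beamformer output. Illustrative sketch only.
    """
    fbf = 0.5 * (x0 + x1)          # fixed beamformer output (target + residual noise)
    blk = x0 - x1                  # blocking-matrix output (noise reference)
    w = np.zeros(L)
    y = np.zeros_like(fbf)
    for n in range(L, len(fbf)):
        u = blk[n - L:n][::-1]                 # noise-reference tap vector
        y[n] = fbf[n] - w @ u                  # subtract estimated noise
        w += mu * y[n] * u / (u @ u + 1e-8)    # NLMS weight update
    return y
```

When both sensors carry an identical target signal, the blocking matrix output is zero and the beamformer passes the target through unchanged.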
`
[0011] In an embodiment, the adaptive filtering comprises sub-band adaptive filtering. The adaptive filter comprises an analysis filter bank, an adaptive filter matrix, and a synthesis filter bank. The analysis filter bank splits the enhanced target sound signal from the fixed beamformer and the ambient noise signals from the blocking matrix into multiple frequency sub-bands. The adaptive filter matrix adaptively filters the ambient noise signals in each of the frequency sub-bands in response to detecting the presence or absence of the target sound signal in the sound signals received from the disparate sound sources. The synthesis filter bank synthesizes a full-band sound signal using the frequency sub-bands of the enhanced target sound signal. In an embodiment, the adaptive beamforming unit further comprises an adaptation control unit for detecting the presence of the target sound signal and adjusting a step size for the adaptive filtering in response to detecting the presence or the absence of the target sound signal in the sound signals received from the disparate sound sources.
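The split / per-band filter / resynthesize flow above can be sketched with an FFT standing in for the analysis and synthesis filter banks. This is an assumption for illustration; the `gain_fn` hook is a hypothetical stand-in for the per-band adaptive filter matrix, and a practical bank would use overlapping windowed frames rather than the non-overlapping frames used here for exact reconstruction:

```python
import numpy as np

def subband_process(x, nfft=256, gain_fn=None):
    """Split a signal into frequency sub-bands (analysis), apply a
    per-band gain (stand-in for per-band adaptive filtering), and
    resynthesize the full-band signal (synthesis).

    Non-overlapping rectangular frames are used so that analysis followed
    by synthesis reconstructs the input exactly; real filter banks use
    overlapping windowed frames designed for perfect reconstruction.
    """
    out = np.zeros(len(x))
    for start in range(0, len(x) - nfft + 1, nfft):
        spec = np.fft.rfft(x[start:start + nfft])           # analysis filter bank
        if gain_fn is not None:
            spec = spec * gain_fn(spec)                     # per-band processing
        out[start:start + nfft] = np.fft.irfft(spec, nfft)  # synthesis filter bank
    return out
```

With no per-band gain applied, the output reproduces the input, which is the "perfect reconstruction" property referenced for FIG. 12.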
`
[0012] The noise reduction unit suppresses the ambient noise signals for further enhancing the target sound signal from the target sound source. The noise reduction unit performs noise reduction, for example, by using a Wiener-filter based noise reduction algorithm, a spectral subtraction noise reduction algorithm, an auditory transform based noise reduction algorithm, or a model based noise reduction algorithm. The noise reduction unit performs noise reduction in the multiple frequency sub-bands employed for sub-band adaptive beamforming by the analysis filter bank of the adaptive beamforming unit.
`
[0013] The microphone array system disclosed herein, comprising the microphone array with an arbitrary number of sound sensors positioned in arbitrary configurations, can be implemented in handheld devices, for example, the iPad® of Apple Inc., the iPhone® of Apple Inc., smart phones, tablet computers, laptop computers, etc. The microphone array system disclosed herein can further be implemented in conference phones, video conferencing applications, or any device or equipment that needs better speech inputs.
`
BRIEF DESCRIPTION OF THE DRAWINGS

[0014] The foregoing summary, as well as the following detailed description of the invention, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, exemplary constructions of the invention are shown in the drawings. However, the invention is not limited to the specific methods and instrumentalities disclosed herein.
`
[0015] FIG. 1 illustrates a method for enhancing a target sound signal from multiple sound signals.

[0016] FIG. 2 illustrates a system for enhancing a target sound signal from multiple sound signals.

[0017] FIG. 3 exemplarily illustrates a microphone array configuration showing a microphone array having N sound sensors arbitrarily distributed on a circle.
`
`
`
`
[0018] FIG. 4 exemplarily illustrates a graphical representation of a filter-and-sum beamforming algorithm for determining the output of the microphone array having N sound sensors.

[0019] FIG. 5 exemplarily illustrates distances between an origin of the microphone array and sound sensor M1 and sound sensor M3 in the circular microphone array configuration, when the target sound signal is at an angle θ from the Y-axis.
`
[0020] FIG. 6A exemplarily illustrates a table showing the distance between each sound sensor in a circular microphone array configuration from the origin of the microphone array, when the target sound source is in the same plane as that of the microphone array.

[0021] FIG. 6B exemplarily illustrates a table showing the relationship of the position of each sound sensor in the circular microphone array configuration and its distance to the origin of the microphone array, when the target sound source is in the same plane as that of the microphone array.

[0022] FIG. 7A exemplarily illustrates a graphical representation of a microphone array, when the target sound source is in a three dimensional plane.

[0023] FIG. 7B exemplarily illustrates a table showing the delay between each sound sensor in a circular microphone array configuration and the origin of the microphone array, when the target sound source is in a three dimensional plane.

[0024] FIG. 7C exemplarily illustrates a three dimensional working space of the microphone array, where the target sound signal is incident at an elevation angle Ψ < 0.

[0025] FIG. 8 exemplarily illustrates a method for estimating a spatial location of the target sound signal from the target sound source by a sound source localization unit using a steered response power-phase transform.
`
`
`
`
[0026] FIG. 9A exemplarily illustrates a graph showing the value of the steered response power-phase transform for every 10°.

[0027] FIG. 9B exemplarily illustrates a graph representing the estimated target sound signal from the target sound source.

[0028] FIG. 10 exemplarily illustrates a system for performing adaptive beamforming by an adaptive beamforming unit.

[0029] FIG. 11 exemplarily illustrates a system for sub-band adaptive filtering.

[0030] FIG. 12 exemplarily illustrates a graphical representation showing the performance of a perfect reconstruction filter bank.

[0031] FIG. 13 exemplarily illustrates a block diagram of a noise reduction unit that performs noise reduction using a Wiener-filter based noise reduction algorithm.
`
[0032] FIG. 14 exemplarily illustrates a hardware implementation of the microphone array system.

[0033] FIGS. 15A-15C exemplarily illustrate a conference phone comprising an eight-sensor microphone array.

[0034] FIG. 16A exemplarily illustrates a layout of an eight-sensor microphone array for a conference phone.

[0035] FIG. 16B exemplarily illustrates a graphical representation of eight spatial regions to which the eight-sensor microphone array of FIG. 16A responds.
`
`
`
`
[0036] FIGS. 16C-16D exemplarily illustrate computer simulations showing the steering of the directivity patterns of the eight-sensor microphone array of FIG. 16A in the directions of 15° and 60° respectively, in the frequency range 300 Hz to 5 kHz.

[0037] FIGS. 16E-16L exemplarily illustrate graphical representations showing the directivity patterns of the eight-sensor microphone array of FIG. 16A in each of the eight spatial regions, where each directivity pattern is an average response from 300 Hz to 5000 Hz.
`
[0038] FIG. 17A exemplarily illustrates a graphical representation of four spatial regions to which a four-sensor microphone array for a wireless handheld device responds.

[0039] FIGS. 17B-17I exemplarily illustrate computer simulations showing the directivity patterns of the four-sensor microphone array of FIG. 17A with respect to azimuth and frequency.

[0040] FIGS. 18A-18B exemplarily illustrate a microphone array configuration for a tablet computer.
`
[0041] FIG. 18C exemplarily illustrates an acoustic beam formed using the microphone array configuration of FIGS. 18A-18B according to the method and system disclosed herein.

[0042] FIGS. 18D-18G exemplarily illustrate graphs showing processing results of the adaptive beamforming unit and the noise reduction unit for the microphone array configuration of FIG. 18B, in both a time domain and a spectral domain for the tablet computer.

[0043] FIGS. 19A-19F exemplarily illustrate tables showing different microphone array configurations and the corresponding values of the delay τn for the sound sensors in each of the microphone array configurations.
`
`
`
`
DETAILED DESCRIPTION OF THE INVENTION

[0044] FIG. 1 illustrates a method for enhancing a target sound signal from multiple sound signals. As used herein, the term "target sound signal" refers to a desired sound signal from a desired or target sound source, for example, a person's speech that needs to be enhanced. The method disclosed herein provides 101 a microphone array system comprising an array of sound sensors positioned in an arbitrary configuration, a sound source localization unit, an adaptive beamforming unit, and a noise reduction unit. The sound source localization unit, the adaptive beamforming unit, and the noise reduction unit are in operative communication with the array of sound sensors. The microphone array system disclosed herein employs the array of sound sensors positioned in an arbitrary configuration, the sound source localization unit, the adaptive beamforming unit, and the noise reduction unit for enhancing a target sound signal by acoustic beamforming in the direction of the target sound signal in the presence of ambient noise signals.
`
[0045] The array of sound sensors, herein referred to as a "microphone array", comprises multiple or an arbitrary number of sound sensors, for example, microphones, operating in tandem. The microphone array refers to an array of an arbitrary number of sound sensors positioned in an arbitrary configuration. The sound sensors are transducers that detect sound and convert the sound into electrical signals. The sound sensors are, for example, condenser microphones, piezoelectric microphones, etc.
`
[0046] The sound sensors receive 102 sound signals from multiple disparate sound sources and directions. The target sound source that emits the target sound signal is one of the disparate sound sources. As used herein, the term "sound signals" refers to composite sound energy from multiple disparate sound sources in an environment of the microphone array. The sound signals comprise the target sound signal from the target sound source and the ambient noise signals. The sound sensors are positioned in an arbitrary planar configuration, herein referred to as a "microphone array configuration",
`
`
`
`
for example, a linear configuration, a circular configuration, any arbitrarily distributed coplanar array configuration, etc. By employing beamforming according to the method disclosed herein, the microphone array provides a higher response to the target sound signal received from a particular direction than to the sound signals from other directions. A plot of the response of the microphone array versus frequency and direction of arrival of the sound signals is referred to as a directivity pattern of the microphone array.
`
[0047] The sound source localization unit estimates 103 a spatial location of the target sound signal from the received sound signals. In an embodiment, the sound source localization unit estimates the spatial location of the target sound signal from the target sound source, for example, using a steered response power-phase transform as disclosed in the detailed description of FIG. 8.
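The steered response power-phase transform can be sketched as below. The `taus_for_angle` callback is a hypothetical helper standing in for the geometry-dependent delay tables described elsewhere in this specification, and the grid-search form is a minimal illustration rather than the patent's exact procedure:

```python
import numpy as np

def srp_phat(frames, taus_for_angle, angles, nfft=512):
    """Estimate the direction of the target sound signal by steered
    response power with phase transform (SRP-PHAT). `frames` holds one
    time-aligned frame per sound sensor; `taus_for_angle(angle)` returns
    the per-sensor steering delays in samples for a candidate angle
    (array-geometry dependent, assumed supplied by the caller). Returns
    the candidate angle with the largest steered response power.
    """
    # PHAT weighting: keep only the phase of each sensor spectrum
    specs = [np.fft.rfft(f, nfft) for f in frames]
    specs = [s / (np.abs(s) + 1e-12) for s in specs]
    bins = np.arange(nfft // 2 + 1)
    powers = []
    for angle in angles:
        taus = taus_for_angle(angle)
        # undo each sensor's delay and sum the spectra coherently
        steered = sum(s * np.exp(2j * np.pi * bins * tau / nfft)
                      for s, tau in zip(specs, taus))
        powers.append(float(np.sum(np.abs(steered) ** 2)))
    return angles[int(np.argmax(powers))]
```

Scanning candidate angles on a 10° grid, as in FIG. 9A, and picking the peak of the steered response power yields the estimated source direction.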
`
[0048] The adaptive beamforming unit performs adaptive beamforming 104 by steering the directivity pattern of the microphone array in a direction of the spatial location of the target sound signal, thereby enhancing the target sound signal and partially suppressing the ambient noise signals. Beamforming refers to a signal processing technique used in the microphone array for directional signal reception, that is, spatial filtering. This spatial filtering is achieved by using adaptive or fixed methods. Spatial filtering refers to separating two signals with overlapping frequency content that originate from different spatial locations.
`
[0049] The noise reduction unit performs noise reduction by further suppressing 105 the ambient noise signals and thereby further enhancing the target sound signal. The noise reduction unit performs the noise reduction, for example, by using a Wiener-filter based noise reduction algorithm, a spectral subtraction noise reduction algorithm, an auditory transform based noise reduction algorithm, or a model based noise reduction algorithm.
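Of the listed options, the Wiener-filter approach reduces to a per-band gain. A minimal sketch, assuming the noisy and noise power spectra are already available; the noise-estimate tracking and smoothing a real system needs, and the spectral-floor value, are assumptions:

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, floor=0.1):
    """Per-band Wiener-style gain G = SNR / (1 + SNR), computed from the
    noisy-speech and noise power spectra, with a spectral floor to limit
    musical noise. Illustrative sketch of Wiener-filter based noise
    reduction, not the patent's exact algorithm.
    """
    snr = np.maximum(noisy_psd / np.maximum(noise_psd, 1e-12) - 1.0, 0.0)
    return np.maximum(snr / (1.0 + snr), floor)
```

The enhanced spectrum in each sub-band is then the gain times the noisy spectrum; spectral subtraction differs mainly in the gain rule applied per band.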
`
[0050] FIG. 2 illustrates a system 200 for enhancing a target sound signal from multiple sound signals. The system 200, herein referred to as a "microphone array system",
`
`
`
`
comprises the array 201 of sound sensors positioned in an arbitrary configuration, the sound source localization unit 202, the adaptive beamforming unit 203, and the noise reduction unit 207.
`
[0051] The array 201 of sound sensors, herein referred to as the "microphone array", is in operative communication with the sound source localization unit 202, the adaptive beamforming unit 203, and the noise reduction unit 207. The microphone array 201 is, for example, a linear array of sound sensors, a circular array of sound sensors, or an arbitrarily distributed coplanar array of sound sensors. The microphone array 201 achieves directional gain in any preferred spatial direction and frequency band while suppressing signals from other spatial directions and frequency bands. The sound sensors receive the sound signals comprising the target sound signal and ambient noise signals from multiple disparate sound sources, where one of the disparate sound sources is the target sound source that emits the target sound signal.
`
[0052] The sound source localization unit 202 estimates the spatial location of the target sound signal from the received sound signals. In an embodiment, the sound source localization unit 202 uses, for example, a steered response power-phase transform for estimating the spatial location of the target sound signal from the target sound source.
`
[0053] The adaptive beamforming unit 203 steers the directivity pattern of the microphone array 201 in a direction of the spatial location of the target sound signal, thereby enhancing the target sound signal and partially suppressing the ambient noise signals. The adaptive beamforming unit 203 comprises a fixed beamformer 204, a blocking matrix 205, and an adaptive filter 206 as disclosed in the detailed description of FIG. 10. The fixed beamformer 204 performs fixed beamforming by filtering and summing output sound signals from each of the sound sensors in the microphone array 201 as disclosed in the detailed description of FIG. 4. In an embodiment, the adaptive filter 206 is implemented as a set of sub-band adaptive filters. The adaptive filter 206 comprises an analysis filter bank 206a, an adaptive filter matrix 206b, and a synthesis filter bank 206c as disclosed in the detailed description of FIG. 11.
`
`
`
`
[0054] The noise reduction unit 207 further suppresses the ambient noise signals for further enhancing the target sound signal. The noise reduction unit 207 is, for example, a Wiener-filter based noise reduction unit, a spectral subtraction noise reduction unit, an auditory transform based noise reduction unit, or a model based noise reduction unit.
`
[0055] FIG. 3 exemplarily illustrates a microphone array configuration showing a microphone array 201 having N sound sensors 301 arbitrarily distributed on a circle 302 with a diameter "d", where "N" refers to the number of sound sensors 301 in the microphone array 201. Consider an example where N = 4, that is, there are four sound sensors 301 M0, M1, M2, and M3 in the microphone array 201. Each of the sound sensors 301 is positioned at an acute angle "Φn" from a Y-axis, where Φn > 0 and n = 0, 1, 2, ..., N−1. In an example, the sound sensor 301 M0 is positioned at an acute angle Φ0 from the Y-axis; the sound sensor 301 M1 is positioned at an acute angle Φ1 from the Y-axis; the sound sensor 301 M2 is positioned at an acute angle Φ2 from the Y-axis; and the sound sensor 301 M3 is positioned at an acute angle Φ3 from the Y-axis. A filter-and-sum beamforming algorithm determines the output "y" of the microphone array 201 having N sound sensors 301 as disclosed in the detailed description of FIG. 4.
`
[0056] FIG. 4 exemplarily illustrates a graphical representation of the filter-and-sum beamforming algorithm for determining the output of the microphone array 201 having N sound sensors 301. Consider an example where the target sound signal from the target sound source is at an angle θ with a normalized frequency ω. The microphone array configuration is arbitrary in a two dimensional plane, for example, a circular array configuration where the sound sensors 301 M0, M1, M2, ..., MN−1 of the microphone array 201 are arbitrarily positioned on a circle 302. The sound signals received by each of the sound sensors 301 in the microphone array 201 are inputs to the microphone array 201. The adaptive beamforming unit 203 employs the filter-and-sum beamforming algorithm that applies independent weights to each of the inputs to the microphone array 201 such that the directivity pattern of the microphone array 201 is steered to the spatial
`
`
`
`
location of the target sound signal as determined by the sound source localization unit 202.
`
[0057] The output "y" of the microphone array 201 having N sound sensors 301 is the filter-and-sum of the outputs of the N sound sensors 301. That is,

y = Σ_{n=0}^{N−1} w_n^T x_n,

where x_n is the output of the (n+1)th sound sensor 301, and w_n^T denotes the transpose of a length-L filter w_n applied to the (n+1)th sound sensor 301.
`
[0058] The spatial directivity pattern H(ω, θ) for the target sound signal from angle θ with normalized frequency ω is defined as:

H(ω, θ) = Y(ω, θ) / X(ω, θ) = ( Σ_{n=0}^{N−1} W_n(ω) X_n(ω, θ) ) / X(ω, θ)    (1)

where X is the signal received at the origin of the circular microphone array 201 and W_n is the frequency response of the real-valued finite impulse response (FIR) filter w_n. If the target sound source is far enough away from the microphone array 201, the difference between the signal received by the (n+1)th sound sensor 301 "x_n" and the origin of the microphone array 201 is a delay τ_n; that is, X_n(ω, θ) = X(ω, θ) e^{−jωτ_n}.
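A time-domain sketch of the filter-and-sum operation described above, for illustration only; the design of the FIR filters and the steering delays is determined elsewhere in the system:

```python
import numpy as np

def filter_and_sum(x, w):
    """Filter-and-sum beamformer: each sensor signal x_n is passed
    through its length-L FIR filter w_n and the filtered outputs are
    summed to form the beamformer output y.

    x: (N, T) array of sensor signals; w: (N, L) array of FIR filters.
    Returns the length-T beamformer output.
    """
    N, T = x.shape
    y = np.zeros(T)
    for xn, wn in zip(x, w):
        y += np.convolve(xn, wn)[:T]   # filter each sensor, then sum
    return y
```

With unit-impulse filters the operation degenerates to a plain sum of the sensor signals, which is the delay-and-sum special case.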
`
[0059] FIG. 5 exemplarily illustrates distances between an origin of the microphone array 201 and the sound sensor 301 M1 and the sound sensor 301 M3 in the circular microphone array configuration, when the target sound signal is at an angle θ from the Y-axis. The microphone array system 200 disclosed herein can be used with an arbitrary directivity pattern for arbitrarily distributed sound sensors 301. For any specific microphone array configuration, the parameter that is defined to achieve the beamformer coefficients is the value of the delay τ_n for each sound sensor 301. To define the value of τ_n, an origin or a reference point of the microphone array 201 is defined; then the distance d_n between each sound sensor 301 and the origin is measured, and then the angle Φ_n of each sound sensor 301 biased from a vertical axis is measured.
`
`
`
`
[0060] For example, the angle between the Y-axis and the line joining the origin and the sound sensor 301 M0 is Φ0, the angle between the Y-axis and the line joining the origin and the sound sensor 301 M1 is Φ1, the angle between the Y-axis and the line joining the origin and the sound sensor 301 M2 is Φ2, and the angle between the Y-axis and the line joining the origin and the sound sensor 301 M3 is Φ3. The distance between the origin O and the sound sensor 301 M1, and between the origin O and the sound sensor 301 M3, when the incoming target sound signal from the target sound source is at an angle θ from the Y-axis, is denoted as r1 and r3, respectively.
`
[0061] For purposes of illustration, the detailed description refers to a circular microphone array configuration; however, the scope of the microphone array system 200 disclosed herein is not limited to the circular microphone array configuration but may be extended to include a linear array configuration, an arbitrarily distributed coplanar array configuration, or a microphone array configuration with any arbitrary geometry.
`
[0062] FIG. 6A exemplarily illustrates a table showing the distance between each sound sensor 301 in a circular microphone array configuration from the origin of the microphone array 201, when the target sound source is in the same plane as that of the microphone array 201. The distance measured in meters and the corresponding delay (τ) measured in number of samples are exemplarily illustrated in FIG. 6A. In an embodiment where the target sound source that emits the target sound signal is in a two dimensional plane, the delay (τ) between each of the sound sensors 301 and the origin of the microphone array 201 is determined as a function of the distance (d) between each of the sound sensors 301 and the origin, a predefined angle (Φ) between each of the sound sensors 301 and a reference axis (Y) as exemplarily illustrated in FIG. 5, and an azimuth angle (θ) between the reference axis (Y) and the target sound signal. The determined delay (τ