`
`(12)
`
`TEPZZ_48866_B_T
`EP 1 488 661 B1
`
`(11)
`
`EUROPEAN PATENT SPECIFICATION
`
`(45) Date of publication and mention
`of the grant of the patent:
`10.12.2014 Bulletin 2014/50
`
`(21) Application number: 03713371.7
`
`(22) Date of filing: 05.02.2003
`
`(51) Int Cl.:
`H04R 3/00 (2006.01)
`
`H04R 25/00 (2006.01)
`
`(86) International application number:
`PCT/US2003/003476
`
`(87) International publication number:
`WO 2003/067922 (14.08.2003 Gazette 2003/33)
`
`(54) REDUCING NOISE IN AUDIO SYSTEMS
`
`RAUSCHVERMINDERUNG IN AUDIOSYSTEMEN
`
`REDUCTION DE BRUIT DANS DES SYSTEMES AUDIO
`
`(84) Designated Contracting States:
`DE FR GB
`
`(30) Priority: 05.02.2002 US 354650 P
`12.07.2002 US 193825
`
`(43) Date of publication of application:
`22.12.2004 Bulletin 2004/52
`
`(73) Proprietor: MH Acoustics, LLC
Summit, NJ 07901 (US)
`
`(72) Inventor: ELKO, Gary, W.
`Summit, NJ 07901 (US)
`
`(74) Representative: Madgwick, Paul Roland et al
`RUSCHKE HARTMANN MADGWICK & SEIDE
`Patent- und Rechtsanwälte
`Postfach 86 06 29
`81633 München (DE)
`
`(56) References cited:
`WO-A-95/16259
`US-A- 5 602 962
`
`JP-A- H06 269 084
`
`
`• PATENT ABSTRACTS OF JAPAN vol. 2000, no.
`22, 9 March 2001 (2001-03-09) -& JP 2001 124621
`A (MATSUSHITA ELECTRIC IND CO LTD), 11 May
`2001 (2001-05-11)
`• PATENT ABSTRACTS OF JAPAN vol. 1995, no.
`01, 28 February 1995 (1995-02-28) -& JP 06 303689
`A (OKI ELECTRIC IND CO LTD), 28 October 1994
`(1994-10-28)
`
`Note: Within nine months of the publication of the mention of the grant of the European patent in the European Patent
`Bulletin, any person may give notice to the European Patent Office of opposition to that patent, in accordance with the
`Implementing Regulations. Notice of opposition shall not be deemed to have been filed until the opposition fee has been
`paid. (Art. 99(1) European Patent Convention).
`
`
`Description
`
`BACKGROUND OF THE INVENTION
`
`5
`
`Field of the Invention
`
`[0001] The present invention relates to acoustics, and, in particular, to techniques for reducing noise, such as wind
`noise, generated by turbulent airflow over microphones.
`
`10
`
`Description of the Related Art
`
`15
`
`20
`
`25
`
`30
`
`35
`
`40
`
`45
`
`50
`
`55
`
[0002] For many years, wind-noise sensitivity of microphones has been a major problem for outdoor recordings. A related problem is the susceptibility of microphones to the speech jet, i.e., the flow of air from the talker’s mouth. Recording studios typically rely on special windscreen socks that either cover the microphone or are placed between the mouth and the microphone. For outdoor recording situations where wind noise is an issue, microphones are typically shielded by acoustically transparent foam or thick fuzzy materials. The purpose of these windscreens is to reduce, or even eliminate, the airflow over the active microphone element, and thereby to reduce, or even eliminate, the noise associated with that airflow that would otherwise appear in the audio signal generated by the microphone, while allowing the desired acoustic signal to pass to the microphone without significant modification.
`[0003]
`In patent document US 5,602,963 there is disclosed a speech processing arrangement having at least two
`microphones. Signals from the microphones are delayed, weighted by weight factors, and summed, where the resulting
`signal is adaptively filtered to reduce noise components in the microphone signals. WO 95/16259 A discloses a noise
`reduction system that generates sums and differences of speech signals from different microphones to generate filter
`coefficients for a Wiener filter used to reduce noise in a combined speech signal. Patent abstracts of Japan vol. 2000,
`no. 22,9 March 2001 (2001-03-09) -& JP 2001 124621 A (Matsushita Electric Ind Co. Ltd), 11 May 2001 (2001-05-11)
`disclose a noise eliminating device that applies a fast Fourier transform to a main acoustic signal to predict noise
`components that are subtracted from the corresponding acoustic frequency spectrum to provide a noise elimination
`acoustic frequency spectrum.
`[0004]
`JP 06 269084 (D4) discloses a technique for controlling a filter used to reduce noise in audio signals generated
`by a microphone. In particular, in the context of Fig. 16, D4 teaches a technique for controlling the cut-off frequency of
`high-pass filter (HPF) 16 to reduce wind noise in the audio signal generated by microphone 11, where controller 33 sets
`the cut-off frequency of HPF16 based on the output of level ratio sensing circuit 32 (see abstract). Level ratio sensing
`circuit 32 senses the ratio between the level of audio signal from high-pass filter 31 and the level of the wind noise signal
`from subtraction circuit 15, where controller 33 sets the cut-off frequency for HPF16 based on the sensed ratio (see,
`especially, paragraph [0042]).
`
SUMMARY OF THE INVENTION
`
`[0005] The present invention as defined in claims 1, 2 is related to signal processing techniques that attenuate noise,
`such as turbulent wind-noise, in audio signals without necessarily relying on the mechanical windscreens of the prior
`art. In particular, according to certain embodiments of the present invention, two or more microphones generate audio
signals that are used to determine the portion of the pickup signal that is due to wind-induced noise. These embodiments
`exploit the notion that wind-noise signals are caused by convective airflow whose speed of propagation is much less
`than that of the desired acoustic signals. As a result, the difference in the output powers of summed and subtracted
`signals of closely spaced microphones can be used to estimate the ratio of turbulent convective wind-noise propagation
`relative to acoustic propagation. Since convective turbulence coherence diminishes quickly with distance, subtracted
`signals between microphones are of similar power to summed signals. However, signals propagating at acoustic speeds
will result in a relatively large difference in the summed and subtracted signal powers. This property is utilized to drive a
`time-varying suppression filter that is tailored to reduce signals that have
`[0006] much lower propagation speeds and/or a rapid loss in signal coherence as a function of distance, e.g., noise
`resulting from relatively slow airflow.
`[0007] According to one embodiment, the present invention is a method and an audio system for processing audio
`signals generated by two or more microphones receiving acoustic signals. A signal processor determines a portion of
`the audio signals resulting from one or more of (i) incoherence between the audio signals and (ii) one or more audio-
`signal sources having propagation speeds different from the acoustic signals. A filter filters at least one of the audio
`signals to reduce the determined portion.
`[0008] According to another embodiment, the present invention is a consumer device comprising (a) two or more
microphones configured to receive acoustic signals and to generate audio signals; (b) a signal processor configured to determine a portion of the audio signals resulting from one or more of (i) incoherence between the audio signals and (ii)
`one or more audio-signal sources having propagation speeds different from the acoustic signals; and (c) a filter configured
`to filter at least one of the audio signals to reduce the determined portion.
`[0009] According to yet another embodiment, the present invention is a method and an audio system for processing
`audio signals generated in response to a sound field by at least two microphones of an audio system. A filter filters the
`audio signals to compensate for a phase difference between the at least two microphones. A signal processor (1)
`generates a revised phase difference between the at least two microphones based on the audio signals and (2) updates,
`based on the revised phase difference, at least one calibration parameter used by the filter.
`[0010]
`In yet another embodiment, the present invention is a consumer device comprising (a) at least two microphones;
`(b) a filter configured to filter audio signals generated in response to a sound field by the at least two microphones to
`compensate for a phase difference between the at least two microphones; and (c) a signal processor configured to (1)
`generate a revised phase difference between the at least two microphones based on the audio signals; and (2) update,
`based on the revised phase difference, at least one calibration parameter used by the filter.
`
`5
`
`10
`
`15
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`[0011] Other aspects, features, and advantages of the present invention will become more fully apparent from the
`following detailed description, the appended claims, and the accompanying drawings in which like reference numerals
`identify similar or identical elements.
`
`20
`
`25
`
`30
`
`35
`
`40
`
`Fig. 1 shows a diagram of a first-order microphone composed of two zero-order microphones;
`Fig. 2 shows a graph of Corcos model coherence as a function of frequency for 2-cm microphone spacing and a
`convective speed of 5 m/s;
`Fig. 3 shows a graph of the difference-to-sum power ratios for acoustic and turbulent signals as a function of frequency
`for 2-cm microphone spacing and a convective speed of 5 m/s;
`Fig. 4 illustrates noise suppression using a single-channel Wiener filter;
`Fig. 5 illustrates a single-input/single-output noise suppression system that is essentially equivalent to a system
`having an array with two closely spaced omnidirectional microphones;
`Fig. 6 shows the amount of noise suppression that is applied by the system of Fig. 5 as a function of coherence
`between the two microphone signals;
`Fig. 7 shows a graph of the output signal for a single microphone before and after processing to reject turbulence
`using propagating acoustic gain settings;
`Fig. 8 shows a graph of the spatial coherence function for a diffuse propagating acoustic field for 2-cm spaced
`microphones, shown compared with the Corcos model coherence of Fig. 2 and for a single planewave;
`Fig. 9 shows a block diagram of an audio system, according to one embodiment of the present invention;
`Fig. 10 shows a block diagram of turbulent wind-noise attenuation processing using two closely spaced, pressure
`(omnidirectional) microphones, according to one implementation of the audio system of Fig. 9;
`Fig. 11 shows a block diagram of turbulent wind-noise attenuation processing using a directional microphone and
`a pressure (omnidirectional) microphone, according to an alternative implementation of the audio system of Fig. 9;
`Fig. 12 shows a block diagram of an audio system having two omnidirectional microphones, according to an alter-
`native embodiment of the present invention; and
`Fig. 13 shows a flowchart of the processing of the audio system of Fig. 12, according to one embodiment of the
`present invention.
`
`45
`
`DETAILED DESCRIPTION
`
`Differential Microphone Arrays
`
`50
`
`55
`
`[0012] A differential microphone array is a configuration of two or more audio transducers or sensors (e.g., microphones)
`whose audio output signals are combined to provide one or more array output signals. As used in this specification, the
`term "first-order" applies to any microphone array whose sensitivity is proportional to the first spatial derivative of the
`acoustic pressure field. The term "nth-order" is used for microphone arrays that have a response that is proportional to
`a linear combination of the spatial derivatives up to and including n. Typically, differential microphone arrays combine
`the outputs of closely spaced transducers in an alternating sign fashion.
`[0013] Although realizable differential arrays only approximate the true acoustic pressure differentials, the equations
`for the general-order spatial differentials provide significant insight into the operation of these systems. To begin, the
`case for an acoustic planewave propagating with wavevector k is examined. The acoustic pressure field for the planewave
`case can be written according to Equation (1) as follows:
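
[Equation (1), reconstructed here in standard planewave notation consistent with the definitions that follow:]

p(\mathbf{k}, \mathbf{r}, t) = P_0 \, e^{j(\omega t - \mathbf{k} \cdot \mathbf{r})}     (1)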
`
`3
`
`Page 3 of 32
`
`
`
`5
`
`10
`
`15
`
`20
`
`25
`
`30
`
`35
`
`40
`
`45
`
`50
`
`55
`
`EP 1 488 661 B1
`
`where Po is the planewave amplitude, k is the acoustic wavevector, r is the position vector relative to the selected origin,
`and ω is the angular frequency of the planewave. Dropping the time dependence and taking the nth-order spatial derivative
`yields Equation (2) as follows:
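
[Equation (2), a reconstruction consistent with the definitions that follow, differentiating along the direction of r:]

\frac{d^n p(\mathbf{k}, \mathbf{r})}{d r^n} = P_0 \, (-jk\cos\theta)^n \, e^{-jkr\cos\theta}     (2)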
`
where θ is the angle between the wavevector k and the position vector r, r = |r|, and k = |k| = 2π/λ, where λ is the acoustic wavelength. The planewave solution is valid for the response to sources that are "far" from the microphone array, where "far" means distances that are many times the square of the relevant source dimension divided by the acoustic wavelength. The frequency response of an nth-order differential microphone is a high-pass system with a slope of 6n dB per octave. In general, to realize an array that is sensitive to the nth derivative of the incident acoustic pressure field, m pth-order transducers are required, where m + p - 1 = n. For example, a first-order differential microphone requires two zero-order sensors (e.g., two pressure-sensing microphones).
`[0014] For a planewave with amplitude P0 and wavenumber k incident on a two-element differential array, as shown
`in Fig. 1, the output can be written according to Equation (3) as follows:
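
[Equation (3), a reconstruction assuming the array output is the simple difference of the two sensor signals:]

E_1(\theta, \omega) = P_0 \left(1 - e^{-jkd\cos\theta}\right)     (3)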
`
`where d is the inter-element spacing and the subscript indicates a first-order differential array. If it is now assumed that
`the spacing d is much smaller than the acoustic wavelength, Equation (3) can be rewritten as Equation (4) as follows:
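
[Equation (4), the small-spacing (kd ≪ 1) approximation of Equation (3); reconstruction:]

\left| E_1(\theta, \omega) \right| \approx P_0 \, k d \cos\theta     (4)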
`
`[0015] The case where a delay is introduced between these two zero-order sensors is now examined. For a planewave
`incident on this new array, the output can be written according to Equation (5) as follows:
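
[Equation (5), a reconstruction with the delay τ applied to one sensor and the substitution k = ω/c:]

E_1(\theta, \omega) = P_0 \left(1 - e^{-j\omega\left(\tau + (d/c)\cos\theta\right)}\right)     (5)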
`
`where τ is equal to the delay applied to the signal from one sensor, and the substitution k=ω/c has been made, where
c is the speed of sound. If a small spacing is again assumed (kd ≪ π and ωτ ≪ π), then Equation (5) can be written as
`Equation (6) as follows:
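
[Equation (6), the small-argument approximation of Equation (5); reconstruction:]

\left| E_1(\theta, \omega) \right| \approx P_0 \, \omega \left(\tau + \frac{d}{c}\cos\theta\right)     (6)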
`
` One thing to notice about Equation (6) is that the first-order array has first-order high-pass frequency dependence. The
`term in the parentheses in Equation (6) contains the array directional response.
`[0016] Since nth-order differential transducers have responses that are proportional to the nth power of the wavenumber,
these transducers are very sensitive to high wavenumber acoustic propagation. One acoustic field that has high-wavenumber acoustic propagation is in turbulent fluid flow where the convective velocity is much less than the speed of sound.
`As a result, prior-art differential microphones have typically required careful shielding to minimize the hypersensitivity
`to wind turbulence.
`
`5
`
`Turbulent Wind-Noise Models
`
`10
`
`15
`
`20
`
`25
`
`30
`
`35
`
`40
`
`45
`
`50
`
`55
`
`[0017] The subject of modeling turbulent fluid flow has been an active area of research for many decades. Most of
`the research has been in underwater acoustics for military applications. With the rapid growth of commercial airline
`carriers, there has been a great amount of work related to turbulent flow excitation of aircraft fuselage components. Due
`to the complexity of the equations of motion describing turbulent fluid flow, only rough approximations and relatively
`simple statistical models have been suggested to describe this complex chaotic fluid flow. One model that describes the
`coherence of the pressure fluctuations in a turbulent boundary layer along the plane of flow is described in G.M. Corcos,
`The structure of the turbulent pressure field in boundary layer flows, J. Fluid Mech., 18: pp 353-378, 1964. Although this
`model was developed for turbulent pressure fluctuation over a rigid half-plane, the simple Corcos model can be used to
`express the amount of spatial filtering of the turbulent jet from a talker. Thus, this model is used to predict the spatial
`coherence of the pressure-fluctuation turbulence for both speech jets as well as free-space turbulence.
`[0018] The spatial characteristics of the pressure fluctuations can be expressed by the space-frequency cross-spectrum
`function G according to Equation (7) as follows:
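
[Equation (7), reconstructed as the standard definition of the cross-spectrum as the Fourier transform of the cross-correlation:]

G(\psi, \omega) = \int_{-\infty}^{\infty} R(\psi, \tau) \, e^{-j\omega\tau} \, d\tau     (7)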
`
`where R is the spatial cross-correlation function between the two microphone signals, ω is the angular frequency, and
ψ is the general displacement variable, which is directly related to the distance between the measurement points. The coherence function γ is defined as the cross-spectrum normalized by the auto power-spectra of the two channels according to Equation (8) as follows:
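
[Equation (8), reconstructed as the standard coherence definition:]

\gamma(\psi, \omega) = \frac{G_{12}(\psi, \omega)}{\sqrt{G_{11}(\omega)\, G_{22}(\omega)}}     (8)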
`
`It is known that large-scale components of the acoustic pressure field lose coherence slowly during the convection with
`free-stream velocity U, while the small-scale components lose coherence in distances proportional to their wavelengths.
`Corcos assumed that the stream-wise coherence decays spatially as a function of the similarity variable ωr/Uc, where
`Uc is the convective speed and is typically related to the free-stream velocity U as Uc = 0.8U. The Corcos model can be
`mathematically stated by Equation (9) as follows:
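
[Equation (9), the Corcos coherence model in its standard form; reconstruction:]

\gamma(r, \omega) = e^{-\alpha \, \omega |r| / U_c}     (9)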
`
`where α is an experimentally determined decay constant (e.g., α=0.125), and r is the displacement (distance) variable.
A plot of this function is shown in Fig. 2. The rapid decay of spatial coherence causes the difference in power between the sum and difference signals of closely spaced pressure (zero-order) microphones to be much smaller than for an acoustic planewave propagating along the microphone array axis. As a result, it is possible to detect whether the acoustic signals
`transduced by the microphones are turbulent-like or propagating acoustic signals by comparing the sum and difference
`signal powers. Fig. 3 shows the difference-to-sum power ratios (i.e., the ratio of the difference signal power to the sum
`signal power) for acoustic and turbulent signals for a pair of omnidirectional microphones spaced at 2 cm in a convective
`fluid flow propagating at 5 m/s. It is clearly seen in this figure that there is a relatively wide difference between the desired
`acoustic and turbulent difference-to-sum power ratios. The ratio difference becomes more pronounced at low frequencies
since the differential microphone output for desired acoustic signals rolls off at -6 dB/octave, while the predicted, undesired turbulent component rolls off at a much slower rate.
`[0019]
`If sound arrives from off-axis from the microphone array, the difference-to-sum power ratio becomes even
`smaller. (It has been assumed that the coherence decay is similar in directions that are normal to the flow). The closest
`the sum and difference powers come to each other is for acoustic signals propagating along the microphone axis (e.g.,
`when θ=0 in Fig. 1). Therefore, the power ratio for acoustic signals will be less than or equal to the power ratio for acoustic
`signals arriving along the microphone axis. This limiting approximation is important to the present invention’s detection
`and resulting suppression of signals that are identified as turbulent.
`
`Single-Channel Wiener Filter
`
`[0020]
`It was shown in the previous section that one way to detect turbulent energy flow over a pair of closely-spaced
`microphones is to compare the scalar sum and difference signal power levels. In this section, it is shown how to use the
`measured power ratio to suppress the undesired wind-noise energy.
`[0021] One common technique used in noise reduction for single input systems is the well-known technique of spectral
`subtraction. See, e.g., S. F. Boll, Suppression of acoustic noise in speech using spectral subtraction, IEEE Trans. Acoust.
`Signal Proc., vol. ASSP-27, Apr. 1979. The basic premise of the spectral subtraction algorithm is to parametrically
`estimate the optimal Wiener filter for the desired speech signal. The problem can be formulated by defining a noise-
`corrupted speech signal y(n) according to Equation (10) as follows:
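
[Equation (10), reconstruction:]

y(n) = s(n) + v(n)     (10)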
`
`where s(n) is the desired signal and v(n) is the noise signal.
`[0022] Fig. 4 illustrates noise suppression using a single-channel Wiener filter. The optimal filter is a filter that, when
`convolved with the noisy signal y(n), yields the closest (in the mean-square sense) approximation to the desired signal
`s(n). This can be represented in equation form according to Equation (11) as follows:
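
[Equation (11), reconstruction, with h_opt denoting the impulse response of the optimal filter:]

\hat{s}(n) = h_{\mathrm{opt}}(n) * y(n)     (11)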
`
`^
`where " * " denotes convolution. The optimal filter that minimizes the mean-square difference between s(n) and s(n) is
`the Wiener filter. In the frequency domain, the result is given by Equation (12) as follows:
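
[Equation (12), the standard Wiener solution; reconstruction:]

H_{\mathrm{opt}}(\omega) = \frac{G_{ys}(\omega)}{G_{yy}(\omega)}     (12)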
`
where Gys(ω) is the cross-spectrum between the signals s(n) and y(n), and Gyy(ω) is the auto power-spectrum of the
`signal y(n). Since the noise and desired signals are assumed to be uncorrelated, the result can be rewritten according
`to Equation (13) as follows:
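
[Equation (13), reconstruction:]

H_{\mathrm{opt}}(\omega) = \frac{G_{ss}(\omega)}{G_{ss}(\omega) + G_{vv}(\omega)}     (13)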
`
`[0023] Rewriting Equation (11) into the frequency domain and substituting terms yields Equation (14) as follows:
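
[Equation (14), reconstruction:]

\hat{S}(\omega) = H_{\mathrm{opt}}(\omega) \, Y(\omega) = \frac{G_{ss}(\omega)}{G_{ss}(\omega) + G_{vv}(\omega)} \, Y(\omega)     (14)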
`
`5
`
`10
`
`15
`
`20
`
`25
`
`30
`
`35
`
`40
`
`45
`
`50
`
`55
`
`6
`
`Page 6 of 32
`
`
`
`5
`
`10
`
`15
`
`20
`
`25
`
`30
`
`35
`
`40
`
`45
`
`50
`
`55
`
`EP 1 488 661 B1
`
This result is the basic equation that is used in most spectral subtraction schemes. The variations in spectral subtraction/spectral suppression algorithms are mostly based on how the estimates of the auto power spectra of the signal and noise are made.
[0024] When speech is the desired signal, the standard approach is to use the transient nature of speech and assume a stationary (or quasi-stationary) noise background. Typical implementations use short-time Fourier analysis-and-synthesis techniques to implement the Wiener filter. See, e.g., E. J. Diethorn, "Subband Noise Reduction Methods," Acoustic Signal Processing for Telecommunication, S. L. Gay and J. Benesty, eds., Kluwer Academic Publishers, Chapter 9, pp. 155-178, Mar. 2000. Since both speech and turbulent noise excitation are non-stationary processes, one would have to implement suppression schemes that are capable of tracking time-varying signals. As such, time-varying filters should be implemented. In the frequency domain, this can be accomplished by using short-time Fourier analysis and synthesis or filter-bank structures.
`
`Multi-Channel Wiener Filter
`
`[0025] The previous section discussed the implementation of the single-channel Wiener filter. However, the use of
`microphone arrays allows for the possibility of having multiple channels. A relatively simple case is a first-order differential
microphone that utilizes two closely spaced omnidirectional microphones. This arrangement can be seen to be essentially
`equivalent to a single-input/single-output system as shown in Fig. 5, where the desired "noise-free" signal is shown as
`z(n). It is assumed that the noise signals at both microphones are uncorrelated, and thus the two noises can be added
`equivalently as a single noise source. If the added noise signal is defined as v(n) = v1(n) + v2(n), then the output from
`the second microphone can be written according to Equation (15) as follows:
`
`[0026] From the previous definition of the coherence function, it can be shown that the output noise spectrum is given
`by Equation (16) as follows:
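
[Equation (16), reconstructed as the standard coherence relation for the output noise spectrum, with γ the coherence between the two microphone signals per Equation (8):]

G_{vv}(\omega) = \left[\, 1 - |\gamma(\omega)|^2 \,\right] G_{y_2 y_2}(\omega)     (16)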
`
`and the coherent output power is given by Equation (17) as follows:
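
[Equation (17), reconstruction:]

G_{zz}(\omega) = |\gamma(\omega)|^2 \, G_{y_2 y_2}(\omega)     (17)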
`
`[0027] Thus the signal-to-noise ratio is given by Equation (18) as follows:
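
[Equation (18), reconstruction:]

\mathrm{SNR}(\omega) = \frac{|\gamma(\omega)|^2}{1 - |\gamma(\omega)|^2}     (18)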
`
`[0028] Using the expression for the Wiener filter given by Equation (13) suggests a simple Wiener-type spectral
`suppression algorithm according to Equation (19) as follows:
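
[Equation (19), reconstruction; equal to the magnitude-squared coherence:]

H(\omega) = \frac{\mathrm{SNR}(\omega)}{1 + \mathrm{SNR}(\omega)} = |\gamma(\omega)|^2     (19)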
`
[0029] Fig. 6 shows the amount of noise suppression that is applied as a function of coherence between the two microphone signals.
`[0030] One major issue with implementing a Wiener noise reduction scheme as outlined above is that typical acoustic
`signals are not stationary random processes. As a result, the estimation of the coherence function should be done over
`short time windows so as to allow tracking of dynamic changes. This problem turns out to be substantial when dealing
`with turbulent wind-noise that is inherently highly non-stationary. Fortunately, there are other ways to detect incoherent
`signals between multi-channel microphone systems with highly non-stationary noise signals. One way that is effective
`for wind-noise turbulence, slowly propagating signals, and microphone self-noise, is described in the next section.
`[0031]
`It is straightforward to extend the two-channel results presented above to any number of channels by the use
`of partial coherence functions that provide a measure of the linear dependence between a collection of inputs and
`outputs. A multi-channel least-squares estimator can also be employed for the signals that are linearly related between
`the channels.
`
`Wind-Noise Suppression
`
`[0032] The goal of turbulent wind-noise suppression is to determine what frequency components are due to turbulence
`(noise) and what components are desired acoustic signal. Combining the results of the previous sections indicates how
`to proceed. The noise power estimation algorithm is based on the difference in the powers of the sum and difference
`signals. If these differences are much smaller than the maximum predicted for acoustic signals (i.e., signals propagating
`along the axis of the microphones), then the signal may be declared turbulent and used to update the noise estimation.
`The gain that is applied can be the Wiener gain as given by Equations (14) and (19), or a weighting (preferably less than
`1) that can be uniform across frequency. In general, the gain can be any desired function of frequency.
`[0033] One possible general weighting function would be to enforce the difference-to-sum power ratio that would exist
`for acoustic signals that are propagating along the axis of the microphones. The fluctuating acoustic pressure signals
`traveling along the microphone axis can be written for both microphones as follows:
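
[A plausible reconstruction of the two pressure signals, consistent with the definitions that follow:]

p_1(t) = s(t) + v(t) + n_1(t), \qquad p_2(t) = s(t - \tau_s) + v(t - \tau_v) + n_2(t)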
`
`where τs is the delay for the propagating acoustic signal s(t), τv is the delay for the convective or slow propagating waves,
`and n1(t) and n2(t) represent microphone self-noise and/or incoherent turbulent noise at the microphones. If the signals
`are represented in the frequency domain, the power spectrum of the pressure sum (p1(t) + p2(t)) and difference signals
`(p1(t) - p2(t)) can be written as follows:
`
`and,
`
`[0034] The ratio of these factors (denoted as PR) gives the expected power ratio of the difference and sum signals
`between the microphones as follows:
`
`5
`
`10
`
`15
`
`20
`
`25
`
`30
`
`35
`
`40
`
`45
`
`50
`
`55
`
`8
`
`Page 8 of 32
`
`
`
`EP 1 488 661 B1
`
`where γc is the turbulence coherence as measured or predicted by the Corcos or other turbulence model, ϒ(ω) is the
`RMS power of the turbulent noise, and N1 and N2 represent the RMS power of the independent noise at the microphones
due to sensor self-noise. For turbulent flow where the convective wave speed is much less than the speed of sound, the difference-to-sum power ratio will be much larger (by approximately the ratio of propagation speeds) than for acoustic propagation, which moves the power ratio towards unity. Also, as discussed earlier, the convective turbulence spatial correlation function decays rapidly, and this term becomes dominant when turbulence (or independent sensor self-noise) is present, which also moves the power ratio towards unity. For a purely propagating acoustic signal traveling along the microphone axis, the power ratio is as follows:
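
[Equation (24), reconstructed as the difference-to-sum power ratio for an on-axis planewave with spacing d:]

\mathrm{PR}_a(\omega) = \frac{\left| 1 - e^{-j\omega d/c} \right|^2}{\left| 1 + e^{-j\omega d/c} \right|^2} = \tan^2\!\left(\frac{\omega d}{2c}\right)     (24)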
`
`[0035] For general orientation of a single plane-wave where the angle between the planewave and the microphone
`axis is θ,
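
[Equation (25), reconstruction for arrival angle θ:]

\mathrm{PR}_a(\omega, \theta) = \tan^2\!\left(\frac{\omega d \cos\theta}{2c}\right)     (25)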
`
`[0036] The results shown in Equations (24)-(25) lead to an algorithm for suppression of airflow turbulence and sensor
self-noise. The rapid decay of spatial coherence, or the large difference in propagation speeds, causes the difference in power between the sum and difference signals of the closely spaced pressure (zero-order) microphones to be much smaller than for an acoustic planewave propagating along the microphone array axis. As a result, it is possible to detect whether the
`acoustic signals transduced by the microphones are turbulent-like noise or propagating acoustic signals by comparing
`the sum and difference powers.
`[0037] Fig. 3 shows the difference-to-sum power ratio for a pair of omnidirectional microphones spaced at 2 cm in a
`convective fluid flow propagating at 5 m/s. It is clearly seen in this figure that there is a relatively wide difference between
the acoustic and turbulent difference-to-sum power ratios. The ratio differences become more pronounced at low frequencies since the differential microphone output rolls off at -6 dB/octave, while the predicted turbulent component rolls off at a much
`slower rate.
`[0038]
`If sound arrives from off-axis from the microphone array, the ratio of the difference-to-sum power levels becomes
`even smaller as shown in Equation (25). Note that it has been assumed that the coherence decay is similar in directions
`that are normal to the flow. The closest the sum and difference powers come to each other is for acoustic signals
propagating along the microphone axis. Therefore, the power ratio for acoustic signals will be less than or equal to that for acoustic signals arriving along the microphone axis.
`This limiting approximation is the key to preferred embodiments of the present invention relating to noise detection and
`the resulting suppression of signals that are identified as turbulent and/or noise. The proposed suppression gain SG(ω)
`can thus be stated as follows: If the measured ratio exceeds that given by Equation (25), then the output signal power
`is reduced by the difference between the measured power ratio and that predicted by Equation (25). The equation that
`implements this gain is as follows:
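
[Equation (26), reconstructed from the verbal description above, with PRa(ω) the acoustic ratio of Equation (25) and PRm(ω) the measured ratio:]

SG(\omega) = \min\!\left\{ 1, \; \frac{\mathrm{PR}_a(\omega)}{\mathrm{PR}_m(\omega)} \right\}     (26)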
`
`5
`
`10
`
`15
`
`20
`
`25
`
`30
`
`35
`
`40
`
`45
`
`50
`
`55
`
`9
`
`Page 9 of 32
`
`
`
`5
`
`10
`
`15
`
`20
`
`25
`
`30
`
`35
`
`40
`
`45
`
`50
`
`55
`
`EP 1 488 661 B1
`
`where PRm(ω) is the measured sum and difference signal power ratio.
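
By way of illustration only (this code is not part of the patent), the following Python sketch shows one way to implement the frame-based detection and suppression described by Equations (24)-(26), assuming a short-time Fourier transform per microphone, a 2-cm spacing, and a fixed maximum attenuation; the function and parameter names (wind_noise_suppression, nfft, hop, max_atten_db) are illustrative.

import numpy as np

def wind_noise_suppression(x1, x2, fs, d=0.02, c=343.0,
                           nfft=256, hop=128, max_atten_db=25.0):
    """Illustrative difference-to-sum power-ratio wind-noise suppression
    for two closely spaced omnidirectional microphone signals x1, x2."""
    window = np.hanning(nfft)
    omega = 2.0 * np.pi * np.fft.rfftfreq(nfft, 1.0 / fs)
    # Maximum difference-to-sum power ratio expected for an on-axis
    # acoustic planewave: tan^2(omega * d / (2 * c)), cf. Equation (24).
    pr_acoustic = np.tan(omega * d / (2.0 * c)) ** 2
    pr_acoustic[0] = 1e-6                        # avoid the zero at DC
    floor = 10.0 ** (-max_atten_db / 10.0)       # limit on the power gain

    out = np.zeros(len(x1))
    norm = np.zeros(len(x1)) + 1e-12
    for start in range(0, len(x1) - nfft + 1, hop):
        f1 = np.fft.rfft(window * x1[start:start + nfft])
        f2 = np.fft.rfft(window * x2[start:start + nfft])
        p_sum = np.abs(f1 + f2) ** 2 + 1e-12
        p_diff = np.abs(f1 - f2) ** 2
        pr_measured = p_diff / p_sum
        # Bins whose measured ratio exceeds the acoustic limit are treated
        # as turbulence and attenuated by PRa/PRm (a power gain), cf. Eq. (26).
        sg = np.clip(pr_acoustic / np.maximum(pr_measured, 1e-12), floor, 1.0)
        frame = np.fft.irfft(np.sqrt(sg) * f1, nfft)  # sqrt: power gain -> amplitude
        out[start:start + nfft] += window * frame
        norm[start:start + nfft] += window ** 2
    return out / norm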
[0039] Fig. 7 shows the output signal of one of the microphones of the pair before and after applying turbulent noise suppression using the weighting gain as given in Equation (25). The turbulent noise signal was generated by softly
`blowing across the microphone after saying the phrase "one, two." The reduction in turbulent noise is greater than 20
`dB. The actual suppression was limited to 25 dB since it was conjectured that this would be reasonable and that
`suppression artifacts might be audible if the suppression were too large. It is easy to see the acoustic signals corresponding
`to the words "one" and "two." This allows one to compare the before and after processing visually in the figure. One
reason that the proposed suppression technique is so effective for flow turbulence is that these signals have large power at low frequencies, a region where PRa is small.
`[0040] Another implementation that is directly related to the Wiener filter solution is to utilize the estimated coherence
`function between pairs of microphones to generate a coherence-based gain function to attenuate turbulent components.
`As indicated by Fig. 2, the coherence between microphones decays rapidly for turbulent boundary layer flow as frequency
`increases. For a diffuse sound field (e.g., uncorrelated sound arriving with equal power from all directions), the spatial
`coherence function is real and can be shown to be equal to Equation (27) as follows:
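
[Equation (27), the standard diffuse-field coherence for spacing d; reconstruction:]

\gamma(\omega) = \frac{\sin(\omega d / c)}{\omega d / c}     (27)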
`
`where r=d is the microphone spacing. The coherence function for a single propagating planewave is unity over the entire
frequency range. As more uncorrelated planewaves arriving from different directions are incorporated, the spatial coherence function converges to the value for the diffuse case as given in Equation (27). A plot of the diffuse coherence
`function of Equation (27) is shown in Fig. 8. For comparison purposes, the predicted Corcos coherence functions for 5
`m/s flow and for a single planewave are also shown.
`[0041] As indicated by Fig. 8, there is a relatively large difference in the coherence values for a propagating sound
`field and a turbulent fluid flow (5 m/s for this case). The large difference suggests that one could weight the resulting
`spectrum of the microphone output by either the coherence function itself or some weighted or processed version of the
`coherence. Since the coherence for propagating acoustic waves is essentially unity, this weighting scheme will pass the
`desired propagating acoustic signals. For turbulent propagation, the coherence (or some processed version) is low and
`weighting by this function will diminish the system output.
`
`Wind-Noise Sensitivity in Differential Microphones
`
`[0042] As described in the section entitled "Differential Microphone Arrays," the sensitivity of differential mi