`
`
`
`
`
`UNITED STATES PATENT AND TRADEMARK OFFICE
`_____________________________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`
`_____________________________
`
`
`Google LLC,
`Petitioner
`
`v.
`
`Jawbone Innovations, LLC,
`Patent Owner
`
`_____________________________
`
`Case IPR2022-01059
`
`U.S. Patent No. 10,779,080
`_____________________________
`
`
`DECLARATION OF JEFFREY S. VIPPERMAN, PH.D.
`
`
`
`
`
`
`
`
`
`
`TABLE OF CONTENTS
`
`
`
`
I. INTRODUCTION .......................................................................................... 1
II. SUMMARY OF OPINIONS .......................................................................... 1
III. BACKGROUND AND QUALIFICATIONS ................................................ 2
     A. Education .............................................................................................. 2
     B. Experience ............................................................................................ 3
     C. Compensation ....................................................................................... 6
IV. MATERIALS CONSIDERED ....................................................................... 7
V. LEGAL STANDARDS .................................................................................. 9
     A. Claim Construction ............................................................................ 10
     B. Level of Ordinary Skill ...................................................................... 10
     C. Obviousness ........................................................................................ 11
VI. THE ’080 PATENT ...................................................................................... 13
     A. Overview of Disclosure ...................................................................... 13
     B. Prosecution History ............................................................................ 18
VII. ANALYSIS OF GROUNDS OF UNPATENTABILITY ............................ 19
     A. Claim Construction ............................................................................ 19
     B. Ground 1: Claims 1-3, 5-9, 11-14, and 16-20 Are Rendered Obvious over Ikeda in View of McCowan and Kanamori ................................... 20
          1. Overview of Ikeda .................................................................... 20
          2. Overview of McCowan ............................................................ 26
          3. Overview of Kanamori ............................................................ 31
          4. The Ikeda-McCowan-Kanamori Combination ........................ 35
               a. The Ikeda-McCowan-Kanamori main microphone has the same formulation for signal processing as the virtual microphone V1 described in the ’080 patent, and the Ikeda-McCowan-Kanamori reference microphone has the same formulation for signal processing as the virtual microphone V2 described in the ’080 patent .......... 52
          5. Simulations of Virtual Microphone Responses ....................... 56
          6. Claim 7 ..................................................................................... 67
               a. [7a] “A system, comprising: a first virtual microphone formed from a first combination of a first microphone signal and a second microphone signal, wherein the first microphone signal is generated by a first physical microphone and the second microphone signal is generated by a second physical microphone;” .......... 67
               b. [7b] “a second virtual microphone formed from a second combination of the first microphone signal and the second microphone signal, wherein the second combination is different from the first combination,” .......... 72
               c. [7c] “wherein the first virtual microphone has a first linear response to speech and first linear response to noise,” .......... 75
               d. [7d] “the first linear response to speech being substantially similar across a plurality of frequencies for a speech source located within a predetermined angle relative to an axis of the microphone array and devoid of a null,” .......... 78
               e. [7e] “wherein the second virtual microphone has a second linear response to speech that has a single null oriented in a direction toward a source of the speech and a second linear response to noise,” .......... 81
               f. [7f] “wherein the second linear response to noise is substantially similar to the first linear response to noise,” .......... 84
               g. [7g] “one or both of the first linear response to noise and the second linear response to noise being non-zero in a direction toward a source of noise, and” .......... 85
               h. [7h] “the second linear response to speech is substantially dissimilar to the first linear response to speech,” .......... 87
               i. [7i] “wherein the speech is human speech; and” .......... 88
               j. [7j] “an adaptive noise removal application coupled to the first and second virtual microphones and generating denoised output signals by forming a plurality of combinations of signals output from the first virtual microphone and the second virtual microphone, by filtering and summing the plurality of combinations of signals in the time domain, and by a varying linear transfer function between the plurality of combinations of signals,” .......... 88
               k. [7k] “wherein the denoised output signals include less acoustic noise than acoustic signals received at the first and second physical microphones.” .......... 94
          7. Claim 8: “The system of claim 7 and further comprising: a microphone array, the first and second physical microphones positioned m [sic] the microphone array.” .......... 95
          8. Claim 9: “The system of claim 7, wherein the single null is a region of the second linear response to speech having a measured response level that is lower than the measured response level of any other region of the second linear response to speech.” .......... 96
          9. Claim 11: “The system of claim 7 and further comprising: a communications channel coupled with the processing component and including one or more of a wireless channel, a wired channel, and a hybrid wireless/wired channel.” .......... 97
          10. Claim 12: “The system of claim 11 and further comprising: a communication device wirelessly coupled with the wireless channel of the communications channel.” .......... 98
          11. Claim 13: “The system of claim 7, wherein the second microphone signal is multiplied by a ratio, wherein the ratio is a ratio of a third distance to a fourth distance, the third distance being between the first physical microphone and the speech source and the fourth distance being between the second physical microphone and the speech source.” .......... 99
          12. Claim 14 ................................................................................. 101
               a. [14a] “A system, comprising: a first virtual microphone comprising a first combination of a first microphone signal and a second microphone signal,” .......... 101
               b. [14b] “the first virtual microphone having a first linear response to speech and a first linear response to noise,” .......... 101
               c. [14c] “the first linear response to speech being substantially similar across a plurality of frequencies for a speech source located within a predetermined angle relative to an axis of a microphone array,” .......... 101
               d. [14d] “wherein the first microphone signal is output from a first physical microphone and the second microphone signal is output from a second physical microphone;” .......... 102
               e. [14e] “a second virtual microphone comprising a second combination of the first microphone signal and the second microphone signal,” .......... 102
               f. [14f] “the second virtual microphone having a second linear response to speech and a second linear response to noise,” .......... 102
               g. [14g] “the second linear response to noise being substantially similar to the first linear response to noise,” .......... 102
               h. [14h] “one or both of the first linear response to noise and the second linear response to noise being non-zero in a direction toward a source of noise, and” .......... 103
               i. [14i] “the second linear response to speech being substantially dissimilar to the first linear response to speech,” .......... 103
               j. [14j] “wherein the second combination is different from the first combination,” .......... 103
               k. [14k] “wherein the first virtual microphone and the second virtual microphone are distinct virtual directional microphones; and” .......... 103
               l. [14l] “a processing component coupled to the first and second virtual microphones, the processing component including an adaptive noise removal application receiving acoustic signals from the first virtual microphone and the second virtual microphone, filtering and summing the acoustic signals in the time domain, applying a varying linear transfer function between the acoustic signals, and generating an output signal,” .......... 105
               m. [14m] “wherein the output signal is a denoised acoustic signal.” .......... 107
          13. Claim 16: “The system of claim 14 and further comprising: a communications channel coupled with the processing component and including one or more of a wireless channel, a wired channel, and a hybrid wireless/wired channel.” .......... 107
          14. Claim 17: “The system of claim 16 and further comprising: a communication device wirelessly coupled with the wireless channel of the communications channel.” .......... 107
          15. Claim 18: “The system of claim 14, wherein the acoustic signals from the first virtual microphone, the second virtual microphone or both are delayed.” .......... 108
          16. Claim 19: “The system of claim 18, wherein the delay is raised to a power that is proportional to a time difference between arrival of the speech at the first virtual microphone and arrival of the speech at the second virtual microphone.” .......... 110
          17. Claim 20: “The system of claim 19, wherein the power is proportional to a sampling frequency multiplied by a quantity equal to a third distance subtracted from a fourth distance, the third distance being between a first physical microphone and the speech source, the fourth distance being between a second physical microphone and the speech source, and the first and second physical microphones are positioned in the microphone array.” .......... 113
          18. Claim 1 ................................................................................... 116
               a. [1a] “A system, comprising: a microphone array including a first physical microphone outputting a first microphone signal and a second physical microphone outputting a second microphone signal;” .......... 116
               b. [1b] “a processing component coupled to the microphone array and generating a virtual microphone array including a first virtual microphone and a second virtual microphone,” .......... 116
               c. [1c] “the first virtual microphone including a first combination of the first microphone signal and the second microphone signal, the second virtual microphone including a second combination of the first microphone signal and the second microphone signal, wherein the second combination is different from the first combination,” .......... 117
               d. [1d] “wherein the first virtual microphone and the second virtual microphone have substantially similar responses to noise and substantially dissimilar responses to speech; and” .......... 117
               e. [1e] “an adaptive noise removal application coupled to the processing component and generating denoised output signals by forming a plurality of combinations of signals output from the first virtual microphone and the second virtual microphone, by filtering and summing the plurality of combinations of signals in the time domain, and by a varying linear transfer function between the plurality of combinations of signals,” .......... 118
               f. [1f] “wherein the denoised output signals include less acoustic noise than acoustic signals received at the microphone array.” .......... 118
          19. Claim 2: “The system of claim 1, wherein the acoustic noise comprises noise content and the acoustic signals comprise speech content.” .......... 119
          20. Claim 3: “The system of claim 2, wherein the speech content comprises human speech.” .......... 119
          21. Claim 5: “The system of claim 1 and further comprising: a communications channel coupled with the processing component and including one or more of a wireless channel, a wired channel, and a hybrid wireless/wired channel.” .......... 120
          22. Claim 6: “The system of claim 5 and further comprising: a communication device wirelessly coupled with the wireless channel of the communications channel.” .......... 121
     C. Ground 2: Claims 4, 10, and 15 are rendered obvious over Ikeda in view of McCowan, Kanamori, and Yang ......................................... 121
          1. Overview of Yang ................................................................... 121
          2. Claim 10: “The system of claim 7 and further comprising: a voice activity detector (VAD) coupled with the processing component and operative to generate voice activity signals.” .......... 122
          3. Claim 15: “The system of claim 14 and further comprising: a voice activity detector (VAD) coupled with the processing component and operative to generate voice activity signals.” .......... 124
          4. Claim 4: “The system of claim 1 and further comprising: a voice activity detector (VAD) coupled with the processing component and operative to generate voice activity signals.” .......... 124
VIII. CONCLUSION ........................................................................................... 125
`
`
`
`I, Jeffrey S. Vipperman, declare as follows:
`
I. INTRODUCTION

1. I have been retained as an independent expert by Google LLC
`
`(“Petitioner” or “Google”) in connection with an inter partes review of U.S. Patent
`
`No. 10,779,080 (“the ’080 patent”) (Ex. 1001). I have prepared this declaration in
`
`connection with Google’s petition.
`
`2.
`
`Specifically, this document contains my opinions about the
`
`technology claimed in claims 1-20 of the ’080 patent (“the challenged claims”) and
`
`Google’s grounds of unpatentability for these claims.
`
II. SUMMARY OF OPINIONS

3. This declaration considers the challenged claims of the ’080 patent.
`
`Below I set forth the opinions I have formed, the conclusions I have reached, and
`
`the bases for these opinions and conclusions.
`
`4.
`
`In forming my opinions, I have assumed that the priority date of the
`
`’080 patent is June 13, 2007, which is the filing date of U.S. Provisional Patent
`
`Application No. 60/934,551 (“the ’551 application”), as listed on the cover page of
`
`the ’080 patent. Ex. 1001, Cover. I understand the ’080 patent claims priority to the
`
`’551 application. Ex. 1001, Cover.
`
`5.
`
`Based on my experience, knowledge of the art, analysis of the
`
`asserted grounds and references, and understanding a person of ordinary skill in the
`
`1
`
`Page 10 of 135
`
`
`
`art (“POSITA”) would have had of the claims, it is my opinion that the challenged
`
`claims of the ’080 patent would have been obvious to a person of ordinary skill in
`
`the art as of 2007, based on the asserted grounds.
`
`III. BACKGROUND AND QUALIFICATIONS
`6.
`I believe that I am well qualified to serve as a technical expert in this
`
`matter based upon my educational and work experience, which I summarize below.
`
`I understand that my curriculum vitae, which includes a more detailed summary of
`
`my background, experience, patents, and publications, is attached as Ex. 1004.
`
`A. Education
`7.
`I received my Ph.D. in Mechanical Engineering from Duke University
`
`in 1997. Previously, I obtained Master of Science and Bachelor of Science degrees
`
`in Mechanical Engineering from the Virginia Polytechnic Institute and State
`
`University (“Virginia Tech”) in 1992 and 1990, respectively. My dissertation at
`
`Duke was titled “Adaptive Piezoelectric Sensoriactuators for Multivariable
`
`Structural Acoustic Control.” My dissertation addressed the development of a
`
`hybrid analog/digital circuit and adaptation method to permit piezoelectric
`
`transducers to be used simultaneously as a sensor and an actuator. Doing so
`
`provides an array of truly “co-located” sensor/actuator pairs with minimum phase,
`
`such that stability of the multichannel feedback system is greatly enhanced. These
`
`were demonstrated for active structural acoustic control.
`
`2
`
`Page 11 of 135
`
`
`
`U.S. Patent No. 10,779,080
`Declaration of Jeffrey S. Vipperman, Ph.D.
`
`
`Experience
`I am a Professor of Mechanical Engineering, Bioengineering, and
`
`B.
`8.
`
`Communication Sciences and Disorders. I also currently serve as Vice Chair of the
`
`Mechanical Engineering and Materials Science Department at the University of
`
`Pittsburgh.
`
`9.
`
`I first began research in acoustics and sound systems in 1989 as an
`
undergraduate student. My master’s research concerned adaptive feedforward
`
`control of broadband structural vibration, and my Ph.D. research concerned the
`
`development of arrays of self-sensing piezoelectric transducers that could be used
`
`for active structural-acoustic control. I have also developed a number of algorithms
`
`for active control of noise and vibration.
`
`10. My acoustics research has included a mix of theory, analytical and
`
`numerical modeling, and measurement of acoustic and vibration systems. Aside
`
`from the previously mentioned array research, my acoustics research has included
`
`transducer and controls development, transducer modeling/fabrication/testing,
`
`analog/digital signal processing, embedded systems, active and passive noise and
`
`vibration control, development of various types of metamaterials (e.g., phononic
`
`crystals, resonant lattices, layered media, and pentamode materials) for acoustical
`
`filtering and cloaking, development of noise classifiers to discern types of military
`
`noise or for incorporation into surgical devices as surgical aids, development of
`
`3
`
`Page 12 of 135
`
`
`
`thermoacoustic engines, refrigerators, and sensors (e.g., a wireless, “in-core”
`
`thermoacoustic sensor that can measure temperature and neutron flux inside a
`
`nuclear reactor). Additional topics of my research include developing structural
`
`acoustic models (i.e., concerned with sound radiation from vibrating structures) of
`
`sound transmission through finite cylinders, various methods of passive and active
`
`control of noise, vibration, and structural-acoustic radiation (i.e., controlling sound
`
`radiation of a vibrating structure by introducing additional vibrations to make it an
`
`inefficient radiator), hearing loss prevention, and modeling of ear response and
`
`damage to the inner ear for impulsive and ultrasound sources. During the early
`
`stages of the microelectromechanical systems (MEMS) revolution, I worked on
`
`producing some of the earliest silicon-on-insulator (MEMS) microphones through
`
`electronic fabrication methods.
`
`11. As a professor, I have developed and taught three graduate courses
`
`directly related to acoustics and signal processing, including “Measurement and
`
`Analysis of Vibroacoustic Systems,” “Fundamentals of Acoustics and Vibration,”
`
`and “Measurement and Analysis of Random Data from Dynamical Systems.” The
`
`latter two courses cover acoustical arrays. I have also taught three mechanical
`
`measurements courses, a dynamic systems and introductory undergraduate and
`
`graduate mechanical vibrations course, and an advanced (Ph.D. candidate level)
`
`vibrations course, as well as related courses such as controls, undergraduate and
`
`4
`
`Page 13 of 135
`
`
`
`graduate dynamics, kinematics, mechanical measurements, and electrical circuits.
`
`Further, I have developed and given a short course at the American Controls
`
`Conference on “Active Control of Sound, Vibration, and Structural Acoustics,” as
`
`well as two other short courses for local industry on “Acoustical Theory and
`
`Measurements” and “Noise and Vibration Measurements.”
`
`12.
`
`I also have a consulting business (Blue Ridge Consulting) and am
`
`Vice President of Atlas Medtech, LLC, a University of Pittsburgh licensed startup
`
`company.
`
`13.
`
`I have worked on Department of Defense (“DoD”) projects as a
`
`Principal Investigator and Co-Principal Investigator on projects that involve
`
`acoustic arrays. In one project, a microphone array and cross-correlation methods
`
`(time difference of arrival or TDOA methods) were used to determine the bearing
`
`angle for acoustic plane waves associated with various forms of military and
`
`natural noise. Multiple arrays were used to triangulate the location of the noise
`
`source. In conjunction, we developed machine learning algorithms to classify the
`
`noise source, which provided additional help for noise management programs
`
`around U.S. military bases. A corporate partner commercialized the array research
`
`into a product. In another project, I helped co-develop a method for localizing
`
`sound using small arrays of unidirectional (e.g., “shot-gun”) microphones. The
`
`methods worked in both the time and frequency domains. Another military project
`
`5
`
`Page 14 of 135
`
`
`
`U.S. Patent No. 10,779,080
`Declaration of Jeffrey S. Vipperman, Ph.D.
`
`funded by DoD involved the development of 2-D and 3-D source parametric arrays
`
`for steering heterodyned ultrasound for communications systems.
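
The array work described above relies on a standard technique: the bearing of a far-field source can be estimated from the time difference of arrival (TDOA) between two microphones, found at the peak of their cross-correlation, and bearings from multiple arrays can then be intersected to triangulate the source. By way of illustration only, the following is a minimal sketch of that general idea; it is not code from the projects described in this declaration, and the function name, parameters, and the two-microphone, plane-wave geometry are illustrative assumptions.

```python
import numpy as np

def estimate_bearing_deg(sig_a, sig_b, fs, mic_spacing_m, sound_speed_mps=343.0):
    """Estimate a far-field bearing from the TDOA between two microphone
    signals, located at the peak of their cross-correlation."""
    sig_a = np.asarray(sig_a, dtype=float)
    sig_b = np.asarray(sig_b, dtype=float)
    # Cross-correlate the two channels; the peak lag is the sample delay
    # of sig_a relative to sig_b.
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag_samples = np.argmax(corr) - (len(sig_b) - 1)
    tdoa_s = lag_samples / fs
    # Path-length difference implied by the TDOA, limited to the physical
    # maximum set by the microphone spacing (plane-wave assumption).
    path_diff_m = np.clip(tdoa_s * sound_speed_mps, -mic_spacing_m, mic_spacing_m)
    # Bearing angle measured from broadside of the two-microphone axis.
    return np.degrees(np.arcsin(path_diff_m / mic_spacing_m))

# Quick self-check: broadband noise delayed by 5 samples at fs = 48 kHz
# on a 10 cm spacing yields a bearing of roughly 21 degrees.
if __name__ == "__main__":
    fs = 48_000
    rng = np.random.default_rng(0)
    clean = rng.standard_normal(4096)
    delayed = np.roll(clean, 5)  # sig_a lags sig_b by 5 samples
    print(estimate_bearing_deg(delayed, clean, fs, mic_spacing_m=0.10))
```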
`
`14. Some of my professional activities include chairing an American
`
`National Standards Institute (ANSI) Committee to revise the ANSI S1.1 Acoustical
`
`Terminology Standard. I am also a Fellow in the American Society of Mechanical
`
`Engineers (ASME) and a former Chair of the Noise Control and Acoustics
`
`Division of ASME. I also chaired the Per Bruel Gold Medal in Acoustics Award
`
`selection committee for ASME. I have organized nine conference sessions on
`
`acoustics and was a Track Organizer (over multiple conference sessions) for nine
`
`ASME conferences, as well as Technical Program Chair over all acoustics-related
`
`conference sessions at the ASME International Mechanical Engineering Congress
`
`and Exposition (IMECE) in 2009. I also participated on a National Research
`
`Council (National Academies) panel to evaluate the hearing loss prevention
`
`component of the mining program for the National Institute for Occupational
`
`Safety and Health (NIOSH) research programs.
`
`15.
`
`I have published numerous technical papers, book chapters, reports,
`
`and the like related to acoustic sensors and acoustic signal processing.
`
`C. Compensation
`16.
`I am being compensated for services provided in this matter at my
`
`usual and customary rate of $400 per hour plus travel expenses. My compensation
`
`6
`
`Page 15 of 135
`
`
`
`is not conditioned on the conclusions I reach as a result of my analysis or on the
`
`outcome of this matter, and in no way affects the substance of my statements in
`
`this declaration.
`
`17.
`
`I am not aware of any financial interest that I have in the Patent
`
`Owner, or any of its subsidiaries or affiliates. Likewise, I am not aware of any
`
`financial interest that I have in Petitioner, or any of its subsidiaries or affiliates. I
`
`do not have any financial interest in the ’080 patent or any proceeding involving
`
`the ’080 patent.
`
`IV. MATERIALS CONSIDERED
`18.
`In forming my opinions, I have analyzed the following, including the
`
`’080 patent, its file history, the prior art listed in this declaration and in the petition
`
`grounds, and the materials listed in this declaration.
`
Exhibit      Description

Ex. 1001     U.S. Patent No. 10,779,080 to Burnett (“the ’080 patent”)

Ex. 1002     File History of U.S. Patent No. 10,779,080

Ex. 1004     Curriculum Vitae of Jeffrey S. Vipperman, Ph.D.

Ex. 1005     Japanese Unexamined Patent Application Publication No. H11-18186A to Ikeda et al. (“Ikeda”)

Ex. 1006     Certified Translation of Japanese Unexamined Patent Application Publication No. H11-18186A to Ikeda et al. (“Ikeda”)

Ex. 1007     Iain A. McCowan et al., Near-Field Adaptive Beamformer for Robust Speech Recognition, Digital Signal Processing, Vol. 12, Issue 1 (2002), 87-106 (“McCowan”)

Ex. 1008     U.S. Patent Application Publication No. 2004/0185804 to Kanamori et al. (“Kanamori”)

Ex. 1009     U.S. Patent Application Publication No. 2002/0193130 to Yang et al. (“Yang”)

Ex. 1010     U.S. Patent No. 5,471,538 to Sasaki et al. (“Sasaki”)

Ex. 1011     U.S. Patent Application Publication No. 2007/0244698 to Dugger et al. (“Dugger”)

Ex. 1012     U.S. Patent Application Publication No. 2008/0152167 to Taenzer (“Taenzer”)

Ex. 1013     U.S. Patent Application Publication No. 2006/0120537 to Burnett et al. (“Burnett”)

Ex. 1014     U.S. Patent Application Publication No. 2003/0031328 to Elko et al. (“Elko”)

Ex. 1015     U.S. Patent No. 6,370,401 to Baranowski et al. (“Baranowski”)

Ex. 1016     U.S. Patent No. 6,006,115 to Wingate (“Wingate”)

Ex. 1017     U.S. Patent No. 5,590,417 to Rydbeck (“Rydbeck”)

Ex. 1018     U.S. Patent Application Publication No. 2005/0259811 to Kimm et al. (“Kimm”)

Ex. 1019     U.S. Patent Application Publication No. 2003/0179888 to Burnett et al. (“Burnett II”)

Ex. 1020     Excerpt of Bernard Widrow & Samuel D. Stearns, Adaptive Signal Processing, Prentice-Hall (1985) (“Widrow”)

Ex. 1023     Patent Owner’s Claim Chart for U.S. Patent No. 10,779,080, Jawbone Innovations, LLC v. Google LLC, No. 6:21-cv-00985 (W.D. Tex.)

Ex. 1025     Excerpt of Lawrence E. Kinsler et al., Fundamentals of Acoustics, John Wiley & Sons, Inc. (4th ed. 2000)

Ex. 1026     Excerpt of M. P. Norton et al., Fundamentals of Noise and Vibration Analysis for Engineers, Cambridge Univ. Press (2nd ed. 2003)

Ex. 1027     Gary W. Elko et al., A Simple Adaptive First-Order Differential Microphone, Proceedings of 1995 Workshop on Applications of Signal Processing to Audio and Acoustics, IEEE (1995)

Ex. 1028     Excerpt of Alan V. Oppenheim et al., Discrete-Time Signal Processing, Prentice-Hall, Inc. (2nd ed. 1999)
`
`
`
`19. My opinions are based on my experience, knowledge of the relevant
`
`art, the documents identified above, and the documents discussed in this
`
`declaration.
`
`V. LEGAL STANDARDS
`20.
`I am not a lawyer. My understanding of the legal standards to apply in
`
`reaching the conclusions in this declaration is based on discussions with counsel
`
`for Petitioner, my experience applying similar standards in other patent-related
`
`matters, and my reading of the documents submitted in this proceeding. In
`
`preparing this declaration, I sought to faithfully apply these legal standards to the
`
`challenged claims.
`
`9
`
`Page 18 of 135
`
`
`
`U.S. Patent No. 10,779,080
`Declaration of Jeffrey S. Vipperman, Ph.D.
`
`
`A. Claim Construction
`21.
`I have been instructed that the terms appearing in the ’080 patent
`
`should be interpreted in view of the claim language itself, the specification, the
`
`prosecution history of the patent, and any relevant extrinsic evidence. The words of
`
`a claim are generally given their ordinary and customary meaning, which is the
`
`meaning that the term would have to a person of ordinary skill in the art at the time
`
`of the invention, which I am assuming here is June 13, 2007. While claim
`
`limitations cannot be read in from the specification, the specification is the single
`
`best guide to the meaning of a disputed term. I have followed these principles in
`
`reviewing the claims of the ’080 patent and forming the opinions set forth in this
`
`declaration.
`
B. Level of Ordinary Skill

22. I understand a person of ordinary skill in the art is determined by
`
`looking at (i) the type of problems encountered in the art; (ii) prior art solutions to
`
`those problems; (iii) rapidity with which innovations are made; (iv) sophistication
`
`of the technology; and (v) educational level of active workers in the field.
`
`23.
`
`In my opinion, a person of ordinary skill in the art (“POSITA”) would
`
`have had a minimum of a bachelor’s degree in computer engineering, computer
`
`science, electrical engineering, mechanical engineering, or a similar field, and
`
`approximately three years of industry or academic experience in a field related to
`
`10
`
`Page 19 of 135
`
`
`
`acoustics, speech recognition, speech detection, or signal processing. Work
`
`experience can substitute for formal education and additional formal education can
`
`substitute for work experience. I was at least a POSITA as of June 13, 2007.
`
`C. Obviousness
`24.
`I have been told that under 35 U.S.C. § 103, a patent claim may be
`
`obvious if the differences between the subject matter sought to be patented and the
`
`prior art are such that the subject matter as a whole would have been obvious at the
`
`time the invention was made to a person having ordinary skill in the art to which
`
`said subject matter pertains.
`
`25.
`
`I have been told that a proper obviousness analysis requires the
`
`following: (a) determining the scope and content of the prior art; (b) ascertaining
`
`the differences between the prior art and the claims at issue; (c) resolving the level
`
`of ordinary skill in the pertinent art; and (d) considering evidence of secondary
`
`indicia of non-obviousness (if available).
`
`26.
`
`I have been told that the relevant time for considering whether a claim
`
`would have been obvious to a person of ordinary skill in the art is the time of
`
`in