`
`Exhibit 1
`
`
`
`
`UNITED STATES PATENT AND TRADEMARK OFFICE
`__________________________________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`__________________________________
`
`AMAZON.COM, INC.,
`AMAZON.COM SERVICES LLC, and
`AMAZON WEB SERVICES, INC.,
`Petitioners,
`
`v.
`
`SOUNDCLEAR TECHNOLOGIES LLC,
`Patent Owner.
`
`Case No. IPR2025-00565
`Patent No. 11,069,337
`
`PETITION FOR INTER PARTES REVIEW
`OF CLAIMS 1-5 OF U.S. PATENT NO. 11,069,337
`
`
`
`
`TABLE OF CONTENTS
`
I.     INTRODUCTION --------------------------------------------------------------- 1
II.    BACKGROUND ----------------------------------------------------------------- 2
III.   THE ’337 PATENT ------------------------------------------------------------ 3
       A.   Overview -------------------------------------------------------------- 3
       B.   Prosecution History --------------------------------------------------- 5
IV.    RELIEF REQUESTED ----------------------------------------------------------- 6
       A.   Grounds --------------------------------------------------------------- 6
       B.   The References Are Analogous Prior Art -------------------------------- 6
V.     LEVEL OF ORDINARY SKILL ---------------------------------------------------- 7
VI.    CLAIM CONSTRUCTION --------------------------------------------------------- 8
VII.   GROUND 1: CLAIMS 1-5 ARE ANTICIPATED BY SHIN ------------------------------- 8
       A.   Claim 1 --------------------------------------------------------------- 8
            1.   1[pre]: Voice-Content Control Device ----------------------------- 8
            2.   1[a]: Proximity Sensor ------------------------------------------- 9
            3.   1[b]: Voice Classifying Unit ------------------------------------- 11
            4.   1[c]: Process Executing Unit ------------------------------------- 15
            5.   1[d]: Voice-Content Generating Unit ------------------------------ 16
            6.   1[e]: Output Controller ------------------------------------------ 18
            7.   1[f]: Generate a First Output Sentence --------------------------- 18
            8.   1[g]: Generate a Second Output Sentence -------------------------- 20
            9.   1[h]: Output Controller Adjusts Volume of Voice Data ------------- 22
       B.   Claim 2 --------------------------------------------------------------- 24
            1.   2[a]: Process Executing Unit ------------------------------------- 24
            2.   2[b]: Voice-Content Generating Unit ------------------------------ 27
       C.   Claim 3 --------------------------------------------------------------- 27
       D.   Claims 4 and 5 -------------------------------------------------------- 29
VIII.  GROUND 2: CLAIMS 1-5 WOULD HAVE BEEN OBVIOUS OVER SHIN --------------------- 30
       A.   Claim 1 --------------------------------------------------------------- 30
       B.   Claim 2 --------------------------------------------------------------- 32
       C.   Claim 3 --------------------------------------------------------------- 33
       D.   Claims 4 and 5 -------------------------------------------------------- 35
IX.    GROUND 3: CLAIMS 1-5 WOULD HAVE BEEN OBVIOUS OVER SHIMOMURA ---------------- 35
       A.   Claim 1 --------------------------------------------------------------- 35
            1.   1[pre]: Voice-Content Control Device ----------------------------- 35
            2.   1[a]: Proximity Sensor ------------------------------------------- 37
            3.   1[b]: Voice Classifying Unit ------------------------------------- 37
            4.   1[c]: Process Executing Unit ------------------------------------- 40
            5.   1[d]: Voice-Content Generating Unit ------------------------------ 41
            6.   1[e]: Output Controller ------------------------------------------ 42
            7.   1[f]: Generate a First Output Sentence --------------------------- 43
            8.   1[g]: Generate a Second Output Sentence -------------------------- 44
            9.   1[h]: Output Controller Adjusts Volume of Voice Data ------------- 46
       B.   Claim 2 --------------------------------------------------------------- 53
            1.   2[a]: Process Executing Unit ------------------------------------- 53
            2.   2[b]: Voice-Content Generating Unit ------------------------------ 56
       C.   Claim 3 --------------------------------------------------------------- 56
       D.   Claims 4 and 5 -------------------------------------------------------- 59
X.     GROUND 4: CLAIMS 1-5 WOULD HAVE BEEN OBVIOUS OVER SHIMOMURA AND SHIN ------- 59
       A.   Claim 1 --------------------------------------------------------------- 59
       B.   Claim 2 --------------------------------------------------------------- 62
       C.   Claim 3 --------------------------------------------------------------- 62
       D.   Claims 4 and 5 -------------------------------------------------------- 64
XI.    GROUND 5: CLAIMS 1-5 WOULD HAVE BEEN OBVIOUS OVER SHIN AND/OR
       SHIMOMURA IN VIEW OF KRISTJANSSON ------------------------------------------ 65
XII.   SECONDARY CONSIDERATIONS OF NONOBVIOUSNESS --------------------------------- 68
XIII.  DISCRETIONARY DENIAL UNDER §314(A) IS NOT APPROPRIATE ---------------------- 68
       A.   Petitioner’s Sotera Stipulation --------------------------------------- 69
       B.   The Petition Presents Compelling Evidence of Unpatentability ---------- 69
XIV.   DISCRETIONARY DENIAL UNDER §325(D) IS NOT APPROPRIATE ---------------------- 70
XV.    MANDATORY NOTICES, GROUNDS FOR STANDING, AND FEE PAYMENT ------------------- 72
       A.   Real Party-In-Interest (37 C.F.R. §42.8(b)(1)) ------------------------ 72
       B.   Related Matters (37 C.F.R. §42.8(b)(2)) ------------------------------- 72
       C.   Lead and Backup Counsel (37 C.F.R. §42.8(b)(3)) ----------------------- 73
       D.   Service Information (37 C.F.R. §42.8(b)(4)) --------------------------- 74
       E.   Grounds for Standing (37 C.F.R. §42.104) ------------------------------ 74
       F.   Payment of Fees (37 C.F.R. §42.15(a)) --------------------------------- 74
`
`
`
TABLE OF AUTHORITIES

Cases:                                                                      Page(s)
`
`Advanced Bionics, LLC v. MED-EL Elektromedizinische
`Geräte GmbH,
`IPR2019-01469, Paper 6 (P.T.A.B. Feb. 13, 2020) ----------------------- 70, 72
`Apple Inc. v. Telefonaktiebolaget LM Ericsson,
`IPR2022-00457, Paper 7 (P.T.A.B. Sept. 21, 2022) --------------------------- 72
`Endymed Med. Ltd. v. Serendia, LLC,
`IPR2024-00843, Paper 14 (P.T.A.B. Jan. 10, 2025) --------------------------- 69
`In re GPAC Inc.,
`57 F.3d 1573 (Fed. Cir. 1995) ------------------------------------------------------ 7
`JUUL Labs, Inc. v. NJOY, LLC,
`IPR2024-00160, Paper 10 (P.T.A.B. May 24, 2024) -------------------------- 72
`KSR Int’l Co. v. Teleflex Inc.,
`550 U.S. 398 (2007) ----------------------------------------------------------- passim
`Leapfrog Enters. v. Fisher-Price, Inc.,
`485 F.3d 1157 (Fed. Cir. 2007) --------------------------------------------------- 68
`Newell Cos. v. Kenney Mfg. Co.,
`864 F.2d 757 (Fed. Cir. 1988) ----------------------------------------------------- 68
`Nidec Motor Corp. v. Zhongshan Broad Ocean Motor Co. Ltd.,
`868 F.3d 1013 (Fed. Cir. 2017) ---------------------------------------------------- 8
`Quasar Sci. LLC v. Colt Int’l Clothing, Inc.,
`IPR2023-00611, Paper 10 (P.T.A.B. Oct. 10, 2023) ---------------------- 71, 72
`Samsung Elecs. Co. v. Maxell, Ltd.,
`IPR2024-00867, Paper 9 (P.T.A.B. Nov. 7, 2024) ----------------------------- 71
`Shenzen Chic Elecs. v. Pilot, Inc.,
`IPR2023-00810, Paper 12 (P.T.A.B. Nov. 8, 2023)---------------------------- 70
`Sotera Wireless, Inc. v. Masimo Corp.,
`IPR2020-01019, Paper 12 (P.T.A.B. Dec. 1, 2020) ---------------------------- 69
`
`
`
`
`TP-Link Corp. Ltd. v. Netgear, Inc.,
`IPR2023-01469, Paper 10 (P.T.A.B. Apr. 2, 2024) ---------------------------- 70
`Unwired Planet, LLC v. Google Inc.,
`841 F.3d 995 (Fed. Cir. 2016) ------------------------------------------------------ 7
`Vivid Techs., Inc. v. Am. Sci. & Eng’g, Inc.,
`200 F.3d 795 (Fed. Cir. 1999) ------------------------------------------------------ 8
`Statutes and Rules:
`35 U.S.C. § 102 ----------------------------------------------------------------------- 6, 7
`35 U.S.C. § 103 -------------------------------------------------------------------------- 6
`35 U.S.C. § 314 -------------------------------------------------------------------- 68, 69
`35 U.S.C. § 325 -------------------------------------------------------------------- 70, 72
`Miscellaneous:
`Katherine K. Vidal, Interim Procedure for Discretionary Denials in
`AIA Post-Grant Proceedings with Parallel District Court
`Litigation (June 21, 2022) --------------------------------------------------------- 69
`
`
`
`
`
`
`LIST OF EXHIBITS
`
No.     Description
1001    U.S. Patent No. 11,069,337
1002    Declaration of Richard Stern, Ph.D.
1003    U.S. Patent App. Publ. No. 2017/0083281 (“Shin”)
1004    English Translation of Shimomura from Ex. 1005
1005    Declaration of Gwen Snorteland for Translation of Japanese Unexamined
        Patent App. Publ. 2005/202076 (“Shimomura”)
1006    Curriculum Vitae of Richard Stern, Ph.D.
1007    Excerpts from File History of U.S. Patent No. 11,069,337
1008    Order (Dkt. No. 63), SoundClear Techs., LLC v. Amazon.com, Inc., No.
        1:24-cv-01283-AJT-WBP (E.D. Va. Nov. 8, 2024)
1009    U.S. Patent App. Publ. No. 2017/0154626 (“Kim”)
1010    U.S. Patent App. Publ. No. 2017/0337921 (“Aoyama”)
1011    U.S. Patent App. Publ. No. 2016/0284351 (“Ha”)
1012    U.S. Patent No. 9,489,172 (“Iyer”)
1013    U.S. Patent No. 10,147,439 (“Kristjansson”)
1014    Order (Dkt. No. 84), SoundClear Techs., LLC v. Amazon.com, Inc., No.
        1:24-cv-01283-AJT-WBP (E.D. Va. Jan. 10, 2025)
1015    Nicolae Duta, Natural Language Understanding and Prediction: From
        Formal Grammars to Large Scale Machine Learning, 131 Fundamenta
        Informaticae 425 (2014) (“Duta”)
`
`
`
`
`
`Amazon.com, Inc., Amazon.com Services LLC, and Amazon Web Services,
`
`Inc. (collectively, “Petitioner” or “Amazon”) request inter partes review of claims
`
`1-5 of U.S. Patent No. 11,069,337 (“the ’337 patent”), which SoundClear Technolo-
`
`gies LLC (“Patent Owner” or “PO”) purportedly owns.
`
I.   INTRODUCTION
`The ’337 patent describes an electronic device, such as a speaker or phone,
`
`that receives and processes a user’s voice and outputs a voice response. (Ex. 1001
`
`(’337 patent), Fig. 1, 2:62-3:1.) The patent claims priority to 2018, years after Apple
`
`launched Siri (2011), Amazon launched Alexa-enabled devices (2014), and Google
`
`launched its voice assistant (2016). The patent admits that devices that detect voice,
`
`perform processing according to the user’s intent, and provide a voice output were
`
`known. (Id., 1:21-28.)
`
`The Examiner allowed the ’337 patent claims because they recite calculating
`
`the distance between the device and user, then adjusting the content and volume of
`
`the response based on that distance. (Ex. 1007, 28-32, 41-46.) But this was not new
`
`in 2018. It had been disclosed in many prior art references, including the references
`
`relied on herein.
`
`Because the ’337 patent removes from the public store of knowledge devices
`
`that were known and obvious to those skilled in the art, the Board should cancel the
`
`claims.
`
`-1-
`
`
`
`Case 3:24-cv-00540-MHL Document 49-1 Filed 04/08/25 Page 11 of 89 PageID# 387
`IPR Petition – Patent 11,069,337
`Amazon.com, Inc., et al. v. SoundClear Technologies LLC
`
`II. BACKGROUND
`Using a proximity sensor to determine a user’s distance from a device and
`
`then tailoring the information provided and the volume based on that distance has
`
`been known for decades. (Ex. 1002 ¶¶35-44.) In 2005, Shimomura (filed by Sony)
`
`described such a device. (Ex. 1004, Abstract.) It used cameras to determine the
`
`distance between the device and the user (id. ¶¶[0039], [0093]), classified the user’s
`
`speech based on distance (e.g., whether the user was more or less than 350 cm away)
`
`and adjusted the output content (e.g., including or omitting words) and volume ac-
`
`cordingly (id. ¶¶[0049]-[0051], [0057], [0086]-[0088]). (Ex. 1002 ¶41.)
`
`In 2017, Shin (filed by Samsung) disclosed a device that acquires a user’s
`
`voice input via a microphone, analyzes it, and provides voice output via a speaker.
`
`(Ex. 1003 ¶¶[0031], [0037].) Shin used a “distance detection module” or “proximity
`
`sensor” to “compute a distance between a user and the electronic device[.]” (Id.
`
`¶¶[0066], [0153], [0053], [0075], [0077], Figs. 5A, 9; Ex. 1002 ¶42.) Shin’s device
`
`classified voice input based on distance and then tailored the output. For users within
`
`1 meter, the device output “detailed content” at a lower volume (e.g., 40 dB). (Ex.
`
`1003 ¶¶[0051], [0088], Tables 1, 3.) For users more than 1 meter away, it output
`
`“abbreviated content” at a higher volume (e.g., 45 dB or more). (Id.; Ex. 1002 ¶43.)
`
For example, if a user said, “Let me know today’s weather,” Shin’s device

obtained weather-related information and may respond with: “The weather in Jul. 1,

2015 is (①) rainy after cloudy (②). The highest temperature is 28° C., and the

lowest temperature is 18° C. (③), the rainfall is 10 mm (④). Prepare your umbrella

when you go out. (⑤).” (Ex. 1003 ¶[0088].) If the voice was classified as near

(e.g., less than 1 meter away), all five categories of information were provided. (Id.)

If the voice was classified as farther away (e.g., between 1 and 2 meters), category

⑤ was omitted. (Id.) And, if the voice was classified as even farther away (e.g.,

greater than 4 meters), all but category ② was omitted. (Id.) Thus, Shin disclosed
`
`using a proximity sensor to calculate user-to-device distance, classifying the user’s
`
`voice based on that distance, and tailoring the response’s content and volume based
`
`on the classification. This allowed Shin’s device to provide a “suitable” amount of
`
`information at a “suitable” volume based on distance. (Id. ¶¶[0089], [0080]; Ex.
`
`1002 ¶44.)
`
`Other references also disclosed tailoring output responses and/or volume
`
`based on the user-to-device distance. (Ex. 1002 ¶¶36-40; Exs. 1011, 1012, 1013.)
`
`III. THE ’337 PATENT
`A. Overview
`
`The ’337 patent describes a “voice-content control device” that analyzes a
`
`user’s voice and generates a response. (Ex. 1001, Abstract.) Figure 1 shows such a
`
`device 1 (orange) that detects the voice V1 of a user H (purple), processes it, and
`
`outputs a responsive voice V2:
`
`-3-
`
`
`
`Case 3:24-cv-00540-MHL Document 49-1 Filed 04/08/25 Page 13 of 89 PageID# 389
`IPR Petition – Patent 11,069,337
`Amazon.com, Inc., et al. v. SoundClear Technologies LLC
`
`
`(Id., Fig. 1, 2:62-3:1; Ex. 1002 ¶45.)1 The device 1 includes various components,
`
`such as a voice detecting unit 10 (red) and a controller 16 (blue):
`
`
`
`
`
`
[’337 patent, Figure 2 (annotated)]

1 Figures and Tables herein may be colored or otherwise annotated for clarity.
`
`(Ex. 1001, Fig. 2, 3:12-17; Ex. 1002 ¶46.) The controller includes several “units”
`
`for acquiring the voice, analyzing it, processing the request, and generating a re-
`
`sponse. (Ex. 1001, 3:50-59.) For example, the process executing unit analyzes the
`
`user’s speech (e.g., “How’s the weather today?”) detected with the voice detecting
`
`unit (e.g., microphones) and obtains the requested information (e.g., weather). (Id.,
`
`4:12-14, 4:56-61, 7:17-27.)
`
`The device may include a “proximity sensor” for calculating the distance to
`
`the user. (Id., 8:20-26.) This generic sensor, described in a single sentence, purport-
`
`edly allows the device to classify a voice as either a first or second voice. (Id.) When
`
`the voice is classified as a first voice (e.g., nearer than threshold distance), the device
`
`generates a first sentence and outputs it at a first volume. (Id., 9:38-50, 13:62-65.)
`
`When the voice is classified as a second voice (e.g., farther than threshold distance),
`
`the device generates a second sentence that omits some information from the first
`
`sentence and outputs it at a different volume. (Id.; Ex. 1002 ¶47.)
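
For illustration only, the threshold-based behavior described above (classify the voice as a first or second voice by comparing the calculated distance to a threshold, then generate a fuller or abbreviated sentence at a different volume) can be sketched as follows; the threshold value, sentences, and volumes here are hypothetical and are not taken from the ’337 patent.

```python
# Hypothetical sketch of the threshold-based behavior described above; the
# 2.0 m threshold, the sentences, and the volume values are illustrative only.

THRESHOLD_M = 2.0

def classify_voice(distance_m):
    """Classify the user's voice by comparing the calculated distance to a threshold."""
    return "first" if distance_m < THRESHOLD_M else "second"

def generate_response(distance_m):
    if classify_voice(distance_m) == "first":
        # first output sentence: fuller information, output at a first volume
        return ("Today will be rainy after clouds; high 28 C, low 18 C. "
                "Take an umbrella.", 40)
    # second output sentence: omits some information, output at a different volume
    return ("Rain after clouds today.", 50)

print(generate_response(1.0))   # nearby user  -> first sentence, first volume
print(generate_response(3.5))   # distant user -> second sentence, second volume
```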
`
B.   Prosecution History
`
`During prosecution, the applicant distinguished the prior art based on limita-
`
`tions that recite “a proximity sensor configured to calculate a distance between a
`
`user and the voice-content control device” and “a voice classifying unit configured
`
`… to classify the voice as either one of a first voice or a second voice based on the
`
`distance between the user and the voice-content control device.” (Ex. 1007, 41-46.)
`
`-5-
`
`
`
`Case 3:24-cv-00540-MHL Document 49-1 Filed 04/08/25 Page 15 of 89 PageID# 391
`IPR Petition – Patent 11,069,337
`Amazon.com, Inc., et al. v. SoundClear Technologies LLC
`
`Apparently unaware that the prior art disclosed these limitations, the Examiner er-
`
`roneously allowed the claims. (Id., 31.)
`
`IV. RELIEF REQUESTED
`A. Grounds
`
`The Board should cancel the claims on the following Grounds:
`
Ground   Reference(s)                                      Basis   Challenged Claims
1        Shin                                              §102    1-5
2        Shin                                              §103    1-5
3        Shimomura                                         §103    1-5
4        Shimomura and Shin                                §103    1-5
5        Shin and/or Shimomura in view of Kristjansson     §103    1-5

This Petition is supported by the expert declaration of Richard Stern (Exs.

1002, 1006).
`
B.   The References Are Analogous Prior Art.
`
`The ’337 patent’s earliest possible priority date is March 6, 2018. (Ex. 1001.)
`
`Two references relied on herein are prior art under AIA §102(a)(1) and §102(a)(2):
`
`(1)
`
`(2)
`
`Shin published March 23, 2017 (Ex. 1003); and
`
`Shimomura published October 28, 2005 (Ex. 1004).
`
`-6-
`
`
`
`Case 3:24-cv-00540-MHL Document 49-1 Filed 04/08/25 Page 16 of 89 PageID# 392
`IPR Petition – Patent 11,069,337
`Amazon.com, Inc., et al. v. SoundClear Technologies LLC
`
`
`Kristjansson is prior art under AIA §102(a)(2) because it is an issued U.S.
`
`patent that was effectively filed no later than March 30, 2017 and names different
`
`inventors than the ’337 patent (Ex. 1013).
`
`Shimomura published as a Japanese Patent Application. A certified English
`
`translation (Ex. 1004) is relied on herein. (See Ex. 1005.)
`
`Shin, Shimomura, and Kristjansson are analogous art because they are from
`
`the same field as the ’337 patent, e.g., controlling a device’s voice output. (Ex. 1001,
`
`1:29-64, 3:7-11); Unwired Planet, LLC v. Google Inc., 841 F.3d 995, 1000 (Fed.
`
`Cir. 2016). They are also pertinent to a problem the inventors were focused on, e.g.,
`
`adjusting voice output to improve user interaction. (Ex. 1002 ¶25.)
`
`V. LEVEL OF ORDINARY SKILL
`Based on the relevant factors, In re GPAC Inc., 57 F.3d 1573, 1579 (Fed. Cir.
`
`1995), a person of ordinary skill in the art (“POSITA”) would have had a minimum
`
`of a bachelor’s degree in computer engineering, computer science, electrical engi-
`
`neering, or a similar field, and approximately two years of industry or academic ex-
`
`perience in a field related to controlling the audio output of electronic devices. (Ex.
`
`1002 ¶¶30-34.) Work experience could substitute for formal education and addi-
`
`tional formal education could substitute for work experience. (Id.)
`
`-7-
`
`
`
`Case 3:24-cv-00540-MHL Document 49-1 Filed 04/08/25 Page 17 of 89 PageID# 393
`IPR Petition – Patent 11,069,337
`Amazon.com, Inc., et al. v. SoundClear Technologies LLC
`
`VI. CLAIM CONSTRUCTION
`No claim terms require construction to resolve the invalidity challenges here.
`
`Nidec Motor Corp. v. Zhongshan Broad Ocean Motor Co. Ltd., 868 F.3d 1013, 1017
`
`(Fed. Cir. 2017); Vivid Techs., Inc. v. Am. Sci. & Eng’g, Inc., 200 F.3d 795, 803
`
`(Fed. Cir. 1999). For purposes of this proceeding only, Petitioner assumes the claims
`
`are not invalid under §112.
`
`VII. GROUND 1: CLAIMS 1-5 ARE ANTICIPATED BY SHIN.
`Shin anticipates claims 1-5. (Ex. 1002 ¶¶50-106.)
`
A. Claim 1
1.   1[pre]: Voice-Content Control Device
`
`Shin discloses an electronic device that receives a user’s voice input through
`
`a microphone, “generate[s] content corresponding to a result of analyzing the voice
`
`input,” and “provide[s] the generated content as sound through an embedded audio
`
`output module (e.g., a speaker).” (Ex. 1003 ¶[0037]; see also id., Abstract, ¶¶[0003],
`
`[0008]-[0011], [0038]-[0039], [0048]-[0060], [0069]-[0072], claims 1, 15, 20.) For
`
`example, if a user says, “Let me know what time it is now,” Shin’s device may re-
`
`spond with, “The current time is nine ten AM.” (Id. ¶[0037].) The device may be a
`
`smartphone, PC, or home appliance. (Id. ¶[0031].) Figure 1A of Shin shows an
`
`example of device 100:
`
`-8-
`
`
`
`Case 3:24-cv-00540-MHL Document 49-1 Filed 04/08/25 Page 18 of 89 PageID# 394
`IPR Petition – Patent 11,069,337
`Amazon.com, Inc., et al. v. SoundClear Technologies LLC
`
`
`(Id., Fig. 1A.) Thus, Shin discloses a “voice-content control device.” (Ex. 1002
`
`
`
`¶51.)
`
`2.
`
`1[a]: Proximity Sensor
`
`Claim element 1[a] recites “a proximity sensor configured to calculate a dis-
`
`tance between a user and the voice-content control device.” Shin discloses this. (Ex.
`
`1002 ¶¶52-58.)
`
`Shin’s device includes a “distance detection module” that “compute[s] a dis-
`
`tance between a user and the electronic device[.]” (Ex. 1003 ¶[0066]; see id.
`
`¶¶[0053] (“the processor 120 may determine a distance between the user and the
`
`electronic device 101 based on … the distance computed, calculated, or measured
`
`by the distance detection module 180”), [0075], [0077], Fig. 5A; Ex. 1002 ¶53.) The
`
`distance detection module 180 may include “a depth camera like a time-of-flight
`
`(TOF) camera, a stereo camera computing depth information using triangulation, a
`
`charge coupled device (CCD) camera computing a distance through an image pro-
`
`cessing, or the like” and/or “various sensors,” such as “an infra-red sensor, an RF
`
`sensor, an ultrasonic sensor, and the like.” (Ex. 1003 ¶[0066]; see also id. ¶[0042];
`
`-9-
`
`
`
`Case 3:24-cv-00540-MHL Document 49-1 Filed 04/08/25 Page 19 of 89 PageID# 395
`IPR Petition – Patent 11,069,337
`Amazon.com, Inc., et al. v. SoundClear Technologies LLC
`
`Ex. 1002 ¶54.) Shin’s Figure 3 shows the electronic device 101 comprising the dis-
`
`tance detection module 180 (green):
`
[Shin, Figure 3 (annotated)]

(Ex. 1003, Fig. 3, ¶[0045]; id. ¶¶[0053]-[0054], [0077], [0096], claims 5, 9; Ex. 1002 ¶¶55-56.)2
`
`Shin also discloses that the electronic device 101 may use “a proximity sen-
`
`sor.” (Ex. 1003 ¶[0153], Fig. 9 (“proximity sensor” 940G in “sensor module” 940);
`
`see also id. ¶[0143] (device 901 may be included in devices 100, 101); Ex. 1002
`
`¶57.)
`
`
`2 Shin’s electronic device 100 may be implemented with the modules of elec-
`tronic device 101. (Ex. 1003 ¶¶[0041], [0043].)
`
`-10-
`
`
`
`Case 3:24-cv-00540-MHL Document 49-1 Filed 04/08/25 Page 20 of 89 PageID# 396
`IPR Petition – Patent 11,069,337
`Amazon.com, Inc., et al. v. SoundClear Technologies LLC
`
`
`Thus, Shin discloses a proximity sensor configured to calculate a distance be-
`
`tween a user and the voice-content control device. (Ex. 1002 ¶¶52-58.)
`
`3.
`
`1[b]: Voice Classifying Unit
`
`Claim element 1[b] recites “a voice classifying unit configured to analyze a
`
`voice spoken by a user and acquired by a voice acquiring unit to classify the voice
`
`as either one of a first voice or a second voice based on the distance between the user
`
`and the voice-content control device.” Shin discloses this. (Ex. 1002 ¶¶59-66.)
`
`The ’337 patent states that a voice classifying unit may be part of a controller,
`
`which performs processes by reading software/program stored in a storage. (Ex.
`
`1001, 3:50-59, Fig. 2.) This controller can be a central processing unit (CPU). (Id.)
`
`Thus, according to the ’337 patent, the “voice classifying unit” refers to a CPU con-
`
`figured to perform the claimed function, namely, to analyze the user’s voice and to
`
`classify it as a first or second voice. (Ex. 1002 ¶60.)
`
`The ’337 patent further explains that, when a proximity sensor is used for
`
`calculating the user-to-device distance, the distance can be used as a “feature value”
`
`to perform the classification as the first or second voice. (Ex. 1001, 8:20-26.) Spe-
`
`cifically, the unit “sets a threshold of the feature value, and classifies the voice” as
`
`the first or second voice “based on whether the feature value exceeds the threshold.”
`
`(Id., 8:30-34.) Shin discloses the same thing. (Ex. 1002 ¶61.)
`
`-11-
`
`
`
`Case 3:24-cv-00540-MHL Document 49-1 Filed 04/08/25 Page 21 of 89 PageID# 397
`IPR Petition – Patent 11,069,337
`Amazon.com, Inc., et al. v. SoundClear Technologies LLC
`
`
`First, Shin discloses a voice classifying unit that “analyze[s] a voice spoken
`
`by a user and acquired by a voice acquiring unit,” as recited. Specifically, Shin
`
`discloses that the device 101 (orange) includes a processor 120 (blue) and an audio
`
input module 151 (red):

[Shin, Figure 3 (annotated)]
`
`
`
`(Ex. 1003, Fig. 3, ¶[0045]; Ex. 1002 ¶62.) The processor 120, which may be a CPU
`
`(Ex. 1003 ¶[0029]), “analyze[s] a voice input received through [the] audio input
`
`module 151.” (Id. ¶[0048]; see also id., Abstract, ¶¶[0003]-[0011], [0037]-[0039],
`
`[0041], [0060]-[0061], claims 1, 15, 20.) The audio input module 151 can be “im-
`
`plemented with a microphone and the like” to “obtain a user’s speech as a voice
`
`input.” (Id. ¶[0062]; see also id. ¶¶[0037], [0155], [0158].) Thus, Shin discloses a
`
`-12-
`
`
`
`Case 3:24-cv-00540-MHL Document 49-1 Filed 04/08/25 Page 22 of 89 PageID# 398
`IPR Petition – Patent 11,069,337
`Amazon.com, Inc., et al. v. SoundClear Technologies LLC
`
`voice classifying unit (processor) configured to analyze a voice spoken by a user and
`
`acquired by a voice acquiring unit (e.g., audio input module and/or portion of pro-
`
`cessor that receives voice from the audio input module). (Ex. 1002 ¶62.)
`
`Second, Shin discloses that the voice classifying unit (processor) analyzes the
`
`voice to classify it as either a first or second voice based on the user-to-device dis-
`
`tance. (Id. ¶63.) Shin’s processor executes a voice recognition application to pro-
`
`cess the voice input and generates corresponding content for output. (Ex. 1003
`
`¶¶[0047], [0060]; see also id. ¶¶[0037], [0039].) The processor determines the “out-
`
`put scheme,” which includes the content and its volume level, “based on the distance
`
`between the user and the electronic device 101[.]” (Id. ¶¶[0078], [0086]; see also
`
`id. ¶¶[0053], [0077]-[0080], claims 6, 17.) To do so, Shin’s processor classifies the
`
`voice based on the distance. (Ex. 1002 ¶63.)
`
`An example is shown in Shin’s Table 3, which discloses providing a different
`
`amount of information based on the user-to-device distance:
`
`(Ex. 1003, Table 3 (consolidated).)
`
`-13-
`
`
`
`
`
`Case 3:24-cv-00540-MHL Document 49-1 Filed 04/08/25 Page 23 of 89 PageID# 399
`IPR Petition – Patent 11,069,337
`Amazon.com, Inc., et al. v. SoundClear Technologies LLC
`
`
`Thus, for example, Shin discloses classifying the voice as a “first voice” when
`
`the distance is less than 1 meter, and a “second voice” when the distance is greater
`
`than 4 meters:
`
`
`
`(Id.; see also id., Table 1, ¶¶[0051] (content may be “classified dichotomously”),
`
`[0079]-[0081], [0086]-[0087]; Ex. 1002 ¶64.) Any of the distance ranges up to 4
`
`meters would satisfy the claimed “first voice.” (Ex. 1002 ¶65.) For example, a voice
`
`between 3 and 4 meters away could be considered the recited “first voice.” (Id.)
`
`Alternatively, all of the distances less than 4 meters could collectively be considered
`
`a first voice (e.g., a voice less than 4 meters away). (Id.)
`
`Thus, Shin discloses that the voice classifying unit (processor) is configured
`
`to analyze the voice to classify the voice as either one of a first voice (e.g., closer
`
`than 1 meter) or a second voice (e.g., farther than 4 meters) based on the distance
`
`between the user and the device. (Id. ¶¶59-66.)
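
For illustration only, the mapping described above from Shin’s distance ranges to the claimed “first voice” and “second voice” can be sketched as follows; the function name is invented, and only the 1-meter and 4-meter boundaries come from the discussion above.

```python
# Hypothetical sketch of the claim mapping discussed above: Shin's distance
# ranges labeled as the claimed "first voice" / "second voice."

def classify_per_distance(distance_m):
    """Label a detected voice using the distance ranges discussed above."""
    if distance_m < 1.0:
        return "first voice"    # e.g., closer than 1 meter
    if distance_m > 4.0:
        return "second voice"   # e.g., farther than 4 meters
    # Any intermediate range below 4 meters could equally be treated as the
    # "first voice" (e.g., a voice between 3 and 4 meters away).
    return "first voice"

for d in (0.5, 3.2, 5.0):
    print(d, classify_per_distance(d))
```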
`
`-14-
`
`
`
`Case 3:24-cv-00540-MHL Document 49-1 Filed 04/08/25 Page 24 of 89 PageID# 400
`IPR Petition – Patent 11,069,337
`Amazon.com, Inc., et al. v. SoundClear Technologies LLC
`
`
`4.
`
`1[c]: Process Executing Unit
`
`Claim element 1[c] recites “a process executing unit configured to analyze the
`
`voice acquired by the voice acquiring unit to execute processing required by the
`
`user.” The ’337 patent explains that, like the voice classifying unit, a process exe-
`
`cuting unit may be part of a controller that performs processes by reading soft-
`
`ware/program stored in a storage. (Ex. 1001, 3:50-59, Fig. 2.) In the ’337 patent,
`
`the “process executing unit” analyzes the user’s speech (e.g., “How’s the weather
`
`today?”) and obtains the requested information. (Id., 4:12-14, 4:56-61, 7:17-27.)
`
`The patent admits that devices that performed these functions were known (id., 1:21-
`
`28; Ex. 1002 ¶67) and, consequently, this limitation cannot make the claim patenta-
`
`ble. Regardless, Shin discloses this claim element. (Ex. 1002 ¶¶67-70.)
`
`Shin’s processor analyzes the voice acquired by the voice acquiring unit (e.g.,
`
`the “audio input module” and/or portion of processor that receives voice from the
`
`audio input module). (Ex. 1003 ¶¶[0048], [0062].) Shin’s processor executes a
`
`voice recognition application to process the voice input (which requests content in
`
`which the user is interested) and generates corresponding content for output by ana-
`
`lyzing the request. (Supra §VII.A.3; Ex. 1003 ¶¶[0048], [0058]-[0060], [0084],
`
`[0089]; Ex. 1002 ¶68; see also Ex. 1003 ¶¶[0037], [0058], [0067], [0071], [0088],
`
`[0103], [0107], [0120], [0126]-[0127], [0141].)
`
`-15-
`
`
`
`Case 3:24-cv-00540-MHL Document 49-1 Filed 04/08/25 Page 25 of 89 PageID# 401
`IPR Petition – Patent 11,069,337
`Amazon.com, Inc., et al. v. SoundClear Technologies LLC
`
`
`Indeed, Shin discloses analyzing the acquired voice “to execute processing
`
`required by the user” in the same way as the ’337 patent. (Ex. 1002 ¶69.) For ex-
`
`ample, Shin describes analyzing a user’s speech (“Let me know today’s weather”)
`
`to execute processing to obtain the required weather information. (Id.; Ex. 1003
`
`¶[0088].)
`
`Thus, Shin discloses a process executing unit (processor) configured to ana-
`
`lyze the voice acquired by the voice acquiring unit to execute processing required
`
`by the user. (Ex. 1002 ¶¶67-70.)
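
For illustration only, the request handling described above (analyze the recognized utterance and execute the processing the user requires, such as obtaining weather information) can be sketched as follows; the keywords, handler functions, and returned strings are hypothetical and are not drawn from Shin or the ’337 patent.

```python
# Hypothetical sketch of a process-executing step: map the recognized request
# to a handler and return the information the user asked for.
from datetime import datetime

def fetch_weather():
    # Placeholder for a real lookup (e.g., a query to a weather service).
    return "rainy after cloudy, high 28 C, low 18 C, rainfall 10 mm"

def fetch_time():
    return datetime.now().strftime("%I:%M %p")

HANDLERS = {
    "weather": fetch_weather,   # e.g., "Let me know today's weather"
    "time": fetch_time,         # e.g., "Let me know what time it is now"
}

def execute_request(utterance):
    """Run the handler whose keyword appears in the recognized utterance."""
    for keyword, handler in HANDLERS.items():
        if keyword in utterance.lower():
            return handler()
    return "Sorry, the request was not understood."

print(execute_request("Let me know today's weather"))
```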
`
`5.
`
`1[d]: Voice-Content Generating Unit
`
`Claim element 1[d] recites “a voice-content generating unit configured to gen-
`
`erate, based on content of the processing executed by the process executing unit,
`
`output sentence that is text data for a voice to be output to the user.” In the ’337
`
`patent, the “voice-content generating unit” is the portion of the processor (CPU) that
`
`performs the recited function. (Ex. 1001, 3:50-59, Fig. 2.) In the ’337 patent’s
`
`weather example, the “voice-cont