Exhibit 2
UNITED STATES PATENT AND TRADEMARK OFFICE
__________________________________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
__________________________________

AMAZON.COM, INC.,
AMAZON.COM SERVICES LLC, and
AMAZON WEB SERVICES, INC.,
Petitioners,

v.

SOUNDCLEAR TECHNOLOGIES LLC,
Patent Owner.

Case No. IPR2025-00673
Patent No. 11,244,675

PETITION FOR INTER PARTES REVIEW
OF CLAIMS 1-7 OF U.S. PATENT NO. 11,244,675
TABLE OF CONTENTS

I.    INTRODUCTION
II.   BACKGROUND
III.  THE ’675 PATENT
      A. Overview
      B. Prosecution History
IV.   RELIEF REQUESTED
      A. Grounds
      B. Prior Art
V.    LEVEL OF ORDINARY SKILL
VI.   CLAIM CONSTRUCTION
VII.  GROUND 1A: CLAIMS 1-7 WOULD HAVE BEEN OBVIOUS OVER SHIN AND AOYAMA
      A. Claim 1
         1. 1[pre]: Output-Content Control Device
         2. 1[a]: Voice Acquiring Unit
         3. 1[b]: Voice Classifying Unit
            a. 1[b][i]: Calculates Distance
            b. 1[b][ii]: Classifies Voice Based on Distance
         4. 1[c]-1[d]: Intention Analyzing Unit and Notification-Information Acquiring Unit
         5. 1[e]: Output-Content Generating Unit
         6. 1[f]: First Output Sentence
         7. 1[g]: Second Output Sentence
      B. Claim 2
         1. 2[a]: Notification Information
         2. 2[b]: Output-Content Generating Unit
      C. Claim 3
      D. Claim 4
         1. 4[a]: Relationship Information
         2. 4[b]: Output-Content Generating Unit
      E. Claim 5
      F. Claims 6-7
VIII. GROUND 1B: CLAIMS 1-2 AND 5-7 WOULD HAVE BEEN OBVIOUS OVER SHIN AND IWASE
      A. Claim 1
      B. Claim 2
         1. 2[a]: Notification Information
         2. 2[b]: Output-Content Generating Unit
      C. Claims 5-7
IX.   GROUND 2A: CLAIMS 1-7 WOULD HAVE BEEN OBVIOUS OVER SHIMOMURA AND AOYAMA
      A. Claim 1
         1. 1[pre]: Output-Content Control Device
         2. 1[a]: Voice Acquiring Unit
         3. 1[b]: Voice Classifying Unit
            a. 1[b][i]: Calculates Distance
            b. 1[b][ii]: Classifies Voice Based on Distance
         4. 1[c]-1[d]: Intention Analyzing Unit and Notification-Information Acquiring Unit
         5. 1[e]: Output-Content Generating Unit
         6. 1[f]: First Output Sentence
         7. 1[g]: Second Output Sentence
      B. Claims 2-4
      C. Claim 5
      D. Claims 6-7
X.    GROUND 2B: CLAIMS 1-2 AND 5-7 WOULD HAVE BEEN OBVIOUS OVER SHIMOMURA AND IWASE
      A. Claim 1
      B. Claim 2
      C. Claims 5-7
XI.   GROUND 3: CLAIMS 3-4 WOULD HAVE BEEN OBVIOUS OVER THE COMBINATIONS IN GROUNDS 1B OR 2B AND AOYAMA
      A. Claim 3
      B. Claim 4
XII.  GROUND 4: CLAIM 5 WOULD HAVE BEEN OBVIOUS OVER THE COMBINATIONS IN GROUNDS 1A-2B AND SCHUSTER
XIII. GROUND 5: CLAIMS 1-7 WOULD HAVE BEEN OBVIOUS OVER THE COMBINATIONS IN GROUNDS 1A-4 AND KRISTJANSSON
XIV.  SECONDARY CONSIDERATIONS OF NONOBVIOUSNESS
XV.   DISCRETIONARY DENIAL UNDER §314(A) IS NOT APPROPRIATE
      A. Petitioner’s Sotera Stipulation
      B. Compelling Evidence of Unpatentability
XVI.  DISCRETIONARY DENIAL UNDER §325(D) IS NOT APPROPRIATE
XVII. MANDATORY NOTICES, GROUNDS FOR STANDING, AND FEE PAYMENT
      A. Real Party-In-Interest (37 C.F.R. §42.8(b)(1))
      B. Related Matters (37 C.F.R. §42.8(b)(2))
      C. Lead and Backup Counsel (37 C.F.R. §42.8(b)(3))
      D. Service Information (37 C.F.R. §42.8(b)(4))
      E. Grounds for Standing (37 C.F.R. §42.104)
      F. Payment of Fees (37 C.F.R. §42.15(a))
TABLE OF AUTHORITIES

Cases:

Advanced Bionics, LLC v. MED-EL Elektromedizinische Geräte GmbH,
   IPR2019-01469, Paper 6 (P.T.A.B. Feb. 13, 2020)
Apple Inc. v. Telefonaktiebolaget LM Ericsson,
   IPR2022-00457, Paper 7 (P.T.A.B. Sept. 21, 2022)
Endymed Med. Ltd. v. Serendia, LLC,
   IPR2024-00843, Paper 14 (P.T.A.B. Jan. 10, 2025)
JUUL Labs, Inc. v. NJOY, LLC,
   IPR2024-00160, Paper 10 (P.T.A.B. May 24, 2024)
KSR Int’l Co. v. Teleflex Inc.,
   550 U.S. 398 (2007)
Leapfrog Enters. v. Fisher-Price, Inc.,
   485 F.3d 1157 (Fed. Cir. 2007)
Newell Cos. v. Kenney Mfg. Co.,
   864 F.2d 757 (Fed. Cir. 1988)
Nidec Motor Corp. v. Zhongshan Broad Ocean Motor Co. Ltd.,
   868 F.3d 1013 (Fed. Cir. 2017)
Quasar Sci. LLC v. Colt Int’l Clothing, Inc.,
   IPR2023-00611, Paper 10 (P.T.A.B. Oct. 10, 2023)
Shenzen Chic Elecs. v. Pilot, Inc.,
   IPR2023-00810, Paper 12 (P.T.A.B. Nov. 8, 2023)
Sotera Wireless, Inc. v. Masimo Corp.,
   IPR2020-01019, Paper 12 (P.T.A.B. Dec. 1, 2020)
TP-Link Corp. Ltd. v. Netgear, Inc.,
   IPR2023-01469, Paper 10 (P.T.A.B. Apr. 2, 2024)
Vivid Techs., Inc. v. Am. Sci. & Eng’g, Inc.,
   200 F.3d 795 (Fed. Cir. 1999)

Statutes and Rules:

35 U.S.C. §102
35 U.S.C. §103
35 U.S.C. §112
35 U.S.C. §314
35 U.S.C. §325

Miscellaneous:

Katherine K. Vidal, Interim Procedure for Discretionary Denials in AIA
   Post-Grant Proceedings with Parallel District Court Litigation
   (June 21, 2022)
LIST OF EXHIBITS

Exhibit  Description
1001     U.S. Patent No. 11,244,675
1002     Declaration of Richard Stern, Ph.D.
1003     U.S. Patent App. Publ. No. 2017/0083281 (“Shin”)
1004     English Translation of Shimomura from Ex. 1005
1005     Declaration of Gwen Snorteland for Translation of Japanese
         Unexamined Patent App. Publ. 2005/202076 (“Shimomura”)
1006     Curriculum Vitae of Richard Stern, Ph.D.
1007     Excerpts from File History of U.S. Patent No. 11,244,675
1008     Order (Dkt. No. 63), SoundClear Techs., LLC v. Amazon.com, Inc.,
         No. 1:24-cv-01283-AJT-WBP (E.D. Va. Nov. 8, 2024)
1009     U.S. Patent App. Publ. No. 2017/0154626 (“Kim”)
1010     U.S. Patent App. Publ. No. 2017/0337921 (“Aoyama”)
1011     U.S. Patent App. Publ. No. 2016/0284351 (“Ha”)
1012     U.S. Patent No. 9,489,172 (“Iyer”)
1013     U.S. Patent No. 10,147,439 (“Kristjansson”)
1014     Order (Dkt. No. 84), SoundClear Techs., LLC v. Amazon.com, Inc.,
         No. 1:24-cv-01283-AJT-WBP (E.D. Va. Jan. 10, 2025)
1015     Nicolae Duta, Natural Language Understanding and Prediction: From
         Formal Grammars to Large Scale Machine Learning, 131 Fundamenta
         Informaticae 425 (2014) (“Duta”)
1016     U.S. Patent No. 9,680,983 (“Schuster”)
1017     U.S. Patent No. 11,183,167 (“Iwase”)
1018     U.S. Patent App. Publ. No. 2019/0103127 (“Tseretopoulos”)
Amazon.com, Inc., Amazon.com Services LLC, and Amazon Web Services, Inc. (collectively, “Petitioner” or “Amazon”) request inter partes review of claims 1-7 of U.S. Patent No. 11,244,675 (“the ’675 patent”), which SoundClear Technologies LLC (“Patent Owner” or “PO”) purportedly owns.

I. INTRODUCTION

The ’675 patent describes an electronic device (e.g., speaker or phone) that receives and processes a user’s voice input V1 and outputs a response voice V2:

[Figure 1 of the ’675 patent omitted]

(Ex. 1001, Fig. 1, 2:64-3:4.) The patent claims priority to 2018, years after Apple (2011), Amazon (2014), and Google (2016) launched their voice assistants. The patent admits that devices that detect voice, perform processing according to the user’s intent, and provide voice output were known. (Id., 1:20-27.)

The Examiner allowed the claims because they recite calculating the user-to-device distance and tailoring the response based on that distance. (Ex. 1007, 35-36.) But many prior art references disclosed this. Because the ’675 patent claims would have been obvious to those skilled in the art, the Board should cancel the claims.
II. BACKGROUND

Determining a user’s distance from a device and tailoring output based on that distance has been known for decades. (Ex. 1002 ¶¶33-43.) In 2005, Shimomura (filed by Sony) described doing so. (Ex. 1004, Abstract.) Shimomura used cameras to determine the user-to-device distance, classified the user’s speech based on distance (e.g., more or less than 350 cm away), and adjusted the output content accordingly. (Id. ¶¶[0039], [0085].)

In 2017, Shin (filed by Samsung) disclosed a device that uses a “distance detection module” or “proximity sensor” to “compute a distance between a user and the electronic device[.]” (Ex. 1003 ¶¶[0066], [0153].) Shin’s device tailored the output based on that distance. For example, for users within 1 meter, the device output “detailed content”; for users more than 1 meter away, it output “abbreviated content.” (Id. ¶¶[0051], [0088], Table 3.)

Tailoring output by replacing words was also known, as the Examiner acknowledged. (Ex. 1007, 35-36, 63-70, 98-104; see also Ex. 1002 ¶43 (citing Ex. 1018).) In 2017, Aoyama (filed by Sony) disclosed customizing a device’s voice output by replacing words based on the user or situation. (Ex. 1010 ¶¶[0138]-[0140], [0177], [0218].) For example, the device could replace names such as “Taro Yamada” with “Mr. Yamada” in its output. (Id. ¶¶[0138]-[0140]; Ex. 1002 ¶41.) Iwase (filed by Sony) also described a mobile device that tailored voice output by replacing words. (Ex. 1017, 3:26-29, 5:10-13, 15:6-14, Abstract.)
III. THE ’675 PATENT

A. Overview

The ’675 patent describes a device that analyzes a user’s voice and generates a response. (Ex. 1001, Abstract.) Figure 1 shows such a device 1 (orange) that detects the voice V1 of a user H (purple), processes it, and outputs a responsive voice V2:

[Annotated Figure 1 of the ’675 patent omitted]

(Id., Fig. 1, 2:64-3:4; Ex. 1002 ¶44.)1 The device includes various components, such as a voice detecting unit 10 (red) and a controller 16 (blue):

[Annotated Figure 2 of the ’675 patent omitted]

(Ex. 1001, Fig. 2, 3:17-22.) The controller includes “units” for acquiring the voice, analyzing it, processing the request, and generating a response. (Id., 3:55-67.)

The device may include a “proximity sensor” for calculating the distance to the user. (Id., 10:32-38.) This generic sensor, described in a single sentence, purportedly allows the device to classify a voice as either a first voice or a second voice. (Id.) When the voice is classified as a first voice (e.g., beyond a threshold distance), the device may acquire information to generate a first sentence (e.g., “Meeting with Mr. Yoshida from 15 o’clock at Tokyo building”), replace a word (e.g., Mr. Yoshida) with another word (e.g., Yoshi), and output the sentence with the replacement word. (Id., 10:32-38, 10:42-46, 8:55-63, 11:53-12:55, 14:40-43, claims 1, 6-7.) When the voice is classified as the second voice (e.g., within a threshold distance), the device may generate and output a second sentence using the original word. (Id.; id., 11:5-52; Ex. 1002 ¶¶45-46.)

1 Figures and Tables herein may be colored or annotated for clarity.
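In substance, the claimed behavior reduces to a threshold comparison and a word substitution. The minimal sketch below (Python) is purely illustrative: the ’675 patent discloses no source code, and the threshold value and function names here are invented, with the example sentence and replacement word taken from the patent’s own example.

# Illustrative sketch only; the ’675 patent discloses no code.
# THRESHOLD and all names are invented for illustration.
THRESHOLD = 1.0  # assumed threshold distance in meters

def classify_voice(distance: float) -> str:
    """Classify the acquired voice by user-to-device distance (claim 1[b])."""
    return "first" if distance > THRESHOLD else "second"

def generate_output(notification: str, voice_class: str) -> str:
    """For a first voice, replace a word before output (claims 1[f]);
    for a second voice, output the sentence with the original word (1[g])."""
    if voice_class == "first":
        return notification.replace("Mr. Yoshida", "Yoshi")
    return notification

sentence = "Meeting with Mr. Yoshida from 15 o'clock at Tokyo building"
print(generate_output(sentence, classify_voice(2.5)))  # far  -> "Yoshi"
print(generate_output(sentence, classify_voice(0.4)))  # near -> "Mr. Yoshida"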
B. Prosecution History

During prosecution, the applicant distinguished the prior art based on limitations that recite calculating the user-to-device distance and classifying the voice based on distance. (Ex. 1007, 47, 49-50.) Apparently unaware that prior art disclosed these limitations, the Examiner erroneously allowed the claims. (Id., 35-36.)
IV. RELIEF REQUESTED

A. Grounds

The Board should cancel the claims under 35 U.S.C. §103 on the following Grounds:

Ground  Reference(s)                   Challenged Claims
1a      Shin and Aoyama                1-7
1b      Shin and Iwase                 1-2, 5-7
2a      Shimomura and Aoyama           1-7
2b      Shimomura and Iwase            1-2, 5-7
3       Grounds 1b or 2b and Aoyama    3-4
4       Grounds 1a-2b and Schuster     5
5       Grounds 1a-4 and Kristjansson  1-7

This Petition is supported by the expert declaration of Richard Stern (Exs. 1002, 1006).
B. Prior Art

The ’675 patent’s earliest possible priority date is March 12, 2018. (Ex. 1001.) The references relied on herein are prior art for the following reasons:

Ex.   Reference     Date                          Art Type
1003  Shin          Filed: September 19, 2016     §102(a)(2)
                    Published: March 23, 2017     §102(a)(1)
1004  Shimomura     Published: October 28, 2005   §102(a)(1)
1010  Aoyama        Filed: November 26, 2015      §102(a)(2)
                    Published: November 23, 2017  §102(a)(1)
1013  Kristjansson  Filed: March 30, 2017         §102(a)(2)
1016  Schuster      Filed: June 16, 2016          §102(a)(2)
                    Published: June 13, 2017      §102(a)(1)
1017  Iwase         Filed: December 25, 2017      §102(a)(2)

Shimomura published as a Japanese Patent Application. A certified English translation (Ex. 1004) is relied on herein. (See Ex. 1005.)

The references above are analogous art because they are from the same field as the ’675 patent, e.g., controlling a device’s voice output. (Ex. 1001, 1:28-61, 3:10-16.) They are also pertinent to a problem the inventors were focused on, e.g., adjusting voice output to improve user interaction. (Ex. 1002 ¶23.)
V. LEVEL OF ORDINARY SKILL

A person of ordinary skill in the art (“POSITA”) would have had a minimum of a bachelor’s degree in computer engineering, computer science, electrical engineering, or a similar field, and approximately two years of industry or academic experience in a field related to controlling the audio output of electronic devices. (Id. ¶¶28-32.) Work experience could substitute for formal education, and additional formal education could substitute for work experience. (Id.)

VI. CLAIM CONSTRUCTION

No claim terms require construction to resolve the invalidity challenges here. Nidec Motor Corp. v. Zhongshan Broad Ocean Motor Co. Ltd., 868 F.3d 1013, 1017 (Fed. Cir. 2017); Vivid Techs., Inc. v. Am. Sci. & Eng’g, Inc., 200 F.3d 795, 803 (Fed. Cir. 1999). For purposes of this proceeding only, Petitioner assumes the claims are not invalid under §112.
VII. GROUND 1A: CLAIMS 1-7 WOULD HAVE BEEN OBVIOUS OVER SHIN AND AOYAMA.

Shin and Aoyama render obvious claims 1-7. (Ex. 1002 ¶¶48-141.)

A. Claim 1

1. 1[pre]: Output-Content Control Device

Shin discloses a device that receives a user’s voice input, “generate[s] content corresponding to a result of analyzing the voice input,” and “provide[s] the generated content as sound[.]” (Ex. 1003 ¶[0037]; id., Abstract, ¶¶[0003], [0008]-[0011], [0038]-[0039], [0048]-[0060], [0069]-[0072], claims 1, 15, 20.) For example, if a user says, “Let me know what time it is now,” Shin’s device may respond with, “The current time is nine ten AM.” (Id. ¶[0037].) The device may be a smartphone, PC, or home appliance. (Id. ¶[0031].) Figure 1A shows an example of device 100:

[Shin’s Figure 1A omitted]

(Id., Fig. 1A.) The device controls the output content. (Infra §§VII.A.5-VII.A.7.) Thus, Shin discloses an “output-content control device.” (Ex. 1002 ¶49.)
2. 1[a]: Voice Acquiring Unit

Claim element 1[a] recites “a voice acquiring unit configured to acquire a voice spoken by a user.” The ’675 patent states that a voice acquiring unit may be part of a controller, which can be a central processing unit (CPU). (Ex. 1001, 3:55-67, Fig. 2.) Thus, the “voice acquiring unit” refers to a CPU configured to perform the claimed function, namely, to acquire a voice spoken by a user. (Ex. 1002 ¶51.)

Shin discloses this limitation. (Id. ¶¶50-53.) Shin’s device 101 (orange) includes a processor 120 (blue) and an audio input module 151 (red):

[Annotated Figure 3 of Shin omitted]

(Ex. 1003, Fig. 3, ¶[0045].) Processor 120, which may be a CPU (id. ¶[0029]), “analyze[s] a voice input received through [the] audio input module 151.” (Id. ¶[0048]; id., Abstract, ¶¶[0003]-[0011], [0037]-[0039], [0041], [0060]-[0061], claims 1, 15, 20.) The audio input module 151 can be “implemented with a microphone” to “obtain a user’s speech as a voice input.” (Id. ¶[0062]; id. ¶¶[0037], [0155], [0158].)

Thus, Shin discloses a voice acquiring unit (e.g., audio input module and/or portion of processor that receives speech) configured to acquire a voice spoken by a user. (Ex. 1002 ¶¶50-53.)
3. 1[b]: Voice Classifying Unit

Claim element 1[b] recites “a voice classifying unit configured to [i] calculate a distance between the user and the output-content control device by a proximity sensor to [ii] classify the voice into either a first voice or a second voice based on the calculated distance.” The “voice classifying unit” refers to a CPU configured to perform the claimed function. (See Ex. 1001, 3:55-67, Fig. 2; Ex. 1002 ¶55.) Shin discloses this claim element. (Ex. 1002 ¶¶54-70.)

a. 1[b][i]: Calculates Distance

Shin’s processor “compute[s] the distance between the user and the electronic device[.]” (Ex. 1003 ¶[0066].) This may be performed based on image data obtained from a distance detection module. (Id.; see also id. ¶¶[0053] (“processor 120 may determine a distance between the user and the electronic device 101 based on … the distance computed, calculated, or measured by the distance detection module 180”), [0075], [0077], Fig. 5A.) The module may include a proximity sensor, such as “a depth camera” and/or “various sensors,” such as “an infra-red sensor, an RF sensor, an ultrasonic sensor, and the like.” (Id. ¶[0066]; id. ¶[0042].) Shin’s Figure 3 shows the device 101 comprising processor 120 (blue) and distance detection module 180 (green):

[Annotated Figure 3 of Shin omitted]

(Id., Fig. 3, ¶[0045]; id. ¶¶[0053]-[0054], [0077], [0096], claims 5, 9; Ex. 1002 ¶56.)2

Shin also expressly discloses that the electronic device 101 may use “a proximity sensor.” (Ex. 1003 ¶[0153], Fig. 9 (“proximity sensor” 940G in “sensor module” 940), ¶[0143] (device 901 may be included in devices 100, 101); Ex. 1002 ¶57.)

Thus, Shin discloses a voice classifying unit (processor) configured to calculate a distance between the user and the output-content control device by a proximity sensor. (Ex. 1002 ¶¶56-58.)

2 Shin’s electronic device 100 may be implemented with the modules of electronic device 101. (Ex. 1003 ¶¶[0041], [0043].)
b. 1[b][ii]: Classifies Voice Based on Distance

The ’675 patent explains that, when a proximity sensor is used for calculating user-to-device distance, the distance can be used as a “feature value” to perform the classification as the first or second voice. (Ex. 1001, 10:32-38.) Specifically, the voice classifying unit “sets a threshold of the feature value, and classifies the voice” as the first or second voice “based on whether the feature value exceeds the threshold.” (Id., 10:42-46.) Shin discloses the same approach.

Shin’s processor analyzes the voice to classify it as either a first or second voice based on the user-to-device distance. (Ex. 1002 ¶60.) Shin’s processor executes a speech recognition application to process the speech input and generates corresponding output content. (Ex. 1003 ¶¶[0047], [0060]; id. ¶¶[0037], [0039].) The processor determines the “output scheme,” including the content, “based on the distance between the user and the electronic device 101[.]” (Id. ¶¶[0078], [0086]; id. ¶¶[0053], [0077], claim 17.) To do so, Shin’s processor classifies the voice based on the distance. (Ex. 1002 ¶60.)

As shown in Table 3, Shin’s device provides a different amount of information based on the user-to-device distance:

[Shin’s Table 3 (consolidated) omitted]

(Ex. 1003, Table 3 (consolidated).) Shin thus discloses classifying the voice as a “first voice” when the distance is between 1 and 2 meters, and a “second voice” when the distance is less than 1 meter:

[Annotated excerpt of Shin’s Table 3 omitted]

(Id.; id. ¶¶[0051] (content may be “classified dichotomously”), [0086]-[0087]; Ex. 1002 ¶61.) Any distance range beyond 1 meter would satisfy the claimed “first voice.” (Ex. 1002 ¶61.) Alternatively, all distances over 1 meter could collectively be a first voice (e.g., a voice greater than 1 meter away). (Id.)

Thus, Shin discloses that the voice classifying unit (processor) is configured to classify the voice as either a first voice (e.g., 1-2 meters) or a second voice (e.g., <1 meter) based on the calculated distance. (Ex. 1002 ¶¶59-62.)
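For illustration only, the dichotomous mapping reads onto the claim as in the following sketch (Python; hypothetical, with the 1-meter boundary and the “detailed”/“abbreviated” labels drawn from the characterization of Shin’s Table 3 above, and all function names invented):

# Hypothetical sketch of Shin's distance-based output scheme (Table 3),
# mapped onto the claimed "first voice"/"second voice" classification.

def classify_voice(distance_m: float) -> str:
    # Claimed classification: "second voice" under 1 meter, "first voice" beyond.
    return "second" if distance_m < 1.0 else "first"

def amount_of_information(distance_m: float) -> str:
    # Shin's scheme as characterized above: detailed content for near users,
    # abbreviated content for far users.
    return "detailed content" if distance_m < 1.0 else "abbreviated content"

for d in (0.5, 1.5):
    print(f"{d} m -> {classify_voice(d)} voice, {amount_of_information(d)}")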
Even if PO were to argue that this claim element requires no more than two classifications, Shin would render this obvious. (Id. ¶¶63-70.) A POSITA would have been motivated to implement Shin’s amount-of-information table (Table 3) with only two distance ranges for several reasons.

First, Shin suggests it. Shin discloses dividing content into two classifications. (Ex. 1003 ¶[0051].) Consequently, a POSITA would have been motivated to also use two distance ranges, one for each classification. (Ex. 1002 ¶64.)

Second, Shin is “not limited to the example[] of … Table 3.” (Ex. 1003 ¶[0090].) This flexibility would have motivated a POSITA to implement a simpler, two-classification system—such as “near” and “far”—sufficient for applications requiring only basic distinctions. (Ex. 1002 ¶65.)

Third, simplifying complex systems into fewer categories is a routine and well-established approach to reduce computational load and improve efficiency. (Id. ¶66); KSR Int’l Co. v. Teleflex Inc., 550 U.S. 398, 417 (2007). Binary classification (having two categories) is a standard approach in many fields, including voice-controlled devices (see, e.g., Ex. 1004, Figs. 7, 12), to simplify user experience. (Ex. 1002 ¶66.) Many consumer devices adopt binary distinctions, such as “low” and “high” or “near” and “far.” (Id.) A POSITA would have recognized binary classification as an efficient, user-friendly solution for Shin’s needs. (Id.)

Fourth, such a modification reflects merely applying a known technique (binary classification) to achieve predictable results (content output adjusted based on “near” or “far” proximity classifications). (Id. ¶67); KSR, 550 U.S. at 417.

Fifth, such a modification reflects applying a known technique (binary classification) to a known device (Shin’s) that is ready for improvement and yields predictable results (adjustments to content detail based on proximity threshold). (Ex. 1002 ¶68); KSR, 550 U.S. at 417.

A POSITA would have reasonably expected success in modifying Shin in this way because doing so would have been trivial, involving configuring Shin’s processor with revised tables for two proximity-based categories. (Id. ¶69.)
4. 1[c]-1[d]: Intention Analyzing Unit and Notification-Information Acquiring Unit

Claim element 1[c] recites “an intention analyzing unit configured to analyze the voice acquired by the voice acquiring unit to detect intention information indicating what kind of information is wished to be acquired by the user.” Claim element 1[d] recites “a notification-information acquiring unit configured to acquire notification information which includes content information as a content information to be notified to the user based on the intention information.”

The intention analyzing unit and notification-information acquiring unit may be parts of a controller or CPU. (Ex. 1001, 3:55-67, Fig. 2.) The “intention analyzing unit” may use text data from a voice analyzing unit (e.g., “today’s schedule is”) and determine what information the user wishes to acquire (e.g., Mr. Yamada’s schedule). (Id., 4:15-5:57, 9:22-24.) Then, the “notification-information acquiring unit” obtains notification information (e.g., meeting, Mr. Yoshida, 15 o’clock, Tokyo building) based on that desired information. (Id., 5:4-6:23.) The ’675 patent admits that performing these functions was known (id., 1:20-27; Ex. 1002 ¶72) and, consequently, these limitations cannot make the claim patentable. Regardless, Shin discloses and renders obvious these claim elements. (Ex. 1002 ¶¶71-79.)

Shin’s processor analyzes the voice acquired by the voice acquiring unit. (Ex. 1003 ¶¶[0048], [0062].) Shin’s processor executes a speech recognition application to process the speech input and generate corresponding content for output by analyzing the request. (Id. ¶¶[0048], [0058]-[0060], [0084], [0089], [0037], [0058], [0067], [0071], [0088], [0103], [0107], [0120], [0126]-[0127], [0141]; Ex. 1002 ¶73.)

Shin discloses analyzing the acquired voice “to detect intention information” and acquiring “notification information … based on the intention information” in the same way as the ’675 patent. (Ex. 1002 ¶74.) For example, Shin describes analyzing a user’s speech, such as “Let me know today’s weather,” to determine intention information (e.g., weather) and then acquiring notification information (e
