`571-272-7822
`
`Paper 37
`Date: September 9, 2021
`
`
`UNITED STATES PATENT AND TRADEMARK OFFICE
`____________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`____________
`
`APPLE INC.,
`Petitioner,
`
`v.
`
`PARUS HOLDINGS, INC.,
`Patent Owner.
`____________
`
`IPR2020-00686
`Patent 7,076,431 B2
`____________
`
`
`
`Before DAVID C. MCKONE, STACEY G. WHITE, and
`SHELDON M. MCGEE, Administrative Patent Judges.
`
`MCKONE, Administrative Patent Judge.
`
`
`
`
`JUDGMENT
`Final Written Decision
`Determining No Challenged Claims Unpatentable
`35 U.S.C. § 318(a)
`Denying Patent Owner’s Motion to Exclude
`37 C.F.R. § 42.64
`
`
`
`
`
`
`
`
I. INTRODUCTION
A. Background and Summary
`Apple Inc. (“Petitioner”) filed a Petition requesting an inter partes
`review of claims 1–7, 9, 10, 13, 14, 18–21, and 25–30 of U.S. Patent
`No. 7,076,431 B2 (Ex. 1001, “the ’431 patent”). Paper 1 (“Pet.”). Parus
`Holdings, Inc. (“Patent Owner”) filed a Preliminary Response to the
`Petition. Paper 6 (“Prelim. Resp.”). Pursuant to 35 U.S.C. § 314, we
`instituted this proceeding. Paper 9 (“Dec.”).
`Patent Owner filed a Patent Owner’s Response (Paper 15, “PO
`Resp.”), Petitioner filed a Reply to the Patent Owner’s Response (Paper 19,
`“Reply”), and Patent Owner filed a Sur-reply to the Reply (Paper 21, “Sur-
`reply”). Patent Owner filed a Motion to Exclude certain evidence submitted
`by Petitioner (Paper 29, “Mot. Excl.”), to which Petitioner filed an
`Opposition (Paper 30, “Opp. Mot. Excl.”). Patent Owner filed a Reply to
`Petitioner’s Opposition to its Motion to Exclude (styled a “Sur-reply”).
`Paper 32 (“Reply Mot. Excl.”). An oral argument was held in this
`proceeding and IPR2020-00687 on June 22, 2021. Paper 36 (“Tr.”).
`We have jurisdiction under 35 U.S.C. § 6. This Decision is a final
`written decision under 35 U.S.C. § 318(a) as to the patentability of claims 1–
`7, 9, 10, 13, 14, 18–21, and 25–30. Based on the record before us, Petitioner
`has not proved, by a preponderance of the evidence, that claims 1–7, 9, 10,
`13, 14, 18–21, and 25–30 are unpatentable.
`We also deny Patent Owner’s Motion to Exclude.
`
`
`
B. Related Matters
`The parties identify the following district court proceedings as related
`to the ’431 patent: Parus Holdings Inc. v. Apple, Inc., No. 6:19-cv-00432
`
`(W.D. Tex.) (“the Texas case”); Parus Holdings Inc. v. Amazon.com, Inc.,
`No. 6:19-cv-00454 (W.D. Tex.); Parus Holdings Inc. v. Samsung
`Electronics Co., Ltd., No. 6:19-cv-00438 (W.D. Tex.); Parus Holdings Inc.
`v. Google LLC, No. 6:19-cv-00433 (W.D. Tex.); and Parus Holdings Inc. v.
`LG Electronics, Inc., No. 6:19-cv-00437 (W.D. Tex.). Pet. 72; Paper 5, 1.
`The parties also identify U.S. Patent No. 6,721,705 and U.S. Patent
`No. 9,451,084 as related to the ’431 patent, and further identify that U.S.
`Patent No. 9,451,084 has been asserted in the district court proceedings
`listed above, and is the subject of IPR2020-00687. Pet. 72; Paper 5, 1.
`
C. The ’431 Patent
`The ’431 patent describes a system that allows users to browse web
`sites and retrieve information using conversational voice commands.
`Ex. 1001, 1:20–23. Figure 1, reproduced below, illustrates an example:
`
`
`
`Figure 1 is a block diagram of a voice browsing system. Id. at 4:16–17.
`Figure 3, reproduced below, shows additional details of media server 106, a
`component shown in Figure 1:
`
`
`Figure 3 is a block diagram of Figure 1’s media server 106. Id. at 4:20–21.
`
`Media server 106 includes speech recognition engine 300, speech
`synthesis engine 302, Interactive Voice Response (IVR) application 304, call
`processing system 306, and telephony and voice hardware 308 to
communicate with Public Switched Telephone Network (PSTN) 116. Id. at
5:62–6:1. When a user speaks into voice enabled device 112 (e.g., a wireline
or wireless telephone), speech recognition engine 300 converts voice
commands into data messages. Id. at 6:4–8. Media server 106 uses results
(e.g., keywords) generated by speech recognition engine 300 to retrieve web
site record 200 stored in database 100 that can provide the information
requested by the user. Id. at 6:44–50. Media server 106 selects the web site
record of highest rank and transmits it to web browsing server 102 along
with an identifier indicating what information is being requested. Id. at
6:52–56. Speech synthesis engine 302 converts the data retrieved by web
browsing server 102 into audio messages that are transmitted to voice
enabled device 112. Id. at 6:57–60.
`According to the ’431 patent, with its system,
`[u]sers are not required to learn a special language or command
`set in order to communicate with the voice browsing system of
`the present invention. Common and ordinary commands and
`phrases are all that is required for a user to operate the voice
`browsing system. The voice browsing system recognizes
`naturally spoken voice commands and is speaker-independent;
`it does not have to be trained to recognize the voice patterns of
`each individual user. Such speech recognition systems use
`phonemes to recognize spoken words and not predefined voice
`patterns.
`Id. at 4:34–43.
`
`Claim 1, reproduced below, is illustrative of the invention:1
`1. A system for retrieving information from pre-selected
`web sites by uttering speech commands into a voice enabled
`device and for providing to users retrieved information in an
`audio form via said voice enabled device, said system
`comprising:
`[a] a computer, said computer operatively connected to
`the internet;
`[b] a voice enabled device operatively connected to said
`computer, said voice enabled device configured to
`receive speech commands from users;
`[c] at least one speaker-independent speech recognition
`device, said speaker-independent speech
`recognition device operatively connected to said
`computer and to said voice enabled device;
`[d] at least one speech synthesis device, said speech
`synthesis device operatively connected to said
`computer and to said voice enabled device;
`[e] at least one instruction set for identifying said
`information to be retrieved, said instruction set
`being associated with said computer, said
`instruction set comprising:
`a plurality of pre-selected web site addresses, each
`said web site address identifying a web site
`containing said information to be retrieved;
`[f] at least one recognition grammar associated with said
`computer, each said recognition grammar
`corresponding to each said instruction set and
`corresponding to a speech command;
`[g] said speech command comprising an information
`request selectable by the user;
`
`
[h] said speaker-independent speech recognition device
configured to receive from users via said voice
enabled device said speech command and to select
the corresponding recognition grammar upon
receiving said speech command;
[i] said computer configured to retrieve said instruction
set corresponding to said recognition grammar
selected by said speaker-independent speech
recognition device;
[j] said computer further configured to access at least one
of said plurality of web sites identified by said
instruction set to obtain said information to be
retrieved,
[k] said computer configured to first access said first web
site of said plurality of web sites and, if said
information to be retrieved is not found at said first
web site, said computer configured to sequentially
access said plurality of web sites until said
information to be retrieved is found or until said
plurality of web sites has been accessed;
[l] said speech synthesis device configured to produce an
audio message containing any retrieved
information from said pre-selected web sites, and
said speech synthesis device further configured to
transmit said audio message to said users via said
voice enabled device.

1 For consistency with the parties’ arguments, we add bracketed lettering to
track the lettering supplied by Petitioner. See Pet. 74–79 (Claims Listing
Appendix).
`
`
D. Evidence
`Petitioner relies on the references listed below.
Reference                      Date             Exhibit No.
Ladd (US 6,269,336 B1)         July 31, 2001    1004
Kurosawa2 (JP H9-311869 A)     Dec. 2, 1997     1005
Goedken (US 6,393,423 B1)      May 21, 2002     1006
Madnick (US 5,913,214)         June 15, 1999    1007
Houser (US 5,774,859)          June 30, 1998    1008
Rutledge (US 6,650,998 B1)     Nov. 18, 2003    1010
`
`
`Petitioner also relies on the Declaration of Loren Terveen, Ph.D.
`(Ex. 1003) and the Supplemental Declaration of Dr. Terveen (Ex. 1040).
`Patent Owner relies on the Declaration of Benedict Occhiogrosso
`(Ex. 2025).
`
`
E. The Instituted Grounds of Unpatentability
Claims Challenged                     35 U.S.C. §   References
1–6, 9, 10, 13, 14, 18, 20, 21, 25    103(a)3       Ladd, Kurosawa, Goedken
7, 19, 26–30                          103(a)        Ladd, Kurosawa, Goedken, Madnick
5, 6                                  103(a)        Ladd, Kurosawa, Goedken, Houser
9, 25                                 103(a)        Ladd, Kurosawa, Goedken, Rutledge
`
`
`2 We rely on the certified translation of JP H09-311869 (Ex. 1005).
`3 The Leahy-Smith America Invents Act (“AIA”), Pub. L. No. 112-29, 125
`Stat. 284, 287–88 (2011), amended 35 U.S.C. § 103. Because the ’431
`patent was filed before March 16, 2013, the effective date of the relevant
`amendment, the pre-AIA version of § 103 applies.
`
`
`II. ANALYSIS
A. Claim Construction
`For petitions filed after November 13, 2018, we construe claims
`“using the same claim construction standard that would be used to construe
`the claim in a civil action under 35 U.S.C. 282(b), including construing the
`claim in accordance with the ordinary and customary meaning of such claim
`as understood by one of ordinary skill in the art and the prosecution history
`pertaining to the patent.” 37 C.F.R. § 42.100(b) (2019); see also Phillips v.
`AWH Corp., 415 F.3d 1303 (Fed. Cir. 2005) (en banc).
`In the Petition, Petitioner contended that we should give the claim
`terms their plain and ordinary meaning, and did not identify any claim term
`for construction. Pet. 11.
`In the Institution Decision, we made clear that the plain and ordinary
`meaning of “instruction set,” as recited in each of the independent claims,
`does not require “[a] set of machine language instructions that a processor
`executes,” rejecting Patent Owner’s arguments to the contrary. Dec. 22–23;
`Prelim. Resp. 48.
`After the pre-institution briefing was completed, but before we issued
`the Institution Decision, the court in the Texas case issued a claim
`construction ruling, construing “speaker-independent speech recognition
`device” to mean “speech recognition device that recognizes spoken words
`without adapting to individual speakers or using predefined voice patterns.”
Ex. 1041, 2.4 The parties agree that this term at least requires a “speech
recognition device that recognizes spoken words without . . . using
predefined voice patterns,” but disagree as to whether it should require a
device that recognizes spoken words “without adapting to individual
speakers.” PO Resp. 21 (“The proper construction of ‘speaker-independent
speech recognition device’ is consistent with the construction issued by the
Western District of Texas, though it does not include all of that court’s
construction, and requires at least ‘speech recognition device that recognizes
spoken words without using predefined voice patterns.’”); Reply 2 (“For
purposes of this IPR, Apple submits the Court’s construction should be
applied.”).

4 The court in the Texas case issued other constructions pertaining to the
challenged claims, but the parties do not advance them in this proceeding
and we do not find it necessary to adopt them in order to resolve the parties’
dispute.
`The dispute as to whether the term should preclude adapting to
`individual speakers does not impact any issue in this proceeding, and
`Petitioner has agreed to Patent Owner’s construction in this proceeding, so
`long as we do not resolve the dispute over adapting to individual speakers.
`Tr. 12:24–13:4 (“JUDGE McKONE: So you’d be happy if we essentially
`adopted Parus’s construction with a footnote or some kind of note that we’re
`not resolving the issue of adapting to individual speakers? MS. BAILEY:
`That would be fine for purposes of this IPR, Your Honor.”). We adopt the
`parties’ agreed approach. For purposes of this proceeding, “speaker-
`independent speech recognition device” means “speech recognition device
`that recognizes spoken words without using predefined voice patterns.” This
`is consistent with the ’431 patent’s statement (relied on by both parties) that
`“[t]he voice browsing system recognizes naturally spoken voice commands
`and is speaker-independent; it does not have to be trained to recognize the
`voice patterns of each individual user. Such speech recognition systems use
`phonemes to recognize spoken words and not predefined voice patterns.”
`Ex. 1001, 4:38–43; see also PO Resp. 21–22 (citing Ex. 1001, 4:34–43);
`
`Reply 2–3 (citing Ex. 1001, 4:38–43). We take no position on whether the
`construction also should include “without adapting to individual speakers.”
`Based on the record before us, we do not find it necessary to provide
`express claim constructions for any other terms. See Nidec Motor Corp. v.
`Zhongshan Broad Ocean Motor Co., 868 F.3d 1013, 1017 (Fed. Cir. 2017)
`(noting that “we need only construe terms ‘that are in controversy, and only
`to the extent necessary to resolve the controversy’”) (quoting Vivid Techs.,
`Inc. v. Am. Sci. & Eng’g, Inc., 200 F.3d 795, 803 (Fed. Cir. 1999)).
`
`B. Obviousness of Claims 1–6, 9, 10, 13, 14, 18, 20, 21, and 25
`over Ladd, Kurosawa, and Goedken
`Petitioner contends that claims 1–6, 9, 10, 13, 14, 18, 20, 21, and 25
`would have been obvious over Ladd, Kurosawa, and Goedken. Pet. 17–61.
`For the reasons given below, Petitioner has not shown obviousness by a
`preponderance of the evidence.
`A claim is unpatentable under 35 U.S.C. § 103 (pre-AIA) if the
`differences between the claimed subject matter and the prior art are “such
`that the subject matter as a whole would have been obvious at the time the
`invention was made to a person having ordinary skill in the art to which said
`subject matter pertains.” We resolve the question of obviousness on the
`basis of underlying factual determinations, including (1) the scope and
`content of the prior art; (2) any differences between the claimed subject
`matter and the prior art; (3) the level of skill in the art; and (4) objective
evidence of nonobviousness, i.e., secondary considerations.5 See Graham v.
`John Deere Co., 383 U.S. 1, 17–18 (1966).
`
`5 The record does not include allegations or evidence of objective indicia of
`nonobviousness or obviousness.
`
1. Level of Skill in the Art
`Petitioner, relying on the testimony of Dr. Terveen, contends that a
`person of ordinary skill in the art “would have had a Bachelor’s degree in
`Electrical Engineering, Computer Engineering, Computer Science, or
`equivalent degree, with at least two years of experience in interactive voice
`response systems, automated information retrieval systems, or related
`technologies, such as web-based information retrieval systems.” Pet. 6
`(citing Ex. 1003 ¶ 28). Patent Owner does not contest Petitioner’s proposal
`or offer an alternative. Also, neither party argues that the outcome of this
`case would differ based on our adoption of any particular definition of one
`of ordinary skill in the art. Petitioner’s proposal is consistent with the
`technology described in the Specification and the cited prior art. On the
`complete record, we adopt Petitioner’s proposed level of skill.
`
`
2. Scope and Content of the Prior Art
`a) Overview of Ladd
`Ladd describes a voice browser for allowing a user to access
`information from an information source. Ex. 1004, 1:20–25. Figure 3,
`reproduced below, illustrates an example:
`
`
`
`
`Figure 3 is a block diagram of a system that enables a user to access
`information. Id. at 4:62–64.
`A user accesses electronic network 206 by dialing a telephone number
`from communication device 202 (e.g., a landline or wireline device, or a
`wireless device). Id. at 5:20–23, 5:29–35. Communication node 212
`answers the incoming call from carrier network 216 and plays an
`announcement to the user. Id. at 6:13–17. In response to audio inputs from
`the user, communication node 212 retrieves information from content
providers 208 and 209. Id. at 6:17–21. For example, voice recognition unit
(VRU) client 232 generates pre-recorded voice announcements and
`messages to prompt the user to provide inputs using speech commands.
`Id. at 7:48–51. VRU client 232 receives and processes speech
`communications and routes them to VRU server 234, which processes the
`communications and compares them to a vocabulary or grammar stored in
`database server unit 244. Id. at 8:3–9, 8:55–61.
`
`According to Ladd,
`The ASR [automatic speech recognition] unit 254 of the VRU
`server 234 provides speaker independent automatic speech
`recognition of speech inputs or communications from the
`user. . . . The ASR unit 254 processes the speech inputs from
`the user to determine whether a word or a speech pattern
`matches any of the grammars or vocabulary stored in the
`database server unit 244 or downloaded from the voice browser.
`When the ASR unit 254 identifies a selected speech pattern of
`the speech inputs, the ASR unit 254 sends an output signal to
`implement the specific function associated with the recognized
`voice pattern. The ASR unit 254 is preferably a speaker
`independent speech recognition software package, Model No.
`RecServer, available from Nuance Communications. It is
`contemplated that the ASR unit 254 can be any suitable speech
`recognition unit to detect voice communications from a user.
`Id. at 9:27–44.
`After receiving information from content providers 208, 209,
`communication node 212 provides a response to the user based on the
`retrieved information. Id. at 6:21–24. Specifically, text-to-speech (TTS)
`unit 252 of VRU server 234 receives textual data (e.g., web pages) from
`application server unit 242, processes the textual data to voice data, and
`provides the voice data to VRU client 232, which reads or plays the voice
`data to the user. Id. at 9:1–23.
`
`
`b) Overview of Kurosawa
`Kurosawa describes an Internet search server that obtains requested
`information from a plurality of URLs, and delivers a search report to a
`client. Ex. 1005, Abst. Figure 2 of Kurosawa, reproduced below, illustrates
`an example:
`
`
`
`Figure 2 is a functional block diagram of an Internet search server. Id. ¶ 20.
`Internet search server 10 includes URL database 11, which has a
`comparison table (URL table 22, shown in Figure 6) that compares a
`plurality of keywords representing search condition elements to URLs that
`relate to the keywords. Id. ¶¶ 20–21. Figure 5 is reproduced below:
`
`
`
`Figure 5 is a picture of keyword table 21 in URL database 11. Id. ¶ 21.
`According to Kurosawa, “anything that is not listed in the keyword table 21
`cannot be searched for.” Id. Figure 6 is reproduced below:
`
`
`
`Figure 6 is a picture of URL table 22 in URL database 11. Id. ¶ 21. Search
`server 10 regularly updates URL table 22 in URL database 11 using
`automatic search tools, such as Internet web crawlers. Id. ¶ 23.
`When a client sends a search request to Internet search server 10,
`search condition element extraction unit 13 extracts search condition
`elements from the client’s search request, and URL search unit 14 extracts
`keywords (included in the search condition elements) that match those of
`keyword table 21 and selects URLs (from URL table 22) having the
`extracted keywords listed therein. Id. ¶¶ 26–28. URL listing order
`arranging unit 15 determines a listing order for the selected URLs based on
`priority conditions for efficient searching. Id. ¶ 29. Thereafter, URL listing
`unit 16 sequentially lists the addresses of the respective URLs in the
`
`determined order, and accesses the respective webpages of the URLs.
`Id. ¶ 30. URL information gathering unit 17 sequentially accumulates
`information from the URL pages for presentation to the client. Id. ¶¶ 30–31.
`
`
`c) Overview of Goedken
`Goedken describes a method and apparatus for facilitating information
`exchange, via a network, between an information requestor/searcher and one
`or more information custodians, who are persons that “know[] where to
`locate and/or ha[ve] custody of the information that interests the searcher.”
Ex. 1006, Abst., 1:42–44. The searcher creates an information request
`message and sends it to the apparatus, and the apparatus determines an
`appropriate custodian and sends a request message to that custodian. Id.
`The identified custodian replies to the request message with an intermediate
`answer message or with a reroute message. Id. Based on the messages, the
`apparatus provides a final answer message to the searcher, and may also
`record the answer message for subsequent retrieval. Id. For example, the
`apparatus may record portions of final answer messages developed by
`information custodians and store those records in a knowledge database.
`Id. at 19:43–48. “Preferably, the knowledge database 136 is populated by
`earlier questions and answers routed through the apparatus 10, as well as any
`number of preprogrammed questions and answers (e.g., an existing help line
`database).” Id. at 25:15–19.
`Petitioner relies on the embodiment of Goedken relating to searching
`the knowledge database for previously stored answers. Pet. 41–44 (citing
`Ex. 1006, 25:9–26:23, Fig. 18). Figure 18 of Goedken, reproduced below,
`illustrates this embodiment:
`
`
`Figure 18 is a flowchart of a program implementing the apparatus of
`Goedken’s Figure 1 (an apparatus for facilitating information exchange
`
`
`
`between an information requester and an information custodian via a
`network). Ex. 1006, 7:39–42, 8:13–15.
`“Once the category of a received information request message 18 has
`been determined, the database manager 140 is activated to search the
`knowledge database 136 for a responsive answer.” Id. at 25:19–22. The
`database manager retrieves the category associated with a first file from the
knowledge database (block 332). Id. at 25:24–26. The database manager
`compares the retrieved category to the requested category (block 334) and, if
`there is no match, the database manager determines whether there are more
`files to consider (block 336). Id. at 25:26–32. If there are more files to
`consider (block 338), the category of the next file is retrieved and compared
`to the category of the file (blocks 334, 336). Id. at 25:32–35. “The database
`manager 140 continues to loop through blocks 332–338 until all of the
`categories of all of the files in the knowledge database 136 are compared to
`the category associated with the information request message 18 or until a
`match is found at block 334.” Id. at 25:35–40.
`After a file corresponding to the category has been found, Goedken’s
`algorithm similarly loops through a set of “synonyms” for the user’s
`question to identify whether there is a match for those synonyms in the
identified file (blocks 340, 342, 344, 346, 348). Id. at 25:41–26:7. “If a
`question synonym is found at block 344, the database manager 140 passes
`the answer associated with that file to the message composer 122, and the
`message composer 122 preferably attaches the ‘canned’ answer from the
`knowledge database 136 to the information request message 18 (block
`350).” Id. at 26:8–13.
`
`
3. Claims 1–6, 9, 10, 13, 14, 18, 20, 21, and 25, Differences Between
the Claimed Subject Matter and Ladd, Kurosawa, and Goedken
a) The Parties’ Contentions for Claim 1
`Petitioner contends that Ladd teaches claim limitations 1[a]–1[d],
`1[f]–1[j], and 1[l]; that Kurosawa teaches limitation 1[e] and aspects of
`limitation 1[i]; and that Goedken teaches limitation 1[k].
`Regarding the preamble of claim 1, Petitioner contends that Ladd
`describes a system for retrieving information by uttering speech commands
`into a voice-enabled device and for providing retrieved information in audio
`form to users via the voice-enabled device. Pet. 17–18 (citing Ex. 1004,
`1:22–25, 2:19–64, 3:8–23, 3:40–53, 3:58–4:3, 4:62–64, 5:30–36, 9:1–10,
`9:19–21, 11:50–63, Figs. 1, 3). As to claim elements 1[a] and 1[b],
`Petitioner maps Ladd’s communication node 212 to the “computer
`operatively connected to the internet” and Ladd’s communication devices
`201, 202, 203, 204 to “a voice enabled device operatively connected to said
`computer.” Id. at 18–22 (citing Ex. 1004, 1:48–54, 1:61–64, 2:59–64, 4:62–
`5:11, 5:12–39, 6:50–55, 7:7–17, 7:24–32, 7:52–56, 10:34–36).
`Petitioner contends that Ladd’s ASR 254 within VRU server 234 is
`“at least one speaker-independent speech recognition device,” as recited in
`claim element 1[c]. Id. at 22–23 (citing Ex. 1004, 6:65–7:7, 7:28–33, 8:19–
`28, 8:55–67, 9:1–3, 9:28–44). As to claim element 1[d], Petitioner contends
`that Ladd’s TTS unit 252 within VRU server 234 is “at least one speech
`synthesis device.” Id. at 23–25 (citing Ex. 1004, 3:40–57, 4:51–5:20, 5:24–
`29, 5:34–35, 7:28–33, 8:55–56, 9:1–23).
`Regarding claim element 1[f], Petitioner, inter alia, points to Ladd’s
`description of VRU server 234 “process[ing] the speech communications
`
`and compar[ing] the speech communications against a vocabulary or
`grammar.” Id. at 32–34 (quoting Ex. 1004, 8:55–61; citing id. at 4:36–49,
`9:28–44, 10:12–17, 14:13–28, 19:12–36). As to claim element 1[g],
`Petitioner argues that Ladd describes a user speaking a request to access
`information such as news, weather, and traffic. Id. at 34–35 (citing
`Ex. 1004, 2:48–58, 4:62–5:11, 7:49–56, 10:58–66). As to claim element
`1[h], Petitioner argues that Ladd describes VRU server 234 (including ASR
`unit 254) receiving speech commands from a communication device and
`determining whether a word or speech pattern matches any of the grammars
`or vocabulary stored in database server unit 244. Id. at 35–36 (citing
Ex. 1004, 4:62–5:35, 6:65–7:7, 7:27–32, 8:3–28, 8:55–58, 9:1–3, 9:28–39).
As to claim element 1[l], Petitioner argues that the text that TTS unit 252
converts to speech can be information retrieved from web sites. Id. at 47–48
`(citing Ex. 1004, 4:51–5:36, 6:13–25, 7:27–33, 8:55–56, 9:1–26).
`As to claim element 1[e], Petitioner contends that Kurosawa teaches
`this limitation. Pet. 25–32. In particular, Petitioner contends that URL table
`22, shown in Kurosawa’s Figure 6 (reproduced above), illustrates a plurality
`of web site addresses, each matching keywords in a user’s search condition
`and identifying a web site containing information to be retrieved related to
`the keywords. Id. at 25–29 (citing Ex. 1005 ¶¶ 9, 11, 12, 21, 24, 27, 28, 37).
`Petitioner contends that the URLs are “pre-selected” because they are
`known, cross-referenced to keywords, and stored in URL database 11 before
`the search. Id. at 28 (citing Ex. 1005 ¶¶ 20–21). Petitioner contends that the
`“instruction set” is Kurosawa’s plurality of URLs picked out based on
`keyword matching, and argues that this instruction set is associated with
`search server 10 shown in Kurosawa’s Figure 1. Id. at 29 (citing Ex. 1005
`
`¶ 21). Petitioner contends that Ladd’s system would have been modified to
`include the plurality of URLs in Kurosawa’s database. Id. at 29–30.
`Regarding claim element 1[i], Petitioner contends that Ladd describes
`communication node 212 (including ASR unit 254) as monitoring speech
`commands to detect keywords (such as “weather”) corresponding to
`information the user desires. Pet. 36–37 (citing Ex. 1004, 4:36–49, 5:37–39,
`6:14–29, 7:52–56, 8:55–67, 9:35–39, 11:50–63). Petitioner pairs this
`teaching with Kurosawa, which Petitioner contends teaches accessing a
`plurality of pre-selected URLs from a database table to sequentially access
`websites to retrieve information desired by users. Id. (citing Ex. 1005 ¶¶ 9,
`10, 15, 20, 21). As to claim element 1[j], Petitioner argues that Ladd teaches
`accessing websites based on speech commands and that Kurosawa teaches
`sequentially accessing URLs to gather information. Id. at 37–39 (citing
`Ex. 1004, 3:7–39, 4:37–49, 6:18–25, 6:65–7:7, 7:44–56, 11:31–36, 14:1–9;
`Ex. 1005 ¶¶ 9, 15, 40).
`As to claim element 1[k], Petitioner contends that Kurosawa teaches
`sequentially accessing the URL addresses listed in URL table 22 in a priority
`order determined by URL listing order arranging unit 15. Pet. 39–40 (citing
`Ex. 1005 ¶¶ 15, 29, 35, 40). According to Petitioner,
`Goedken discloses a procedure that accesses a first file of a
`plurality of files for an answer to a question. If the information
`to be retrieved is not found at the first file (it fails to match the
`category or synonym), the procedure sequentially accesses the
`next file of the plurality of files until the information to be
`retrieved is found (matching both the category and synonym) or
`until all files have been accessed via repeated application.
`Id. at 43–44 (citing Ex. 1006, 25:59–26:7; Ex. 1003 ¶¶ 119–120). Petitioner
`contends that, in combination, “[t]he Ladd system as further modified by
`Goedken sequentially accesses the plurality of preselected websites
`
`efficiently and quickly via the Goedken procedure, which returns an answer
`once found or continues accessing information sources (websites, when
`applied to the system of Ladd modified by Kurosawa), until all websites are
`accessed.” Id. at 46–47.
`
`Patent Owner argues that Goedken does not teach sequentially
`accessing pre-selected web sites (PO Resp. 38–39); that Kurosawa does not
`teach sequentially accessing pre-selected web sites until requested
`information is found or all pre-selected web sites have been accessed (id. at
`40–41); that Petitioner’s obviousness combinations are based on
`impermissible hindsight (id. at 41–43); that Petitioner has not shown a
`motivation to combine Ladd and Kurosawa (id. at 43–45); that Petitioner has
`not shown a motivation to combine Goedken with Ladd and Kurosawa
`(id. at 46–48); and that the prior art teaches away from the proposed
`combination (id. at 48–56). Patent Owner contends that Ladd does not teach
`the “speaker-independent speech recognition device” of claim limitation
`1[c]. Id. at 34–38; Sur-reply 2–18. For the reasons given below, Petitioner
`has not made the requisite showing as to claim limitation 1[c]; thus, it is
`unnecessary to resolve the remaining disputes raised by Patent Owner.
`
`
b) Petitioner has not shown that Ladd teaches the
“speaker-independent speech recognition device” of
claim limitation 1[c]
`Claim limitation 1[c] recites “at least one speaker-independent speech
`recognition device, said speaker-independent speech recognition device
`operatively connected to said computer and to said voice enabled device.”
`(emphasis added). The parties dispute whether Ladd teaches this limitation.
`
`In the Petition, Petitioner contended that Ladd’s ASR 254 is a
`“speaker-independent speech recognition device.” Pet. 22. Petitioner (id. at
`22–23) referred to Ladd’s statement that “[t]he ASR unit 254 of the VRU
`server 234 provides speaker independent automatic speech recognition of
`speech inputs or communications from the user.” Ex. 1004, 9:27–29.
`Petitioner also expressly quoted Ladd’s