IN THE UNITED STATES DISTRICT COURT
FOR THE DISTRICT OF DELAWARE

IPA TECHNOLOGIES INC.,

Plaintiff,

v.

MICROSOFT CORPORATION,

Defendant.

C.A. No. 18-01 (RGA)

JURY TRIAL DEMANDED

SECOND AMENDED COMPLAINT FOR PATENT INFRINGEMENT

Pursuant to Fed. R. Civ. P. 15(a)(2), Plaintiff IPA Technologies Inc. ("IPA"), for its Second Amended Complaint against Defendant Microsoft Corporation ("Microsoft" or "Defendant"), alleges as follows:
`
PARTIES

1. IPA is a Delaware corporation with a principal place of business at 600 Anton Blvd., Suite 1350, Costa Mesa, California 92626.

2. On information and belief, Defendant Microsoft is a Delaware corporation with a principal place of business at One Microsoft Way, Redmond, Washington 98052. Microsoft can be served with process via its registered agent, The Corporation Trust Company, Corporation Trust Center, 1209 Orange Street, Wilmington, Delaware 19808.
`
JURISDICTION AND VENUE

3. This action arises under the patent laws of the United States, Title 35 of the United States Code. Accordingly, this Court has subject matter jurisdiction under 28 U.S.C. §§ 1331 and 1338(a).

Case 1:18-cv-00001-RGA-SRF Document 16 Filed 03/20/18 Page 2 of 112 PageID #: 315

4. This Court has specific and general personal jurisdiction over Defendant pursuant to due process and/or the Delaware Long Arm Statute, because Defendant has availed itself of the rights and benefits of Delaware by incorporating under Delaware law and due to its substantial business in this forum, including: (i) at least a portion of the infringement alleged herein; and (ii) regularly doing or soliciting business, engaging in other persistent courses of conduct, and/or deriving substantial revenue from goods and services provided to individuals in Delaware and in this Judicial District.

5. Venue is proper in this District under 28 U.S.C. §§ 1391(b)-(c) and 1400(b) because Defendant resides in this District, as it is incorporated in Delaware.
`
BACKGROUND

6. SRI International, Inc. ("SRI"), the original owner of the patents-in-suit, is an independent, not-for-profit research institute that conducts client-supported research and development for government agencies, commercial businesses, foundations, and other organizations.

7. SRI employs about 2,100 people worldwide, including scientists, engineers, technologists, policy researchers, and corporate and support staff. SRI works with clients to take the most advanced R&D from the laboratory to the marketplace. SRI collaborates across technical and scientific disciplines to generate real innovation and create value by inventing solutions that solve challenging problems, and it looks ahead to the needs of the future. For more than 70 years, SRI has led the discovery and design of ground-breaking products, technologies, and industries—from the computer mouse and intelligent personal assistants to robotic surgery, medical ultrasound, cancer treatments, and more. The revenue generated by SRI's R&D projects, commercialization activities, and marketplace solutions is reinvested in SRI capabilities, facilities, and staff to advance its mission.
`
`
8. Among its many areas of research, SRI has engaged in fundamental research and development related to personal digital assistants and speech-based navigation of electronic data sources.

9. SRI's innovative work on personal digital assistants was a key area of development in one of the world's largest artificial intelligence projects, the Cognitive Assistant that Learns and Organizes ("CALO"). The vision for the SRI-led CALO project, which was funded by the U.S. Defense Advanced Research Projects Agency ("DARPA"), was to create groundbreaking software that could revolutionize how computers support decision-makers.

10. SRI's work on personal digital assistants and speech-based navigation of electronic data sources, which started before the launch of the CALO project, developed further as part of the project. SRI's engineers were awarded numerous patents on their groundbreaking personal digital assistant and speech-based navigation inventions.

11. To bring the personal digital assistant and speech-based navigation technology to the marketplace, SRI formed the spin-off company Siri, Inc. in 2007 and granted it a non-exclusive license to the patent portfolio. The technology was demonstrated as an iPhone app at technology conferences and later released as an iPhone 3GS app in February 2010. In April 2010, Apple Inc. acquired Siri, Inc. In 2011, the Siri personal digital assistant was released as an integrated feature of the iPhone 4S.

12. Speech-based navigation of electronic data sources has continued to be implemented as an effective and user-friendly solution for interacting with electronic devices.
`
`
13. On May 6, 2016, IPA acquired the SRI speech-based navigation patent portfolio. IPA is a wholly-owned subsidiary of WiLAN, a leading technology innovation and licensing business actively engaged in research, development, and licensing of new technologies.

INVENTOR BACKGROUNDS

14. Co-inventor Adam Cheyer is a recognized thought leader in the field of artificial intelligence. After obtaining his computer science degree from Brandeis University and his MS in Computer Science and Artificial Intelligence ("AI"), Mr. Cheyer served as a researcher in Artificial Intelligence at SRI International. He has authored more than 60 publications and 26 issued patents. He was Chief Architect of CALO, the largest AI project in US history. Previously, Mr. Cheyer was co-founder and VP Engineering of Siri, a mobile phone virtual personal assistant. As a startup, Siri won the Innovative Web Technologies award at SXSW and was chosen as a Top Ten Emerging Technology by MIT's Technology Review before Apple purchased Siri in 2010. He is currently co-founder and VP Engineering of Viv Labs, whose goal is to simplify the world by providing an intelligent interface to everything. Viv Labs is now a wholly-owned subsidiary of Samsung.

15. Co-inventor Dr. Luc Julia has been named one of the top 100 most influential French developers in the digital world. After receiving his Ph.D. in Multimodal Human-Computer Interfaces from the Ecole Nationale Superieure de Telecommunications in Paris, France, Dr. Julia worked at SRI, where he studied agent architectures, co-founded Nuance Communications (a world leader in speech recognition), and served as co-founder and director of the Computer Human Interactive Center (CHIC!). He was also
`
`
Chief Technologist at Hewlett-Packard Company and Director of Siri at Apple, Inc. He now serves as VP of Innovation Strategy and Innovation Center at Samsung Electronics.

16. Co-inventor Christine Halverson obtained her MS and Ph.D. in Cognitive Science while working at NASA's Ames Research Center building next-generation air traffic control software. She worked for SRI as an Interim Program Director of SRI's CHIC!. Most recently, she served at IBM for a total of 16 years as a researcher at the Thomas J. Watson Research Center, working in the areas of human-computer interaction and on the PERCS (Productive Easy-to-Use Reliable Computing System) program, which was part of a DARPA challenge under the High Performance Computing System (HPCS) mandate to develop a peta-scale computer.

17. Co-inventor Dimitris Voutsas has a Masters in Computer Science and worked as a Research & Development Engineer at SRI's CHIC!. For the last twelve years, he has served at Microsoft as a Project Manager for Windows and Windows Phone, and he currently serves as Senior Program Manager for Microsoft's Bing.

18. Co-inventor David L. Martin worked as a Senior Computer Scientist at the Artificial Intelligence Center of SRI International for over 16 years, and he worked as the Senior Manager for Applications Engineering at Siri Inc. and later as an Engineering Manager at Apple Inc. upon Apple's acquisition of Siri. Since August 2013, he has served as a Senior Research Scientist at Nuance Communications, focusing on artificial intelligence research.
`
`
ASSERTED PATENTS

U.S. Patent No. 6,742,021

19. IPA is the owner by assignment of U.S. Patent No. 6,742,021 (the "'021 Patent"). The '021 Patent is entitled "Navigating Network-Based Electronic Information Using Spoken Input With Multimodal Error Feedback." The '021 Patent issued on May 25, 2004 from U.S. Patent Application No. 09/524,095, filed March 13, 2000. A true and correct copy of the '021 Patent was previously attached to IPA's First Amended Complaint at Exhibit A. (See Dkt. 12-2.)

20. The '021 Patent is a continuation-in-part of U.S. Patent Application No. 09/225,198, and at Col. 1, lines 6-13, the '021 Patent claims priority to and incorporates by reference the 09/225,198 application, as well as provisional application nos. 60/124,718; 60/124,719; and 60/124,720.

21. The '021 Patent "relates generally to the navigation of electronic data by means of spoken natural language requests, and to feedback mechanisms and methods for resolving the errors and ambiguities that may be associated with such request." '021 Patent at Col. 1, lines 22-26 (hereinafter, 1:22-26).
`
22. The '021 Patent claims priority to January 5, 1999. The technology disclosed and claimed in the '021 Patent was not well-understood, routine, or conventional. To the contrary, the technology claimed in the '021 Patent was well ahead of the state of the art at the time of the invention. For example, Google was not incorporated until September 1998 and had 39 employees in 1999; at the time of the invention, the primary means for public access to the internet were 56K dial-up modems; the iPhone was not launched until 8-9 years later (2007); the first device widely
`
`
acknowledged to be marketed as a "smartphone" was not announced until 2000 (the Ericsson R380), a device with a black-and-white display partially covered by a flip, on which users could not install their own software, whose architecture did not envisage users downloading their own applications at that time, and which did not have WLAN, Bluetooth, or GPS; the first mobile camera phone did not come to the USA until 2002, when Sprint offered the Sanyo SCP-5300; the first mobile device to offer email, texting, and a web browser was not released until 2003 (the BlackBerry 6210); the first mobile phone with any text-to-speech capability was not released until late 2004 (the Samsung MM-A700), and it lacked speaker-independent voice recognition because it did not use a network—a capability not introduced on mobile phones until November 2008, with Google's voice recognition app for the iPhone; the first mobile phone marketed as a phone to watch TV video was not announced until November 2005 (the Nokia N92); the first YouTube video was not uploaded until April 2005; and the first mobile phone with a capacitive touch screen was not announced until January 2007 (the LG Prada).
`
23. The "Background of Invention" section of the '021 Patent states that, as "the universe of electronic data potentially available to users continues to expand," there is "a growing need for information navigation technology that allows relatively naïve users to navigate and access desired data by means of natural language input." '021 Patent at 1:20-26.
`
24. For example, with the explosion of electronic content in important markets like home entertainment and mobile computing, the proliferation of high-bandwidth communications infrastructure enables delivery of movies and other interactive multimedia content. However, "for users to take full advantage of this content stream
`
`
ultimately requires interactive navigation of content databases in a manner that is too complex for user-friendly selection by means of a traditional remote-control clicker." '021 Patent at 1:28-36.

25. Allowing users to utilize spoken natural language requests to access electronic data provides the benefit of "rapidly searching and accessing desired content" and "is an important objective" both for "successful consumer entertainment products" that offer "a dizzying range of database content choices," and for "navigation of (and transaction with) relatively complex data warehouses," when using "the Internet/Web or other networks for general information, multimedia content, or e-commerce transactions." '021 Patent at 1:37-46.
`
26. Then-existing prior art "navigational systems for browsing electronic databases and data warehouses (search engines, menus, etc.) have been designed without navigation via spoken natural language as a specific goal," and as a result the world was full of electronic data navigation systems that were not designed to be navigated with natural spoken commands but instead assumed navigation with "text and mouse-click inputs (or in the case of TV remote controls, even less)." '021 Patent at 1:47-54.
`
27. Prior art systems that simply recognized voice commands using an extremely limited vocabulary and grammar were insufficient, in part because such systems did not accept spoken inputs in a user-intuitive manner and required users to learn highly specialized command languages or formats. '021 Patent at 1:54-64.

28. For example, prior art systems tended to require users to speak "in terms of arbitrary navigation structures (e.g., hierarchical layers of menus, commands, etc.) that
`
`
are essentially artifacts reflecting constraints of the pre-existing text/click navigation system." '021 Patent at 1:59-2:3.
`
29. Moreover, the use of spoken natural language inputs for navigation of electronic data resources typically presented a variety of errors and ambiguities, such as garbled and unrecognized words and under-constrained requests, that could not be resolved in a rapid, user-friendly, non-frustrating manner.

30. In addition, solutions to the prior art's limitations faced the problem that they needed to be compatible with the constraints imposed by multi-user, distributed environments such as the Internet and high-bandwidth content delivery networks, because a solution contemplating one-at-a-time user interaction at a single location would be insufficient.
`
31. The disclosed inventions, on the other hand, achieve a fundamental technological advance in the state of the art of navigating network-based electronic information because they enable "users to speak directly in terms of what the user wants—e.g., 'I'd like to see a Western film directed by Clint Eastwood'[.]" '021 Patent at 1:64-67.
`
32. A further disclosed benefit of the inventions that improves the functioning of computer technology is that they can function as a voice interface on top of (or on the front end of) a pre-existing non-voice navigational system, i.e., "a voice-driven front-end atop an existing, non-voice data navigation system, whereby users can interact by means of intuitive natural language input not strictly conforming to the step-by-step browsing architecture of the existing navigation system, and wherein any errors or ambiguities in user input are rapidly and conveniently resolved." '021 Patent at 2:13-19; 10:10-38.
`
`
33. One aspect of the inventions disclosed and claimed in the '021 Patent relates to formulating a navigation query after the system has interpreted the spoken request. For example, if responding to a user's interpreted request requires searching a structured relational database, an embodiment of the invention could construct an appropriate Structured Query Language (SQL) query to search the relevant database. See, e.g., '021 Patent at 8:55-9:25.
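By way of an illustrative sketch only, query construction of this kind might look like the following. This code does not appear in the '021 Patent; the movies table, the slot names, and the build_sql_query function are hypothetical assumptions used purely to make the described step concrete.

```python
# Hypothetical sketch of constructing an SQL query from an interpreted spoken
# request, as described above. The "movies" schema and all names here are
# illustrative assumptions, not material drawn from the '021 Patent.

def build_sql_query(interpretation):
    """Turn interpreted request slots into a parameterized SQL query."""
    clauses, params = [], []
    for field in ("genre", "title", "actor", "director"):
        value = interpretation.get(field)
        if value is not None:
            clauses.append(f"{field} = ?")   # parameterized, not inlined
            params.append(value)
    where = " AND ".join(clauses) if clauses else "1=1"
    return f"SELECT title FROM movies WHERE {where}", params

# "I'd like to see a Western film directed by Clint Eastwood"
sql, params = build_sql_query({"genre": "Western", "director": "Clint Eastwood"})
print(sql)     # SELECT title FROM movies WHERE genre = ? AND director = ?
print(params)  # ['Western', 'Clint Eastwood']
```

Only the slots the interpreter actually filled contribute WHERE clauses, so an under-constrained request simply yields a broader query.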
`
34. The benefits of the inventions include not only increased convenience, but also increased efficiency and speed—and they achieve these benefits by fundamentally changing the manner in which a user interfaces and interacts with computer technology itself, as described in the following two examples:

It will be apparent, in light of the above teachings, that preferred embodiments of the present invention can provide a spoken natural language interface atop an existing, non-voice data navigation system, whereby users can interact by means of intuitive natural language input not strictly conforming to the linear browsing architecture or other artifacts of an existing menu/text/click navigation system. For example, users of an appropriate embodiment of the present invention for a video-on-demand application can directly speak the natural request: "Show me the movie 'Unforgiven'"—instead of walking step-by-step through a typically linear sequence of genre/title/actor/director menus, scrolling and selecting from potentially long lists on each menu, or instead of being forced to use an alphanumeric keyboard that cannot be as comfortable to hold or use as a lightweight remote control. Similarly, users of an appropriate embodiment of the present invention for a web-surfing application in accordance with the process shown in FIG. 5 can directly speak the natural request: "Show me a one-month price chart for Microsoft stock"—instead of potentially having to navigate to an appropriate web site, search for the right ticker symbol, enter/select the symbol, and specify display of the desired one-month price chart, each of those steps potentially involving manual navigation and data entry to one or more different interaction screens.

'021 Patent at 10:10-38.
`
`
35. As the title of the '021 Patent suggests, an important aspect of the inventions that improves computer technology itself is multi-modal error correction and clarification of the user's spoken request when errors and ambiguities arise:

Instead of simply rejecting such input and defaulting to traditional input modes or simply asking the user to try again, a preferred embodiment of the present invention seeks to converge rapidly toward instantiation of a valid navigational template by soliciting additional clarification from the user as necessary,…via multimodal input, i.e., by means of menu selection or other input modalities…in addition to spoken input.

'021 Patent at 2:49-58.
`
36. The benefits of this multi-modal error correction/clarification include, as stated above, accelerated instantiation of a valid navigational template, at least in part because the system attempts new methods or means to obtain additional clarifying or necessary information that was not provided by a prior spoken request, and therefore avoids simply repeating a prior inquiry that was incomplete or otherwise erroneous. A further specified benefit is that "this clarifying, multi-modal dialogue takes advantage of whatever partial navigation information has been gleaned from the initial interpretation of the user's spoken request." '021 Patent at 2:49-58.
`
37. The increased convenience, efficiency, accuracy, and speed improve the capacity of the navigation system as a whole. The improvements that the inventive spoken/natural language database query with multi-modal clarification makes to the underlying computer technology, as compared to prior art navigation systems, are illustrated by the following example from the '021 Patent:

Consider again the example in which the user of a video-on-demand application wishes to see "Unforgiven" but can only recall that it was directed by and starred Clint Eastwood. First, it bears noting that using a prior art navigational interface, such as a conventional menu interface, will
`
`
likely be relatively tedious in this case. The user can proceed through a sequence of menus, such as Genre (select "western"), Title (skip), Actor ("Clint Eastwood"), and Director ("Clint Eastwood"). In each case—especially for the last two items—the user would typically scroll and select from fairly long lists in order to enter his or her desired name, or perhaps use a relatively couch-unfriendly keypad to manually type the actor's name twice.

Using a preferred embodiment of the present invention, the user instead speaks aloud, holding remote control microphone 102, "I want to see that movie starring and directed by Clint Eastwood. Can't remember the title." At step 402 the voice data is received. At step 404 the voice data is interpreted. At step 405 an appropriate online data source is selected (or perhaps the system is directly connected to a proprietary video-on-demand provider). At step 406 a query is automatically constructed by the query construction logic 330 specifying "Clint Eastwood" in both the actor and director fields. Step 407 detects no obvious problems, and so the query is electronically submitted and the data source is navigated at step 408, yielding a list of several records satisfying the query (e.g., "Unforgiven", "True Crime", "Absolute Power", etc.). Step 409 detects that additional user input is needed to further refine the query in order to select a particular film for viewing.

At that point, in step 412 query refinement logic 340 might preferably generate a display for client display device 112 showing the (relatively short) list of film titles that satisfy the user's stated constraints. The user can then preferably use a relatively convenient input modality, such as buttons on the remote control, to select the desired title from the menu. In a further preferred embodiment, the first title on the list is highlighted by default, so that the user can simply press an "OK" button to choose that selection.

'021 Patent at 11:24-62.
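The control flow walked through in the quoted passage (steps 402-412) can be sketched as the following toy example. The CATALOG data, the navigate function, and the choose callback are all hypothetical stand-ins; nothing here is taken from the patent's actual implementation.

```python
# Toy sketch of the quoted video-on-demand walkthrough (steps 402-412):
# query on "Clint Eastwood" as both actor and director, then fall back to a
# convenient non-spoken modality (a menu selection) to refine the result.
# All data and names here are hypothetical illustrations.

CATALOG = [
    {"title": "Unforgiven", "actor": "Clint Eastwood", "director": "Clint Eastwood"},
    {"title": "True Crime", "actor": "Clint Eastwood", "director": "Clint Eastwood"},
    {"title": "Absolute Power", "actor": "Clint Eastwood", "director": "Clint Eastwood"},
    {"title": "The Fugitive", "actor": "Harrison Ford", "director": "Andrew Davis"},
]

def navigate(query, choose):
    # Step 408: navigate the data source with the constructed query.
    matches = [r["title"] for r in CATALOG
               if all(r.get(k) == v for k, v in query.items())]
    if len(matches) == 1:
        return matches[0]
    # Steps 409/412: more input is needed; display a short menu and let the
    # user pick via a non-spoken modality (e.g., remote-control buttons).
    return choose(matches)

# The first title is highlighted by default; the user simply presses "OK".
title = navigate({"actor": "Clint Eastwood", "director": "Clint Eastwood"},
                 choose=lambda menu: menu[0])
print(title)  # Unforgiven
```

Note the design point the passage emphasizes: the menu step reuses the partial information already gleaned from speech rather than restarting the dialogue.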
`
38. The '021 Patent contains 8 independent claims, and a total of 132 claims, covering various methods, systems, and computer programs. Independent claim 1 is a method claim:

1. A method for speech-based navigation of an electronic data source, the electronic data source being located at one or more network servers located remotely from a user, comprising the steps of:

(a) receiving a spoken request for desired information from the user;
`
`
(b) rendering an interpretation of the spoken request;

(c) constructing at least part of a navigation query based upon the interpretation;

(d) soliciting additional input from the user, including user interaction in a non-spoken modality different than the original request without requiring the user to request said non-spoken modality;

(e) refining the navigation query, based upon the additional input;

(f) using the refined navigation query to select a portion of the electronic data source; and

(g) transmitting the selected portion of the electronic data source from the network server to a client device of the user.
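The sequence of steps (a)-(g) recited above can be sketched as a pipeline. Every function, data record, and slot name below is a hypothetical stub used only to illustrate the order of the claimed steps; it is not the patent's implementation, and the claim itself is agnostic to how each step is carried out.

```python
# Hypothetical end-to-end sketch of claim 1's steps (a)-(g). Each helper is a
# toy stand-in for one claimed step; none of this code comes from the patent.

def interpret(spoken):                        # (b) render an interpretation
    # Toy interpreter: pull a director name out of "... directed by X".
    return {"director": spoken.split("directed by ")[-1]}

def construct_query(interpretation):          # (c) construct a partial query
    return dict(interpretation)

def refine(query, nonspoken_input):           # (e) refine with the extra input
    return {**query, **nonspoken_input}

def select_portion(source, query):            # (f) select the matching portion
    return [r for r in source if all(r.get(k) == v for k, v in query.items())]

SOURCE = [
    {"title": "Unforgiven", "director": "Clint Eastwood", "genre": "Western"},
    {"title": "Million Dollar Baby", "director": "Clint Eastwood", "genre": "Drama"},
]

# (a) a spoken request arrives; (d) a menu tap supplies the genre in a
# non-spoken modality; (g) the selected portion would then be transmitted
# from the server to the user's client device.
query = construct_query(interpret("a film directed by Clint Eastwood"))
query = refine(query, {"genre": "Western"})
print([r["title"] for r in select_portion(SOURCE, query)])  # ['Unforgiven']
```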
`
39. The above-claimed speech-based navigation method relies on receiving a spoken request; performs multiple steps to interpret, construct, and refine a query of an electronic data source; and utilizes multi-modal functionality to obtain and use additional non-spoken input from a user, without requiring the user to request said non-spoken modality, in order to transmit a portion of the electronic data source to a device of the user. Such claimed and disclosed navigation methods provide significant benefits and improvements in capacity and underlying computer functionality over prior art navigation methods—namely, increased speed, convenience, and efficiency in creating a proper query to search an electronic data source and in providing information requested by a user, as well as a greater degree of freedom for users, because the navigation system can accept and process an expanded set of intuitive inputs (e.g., natural language) rather than being limited solely to specialized command languages or formats that may require training or specialized knowledge to use effectively.
`
`
40. The above-disclosed and claimed speech-based navigation method additionally constitutes an unconventional technical solution (for example, multi-modal feedback to solicit additional user input and to refine and use an electronic data source query, without requiring a user to request a non-spoken modality) to a technical problem faced by electronic data source navigation methods: interpreting spoken requests and constructing, querying, and refining the resulting navigation queries.
`
Patent Prosecution and Examination Generally

41. Examiners at the United States Patent and Trademark Office ("USPTO") review patent applications to determine whether a claimed invention should be granted a patent. In general, the most important task of a patent examiner is to review the technical information disclosed in a patent application and to compare it to the state of the art. This involves reading and understanding a patent application, and then searching the prior art to determine what technological contribution the application teaches the public. A patent is a reward for informing the public about specific technical details of a new invention. The work of a patent examiner includes searching prior patents, scientific literature databases, and other resources for prior art. Then, an examiner reviews the claims of the patent application substantively to determine whether each complies with the legal requirements for granting of a patent. A claimed invention must meet patentability requirements including statutory subject matter, novelty, inventive step or non-obviousness, industrial application (or utility), and sufficiency of disclosure, and examiners must apply federal laws (Title 35 of the United States Code), rules, judicial precedents, and guidance from agency administrators.
`
42. To have signatory authority (either partial or full), Examiners must pass a test equivalent to the Patent Bar. All examiners must have a college degree in engineering
`
`
or science. Examiners are assigned to "Art Units," typically groups of 8-15 Examiners in the same area of technology. Thus, by way of required background and work experience, Examiners have special knowledge and skill concerning the technologies they examine in their particular Art Units.
`
43. The basic steps of the examination consist of:

• reviewing patent applications to determine if they comply with basic format, rules, and legal requirements;

• determining the scope of the invention claimed by the inventor;

• searching for relevant technologies to compare similar prior inventions with the invention claimed in the patent application; and

• communicating findings as to the patentability of an applicant's invention via a written action to inventors/patent practitioners.
`
44. Communication of findings as to patentability is done by way of one or more Office Actions in which the Examiner accepts or rejects proposed claims filed by the applicant(s) and provides reasons for rejections. The applicant(s) are then permitted to file a Response to Office Action, in which claims may be amended to address issues raised by the Examiner, or the applicant states reasons why the Examiner's findings are incorrect. If an applicant disagrees with a Final Rejection by an Examiner, the applicant may file an appeal with the Patent Trial and Appeal Board ("PTAB"). If, after this process, the USPTO determines that the application meets all requirements, a patent is duly allowed, and after an issue fee is paid, the patent is issued.

45. A patent duly allowed and issued by the USPTO is presumptively valid and becomes the property of the inventor(s) or assignee(s).
`
`
46. A "Continuation Application" is one where, typically after allowance but in any event prior to issuance, the inventor applies for a second, related patent. A Continuation employs substantially the same invention disclosure as the previous, allowed application, but seeks new or different claims.
`
Prosecution and Examination of the '021 Patent

47. The examination of the '021 Patent required more than four years, from the date of the filing of the patent application on March 13, 2000, through the issue date of May 25, 2004.

48. Four Patent Examiners were involved in examining the application that matured into the '021 Patent, namely, Assistant Examiner Firmin Backer and Supervisory Examiners Ayaz Sheikh, David Wiley, and James Trammel.
`
49. Although the results of the various patent examiner searches are not summarized, the prosecution history of the '021 Patent indicates that Assistant Examiner Backer conducted prior art and other searches on several USPTO databases, including the Web-based Examiner Search Tool ("WEST") patent examiner system, at least on April 6, 2001; November 21, 2001; April 28, 2002; November 20, 2002; and November 21, 2002.
`
50. Between the prior art references located and cited by the Patent Examiners and the references submitted by the applicants and considered by the Patent Examiners during the prosecution of the '021 Patent, at least 25 patent references and 20 non-patent references were formally considered by the Patent Examiners, as indicated on the front two pages of the issued '021 Patent.
`
51. On information and belief, it is the practice of the USPTO not to cite excessive cumulative art; in other words, in this instance, the art cited by the Patent
`
`
Examiners is representative of considerable other art located by the USPTO and not cited. Further on information and belief, it is the practice of the USPTO to discuss in its Office Actions those references of which the Patent Examiners are aware that most closely resemble the claimed inventions.
`
52. During prosecution of the application that matured into the '021 Patent, the U.S. Patent Office issued a Notice of Allowability on December 16, 2002, for claims 56-187 (i.e., issued claims 1-132), and in the "Examiner's Reasons for Allowance," stated, inter alia:

Applicants teach an inventive concept for navigating network-based electronic data sources in response to spoken natural language input request.

Notice of Allowability at 2. (Emphasis added.)
`
53. In order for the claims of the '021 Patent to have issued, they needed to be patentably distinct from the at least 45 references formally identified and/or discussed during prosecution. That is, each of the claims as a whole (e.g., methods, systems, and programs for speech-based electronic data source navigation that involve receiving a spoken request, interpreting that request, constructing and refining a query of an electronic data source based on the interpretation, and utilizing various aspects of multi-modal functionality to obtain and use additional non-spoken input from a user, to transmit a portion of the electronic data source to a device of the user) was found to be patentably distinct from these 45 formally identified references.
`
54. The references cited during the examination of the '021 Patent all represent patentably distinct, and in some instances prior art, means or methods to navigate
`
`
electronic data sources. By allowing the claims of the '021 Patent, each of the claims in the '021 Patent, as a whole, was shown to be inventive, novel, and innovative.

55. Each claim of the '021 Patent, as a whole, is inventive, novel, and innovative, and each claim, as a whole, constitutes more than the application of well-understood, routine, and conventional activities.
`
56. As of February 19, 2018, the '021