EXHIBIT 3
`
`
`
`
US011244675B2

(12) United States Patent
Naganuma

(10) Patent No.: US 11,244,675 B2
(45) Date of Patent: Feb. 8, 2022

(54) WORD REPLACEMENT IN OUTPUT GENERATION FOR DETECTED INTENT BY VOICE CLASSIFICATION

(71) Applicant: JVC KENWOOD Corporation, Yokohama (JP)
`
(72) Inventor: Tatsumi Naganuma, Yokohama (JP)

(73) Assignee: JVCKENWOOD Corporation, Yokohama (JP)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 152 days.

(21) Appl. No.: 16/295,034

(22) Filed: Mar. 7, 2019

(65) Prior Publication Data
US 2019/0279631 A1, Sep. 12, 2019

(30) Foreign Application Priority Data
Mar. 12, 2018 (JP) ................ JP2018-044598

(51) Int. Cl.
G10L 17/06 (2013.01)
G06F 40/247 (2020.01)
G10L 15/22 (2006.01)
G10L 15/18 (2013.01)
G06F 3/16 (2006.01)
G10L 25/51 (2013.01)

(52) U.S. Cl.
CPC ......... G10L 15/22 (2013.01); G06F 3/167 (2013.01); G10L 15/1815 (2013.01); G10L 25/51 (2013.01); G10L 2015/223 (2013.01)

(58) Field of Classification Search
CPC ......... G06F 40/237; G06F 40/247; G06F 40/279; G06F 40/284; G10L 13/033; G10L 15/1822; G10L 15/22; G10L 25/63; G10L 2015/226; G10L 2015/228; G10L 17/00; G10L 17/06
USPC ......... 704/19, 10, 231, 246, 239, 270, 275, 238
See application file for complete search history.

(56) References Cited

U.S. PATENT DOCUMENTS

5,852,804 A 12/1998 Sako
10,341,825 B2* 7/2019 Dowlatkhah ......... G10L 13/08
(Continued)

FOREIGN PATENT DOCUMENTS

JP 04-204700 7/1992
WO 2016/136062 9/2016

Primary Examiner — Martin Lerner
(74) Attorney, Agent, or Firm — Amin, Turocy & Watson, LLP
(57) ABSTRACT

An output-content control device includes a voice classifying unit configured to analyze a voice spoken by a user and acquired by a voice acquiring unit to determine whether the voice is a predetermined voice; an intention analyzing unit configured to analyze the voice acquired by the voice acquiring unit to detect intention information indicating what kind of information is wished to be acquired by the user; a notification-information acquiring unit configured to acquire notification information to be notified to the user based on the intention information; and an output-content generating unit configured to generate an output sentence as sentence data to be output to the user based on the notification information and also configured to generate the output sentence in which at least one word selected among words included in the notification information is replaced with another word when the voice is determined to be the predetermined voice.

7 Claims, 6 Drawing Sheets
`
[Representative drawing: flowchart including the steps "ACQUIRE INPUT VOICE" (S10), "GENERATE TEXT DATA BY ANALYZING INPUT VOICE", and "GENERATE SECOND OUTPUT SENTENCE"]
`
`
(56) References Cited

U.S. PATENT DOCUMENTS

10,706,846 B1* 7/2020 Barton ................ G10L 15/1815
10,733,989 B2* 8/2020 Yehuday .............. G10L 15/22
2001/0041977 A1 11/2001 Aoyagi ............... G10L 15/22 (704/246)
2009/0234639 A1* 9/2009 Teague ............... G06F 40/56 (704/270.1)
2011/0184721 A1* 7/2011 Subramanian ......... G10L 19/0018 (704/4)
2014/0309999 A1 10/2014 Basson ............... G10L 25/00 (704/270)
2014/0330560 A1 11/2014 Venkatesha .......... G10L 15/22 (704/275)
2015/0331665 A1* 11/2015 Ishii ................ G10L 15/30 (715/728)
2015/0332674 A1* 11/2015 Nishino ............. G10L 17/00 (704/246)
2017/0025121 A1* 1/2017 Tang ................. G10L 17/06
2017/0256252 A1* 9/2017 Christian ........... G10L 13/027
2017/0337921 A1* 11/2017 Aoyama ............. G10L 15/26
2017/0351487 A1* 12/2017 Vaquero ............ G10L 17/00
2017/0364310 A1* 12/2017 Endo ............... G10L 17/26
2017/0366662 A1* 12/2017 Schuster ........... G10L 25/24
2018/0047201 A1* 2/2018 Filev .............. G06F 3/011
2018/0114159 A1* 4/2018 ................... G06F 15/16
2019/0103127 A1* 4/2019 ................... G06F 40/247
2019/0115008 A1* 4/2019 ................... G10L 15/22
2019/0121842 A1* 4/2019 ................... G06F 40/253
2019/0279611 A1* 9/2019 Naganuma ......... G10L 15/22
2019/0279631 A1* 9/2019 Naganuma ......... G10L 15/22
2020/0013401 A1* 1/2020 Saito ............. G10L 15/22

* cited by examiner
`
`
`
FIG. 1 [drawing sheet 1 of 6: schematic diagram of the output-content control device according to the first embodiment]
`
FIG. 2 [drawing sheet 2 of 6: schematic block diagram of the output-content control device 1; labeled blocks include the voice detecting unit, the output-content controller, the intention analyzing unit 34, the notification-information acquiring unit (attribute-information acquiring unit, acquisition-information acquiring unit), the output-content generating unit 60 (first-output-content generating unit 62, second-output-content generating unit), the processor, and the output controller 44]
`
FIG. 3 [drawing sheet 3 of 6: table of intention information; INTENTION INFORMATION I: SCHEDULE]

FIG. 4 [table of attribute information; ATTRIBUTE TYPE INFORMATION E0: PERSON, DATE; ATTRIBUTE CONTENT INFORMATION E1: MR. YAMADA, MARCH 20, 2020]

FIG. 5 [table of acquisition information; ACQUISITION TYPE INFORMATION A0: LOCATION, TIME, WHAT TO DO, PERSON; ACQUISITION CONTENT INFORMATION A1: TOKYO BUILDING, (time illegible), MEETING, MR. YOSHIDA]

FIG. 6 [table of relationship information; WORD TO BE REPLACED: HOSPITAL VISIT, MEETING; replacement words: MEETING, DINNER]
`
FIG. 7 [drawing sheet 4 of 6: flowchart of the output processing; legible steps include GENERATE TEXT DATA BY ANALYZING INPUT VOICE (S12), CLASSIFY INPUT VOICE (S22), DETECT INTENTION INFORMATION BASED ON TEXT DATA, DERIVE ATTRIBUTE INFORMATION BASED ON INTENTION INFORMATION, ACQUIRE ACQUISITION INFORMATION BASED ON INTENTION INFORMATION AND ATTRIBUTE INFORMATION (S28), and OUTPUT OUTPUT SENTENCE (S30)]
`
`
FIG. 8 [drawing sheet 5 of 6: schematic diagram showing another example of the output-content control device according to the first embodiment]
`
FIG. 9 [drawing sheet 6 of 6: schematic block diagram of the information output system according to the second embodiment; the rotated labels include the voice classifying unit, the output-content generating units, the notification-information acquiring unit, a response device, a lighting unit, the storage, and the controller]
`
WORD REPLACEMENT IN OUTPUT GENERATION FOR DETECTED INTENT BY VOICE CLASSIFICATION
`
`CROSS-REFERENCE TO RELATED
`APPLICATION
`
This application claims priority from Japanese Application No. 2018-044598, filed on Mar. 12, 2018, the contents of which are incorporated by reference herein in its entirety.
`
FIELD
`
`The present application relates to an output-content con-
`trol device, an output-content control method, and a non-
`transitory storage medium.
`
`BACKGROUND
`
For example, as described in Japanese Examined Patent Application Publication No. H07-109560, voice control devices that analyze a detected voice of a user and perform processing according to the intention of the user have been disclosed. Moreover, voice control devices that output, by voice, a notification indicating that a processing intended by a user has been performed, or that output, by voice, a response to an inquiry from a user, have also been disclosed.

When a voice processing device that outputs voice is used, there is a case in which a notification from the voice control device in response to an inquiry of a user is heard by people around the user. In this case, even when the user wishes the notification from the voice control device not to be known by other people, it can become known by people other than the user. Therefore, it has been desired to make the content of a notification in response to an inquiry of a user difficult for people other than the user to understand when the content of the notification is output.
`
SUMMARY

An output-content control device, an output-content control method, and a non-transitory storage medium are disclosed.

According to one aspect, there is provided an output-content control device comprising: a voice classifying unit configured to analyze a voice spoken by a user and acquired by a voice acquiring unit to determine whether the voice is a predetermined voice; an intention analyzing unit configured to analyze the voice acquired by the voice acquiring unit to detect intention information indicating what kind of information is wished to be acquired by the user; a notification-information acquiring unit configured to acquire notification information to be notified to the user based on the intention information; and an output-content generating unit configured to generate an output sentence as sentence data to be output to the user based on the notification information, wherein the output-content generating unit is further configured to generate the output sentence in which at least one word selected among words included in the notification information is replaced with another word when the voice is determined to be the predetermined voice.

According to one aspect, there is provided an output-content control method comprising: analyzing an acquired voice spoken by a user to determine whether the voice is a predetermined voice; analyzing the acquired voice to detect intention information indicating what kind of information is wished to be acquired by the user; acquiring notification information to be notified to the user based on the intention information; and generating an output sentence as sentence data to be output to the user based on the notification information, wherein the generating further includes generating the output sentence in which at least one word selected among words included in the notification information is replaced with another word when the voice is determined to be the predetermined voice.

According to one aspect, there is provided a non-transitory storage medium that stores an output-content control program that causes a computer to execute: analyzing an acquired voice spoken by a user to determine whether the voice is a predetermined voice; analyzing the acquired voice to detect intention information indicating what kind of information is wished to be acquired by the user; acquiring notification information to be notified to the user based on the intention information; and generating an output sentence as sentence data to be output to the user based on the notification information, wherein the generating further includes generating the output sentence in which at least one word selected among words included in the notification information is replaced with another word when the voice is determined to be the predetermined voice.

The above and other objects, features, advantages and technical and industrial significance of this application will be better understood by reading the following detailed description of presently preferred embodiments of the application, when considered in connection with the accompanying drawings.
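As a rough illustration of the claimed flow, the following minimal Python sketch strings the four units together. It is only a sketch: the function names, the whisper-based voice classification, the keyword intent lookup, and the replacement table are hypothetical stand-ins (the replacement pairs echo FIG. 6), since the patent does not prescribe any particular implementation.

import re

# Hypothetical relationship information (cf. FIG. 6): word to be replaced -> replacement word.
REPLACEMENTS = {"hospital visit": "meeting", "meeting": "dinner"}

def classify_voice(is_whisper: bool) -> bool:
    # Voice classifying unit: decides whether the spoken voice is the
    # "predetermined voice" (stubbed here as a whisper flag).
    return is_whisper

def detect_intention(text: str) -> str | None:
    # Intention analyzing unit: a keyword lookup standing in for the
    # training-data matching described in the first embodiment.
    return "SCHEDULE" if "schedule" in text.lower() else None

def acquire_notification(intention: str) -> str:
    # Notification-information acquiring unit (stubbed example data).
    return "Hospital visit with Mr. Yamada at Tokyo Building"

def generate_output(notification: str, predetermined: bool) -> str:
    # Output-content generating unit: when the input voice was classified as
    # the predetermined voice, replace selected words in a single pass so a
    # replacement word is not itself replaced again.
    if not predetermined:
        return notification
    pattern = re.compile("|".join(re.escape(w) for w in REPLACEMENTS), re.IGNORECASE)
    return pattern.sub(lambda m: REPLACEMENTS[m.group(0).lower()], notification)

# A whispered "Today's schedule?" yields the masked second output sentence:
intent = detect_intention("Today's schedule?")
if intent is not None:
    print(generate_output(acquire_notification(intent), classify_voice(True)))
    # -> "meeting with Mr. Yamada at Tokyo Building"

The single-pass substitution matters here: applying the pairs sequentially would turn "hospital visit" into "meeting" and then immediately into "dinner".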
`
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of an output-content control device according to a first embodiment;
FIG. 2 is a schematic block diagram of the output-content control device according to the first embodiment;
FIG. 3 is a table showing an example of intention information;
FIG. 4 is a table showing an example of attribute information;
FIG. 5 is a table showing an example of acquisition information;
FIG. 6 is a table showing an example of relationship information;
FIG. 7 is a flowchart showing a flow of an output processing of an output sentence according to the first embodiment;
FIG. 8 is a schematic diagram showing another example of the output-content control device according to the first embodiment; and
FIG. 9 is a schematic block diagram of an information output system according to a second embodiment.
`
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Embodiments of the present application are explained in detail below with reference to the drawings. The embodiments explained below are not intended to limit the present application.

First Embodiment

First, a first embodiment is explained. FIG. 1 is a schematic diagram of an output-content control device according to the first embodiment. As shown in FIG. 1, an output-content control device 1 according to the first embodiment
detects a voice V1 spoken by a user H by a voice detecting unit 10, analyzes the detected voice V1 to perform a predetermined processing, and outputs a voice V2 by a voice output unit 12. Although the voice V2 is output toward the user H, when other people are present around the output-content control device 1, the voice V2 can be heard by those people. In this case, even if the voice V2 includes information that the user H wishes not to be known by other people, there is a risk that the voice V2 is heard by people other than the user H and the information is learned by them. The output-content control device 1 according to the present embodiment analyzes the voice V1 and adjusts the sentences and the like to be output as the voice V2, thereby making it possible for the contents of the voice V2 to be understood appropriately by the user H, and hardly by people other than the user H.
`
FIG. 2 is a schematic block diagram of the output-content control device according to the first embodiment. As shown in FIG. 2, the output-content control device 1 includes the voice detecting unit 10, the voice output unit 12, a lighting unit 14, a controller 16, a communication unit 18, and a storage 20. The output-content control device 1 is a so-called smart speaker (artificial intelligence (AI) speaker), but is not limited thereto as long as the device has the functions described later. The output-content control device 1 can be, for example, a smartphone, a tablet, or the like.

The voice detecting unit 10 is a microphone and detects the voice V1 spoken by the user H. The user H speaks the voice V1 toward the voice detecting unit 10 so as to include information about a processing wished to be performed by the output-content control device 1. The voice detecting unit 10 can be regarded as an input unit that accepts information input externally. The input unit can include a function other than the voice detecting unit 10; for example, a switch to adjust the volume of the voice V2 by operation performed by the user H, and the like, can be provided. The voice output unit 12 is a speaker, and outputs sentences (output sentences described later) generated by the controller 16 as the voice V2. The lighting unit 14 is a light source, such as a light emitting diode (LED), and is turned on under control of the controller 16. The communication unit 18 is a mechanism to communicate with external servers, such as a Wi-Fi (registered trademark) module and an antenna, and communicates information with an external server (not shown) under control of the controller 16. The communication unit 18 performs communication of information with external servers by wireless communication such as Wi-Fi, but the communication of information with external servers can also be performed by wired communication over connected cables. The storage 20 is a memory that stores information on arithmetic calculation of the controller 16 or programs, and includes, for example, at least one of a random access memory (RAM), a read-only memory (ROM), and an external storage device, such as a flash memory.

The controller 16 is an arithmetic unit, namely, a central processing unit (CPU). The controller 16 includes a voice acquiring unit 30, a voice analyzing unit 32, an intention analyzing unit 34, a notification-information acquiring unit 36, a processor 38, an output-content generating unit (voice-content generating unit) 40, a voice classifying unit 42, and an output controller 44. The voice acquiring unit 30, the voice analyzing unit 32, the intention analyzing unit 34, the notification-information acquiring unit 36, the processor 38, the output-content generating unit 40, the voice classifying unit 42, and the output controller 44 perform the processes described later by reading software (programs) stored in the storage 20.
`
The voice acquiring unit 30 acquires the voice V1 that is detected by the voice detecting unit 10. The voice analyzing unit 32 performs voice analysis of the voice V1 acquired by the voice acquiring unit 30, to convert the voice V1 into text data. The text data is character data (text data) that includes the sentence spoken as the voice V1. The voice analyzing unit 32 detects, for example, an amplitude waveform (speech waveform) per time from the voice V1. The voice analyzing unit 32 then replaces the amplitude waveform per time with a character based on a table in which a relationship between amplitude waveforms and characters is stored, thereby converting the voice V1 into text data. Note that the converting method can be arbitrarily chosen as long as it enables the voice V1 to be converted into text data.
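The table-based conversion mentioned above can be pictured as a simple lookup. The sketch below is deliberately simplified and purely illustrative: the per-frame signature function and the waveform-to-character table are invented for the example, real speech recognition is far more involved, and the patent explicitly leaves the conversion method open.

from typing import Callable

# Hypothetical table mapping a per-frame waveform signature to a character.
WAVEFORM_TO_CHAR: dict[int, str] = {0: "a", 1: "i", 2: "u"}

def to_text(frames: list[list[float]], signature: Callable[[list[float]], int]) -> str:
    # Replace each per-time amplitude waveform with a character by table
    # lookup, concatenating the characters into text data.
    return "".join(WAVEFORM_TO_CHAR.get(signature(frame), "?") for frame in frames)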
`
The intention analyzing unit 34 acquires the text data that is generated by the voice analyzing unit 32, and detects intention information I based on the text data. The intention information I is information indicating an intention of the user H, namely, an intent. In other words, the intention information I is information that indicates what kind of processing the user H intends the output-content control device 1 to perform, and, in the present embodiment, it is information that indicates what kinds of information the user H wishes to obtain.
The intention analyzing unit 34 extracts the intention information I from the text data by using, for example, natural language processing. In the present embodiment, the intention analyzing unit 34 detects the intention information I from the text data based on multiple pieces of training data stored in the storage 20. The training data herein is data in which the intention information I has been assigned to text data in advance. That is, the intention analyzing unit 34 extracts the training data that is similar to the text data generated by the voice analyzing unit 32, and regards the intention information I of the extracted training data as the intention information I of the text data generated by the voice analyzing unit 32. Note that the training data is not necessarily required to be stored in the storage 20, and the intention analyzing unit 34 can search for the training data in an external server by controlling the communication unit 18. As long as the intention analyzing unit 34 extracts the intention information I from text data, the extracting method of the intention information I can be arbitrarily chosen. For example, the intention analyzing unit 34 can read a relationship table of keywords and the intention information I stored in the storage 20, and can extract the intention information I that is associated with a keyword when that keyword in the relationship table is included in the text data.
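Both extraction strategies described in this paragraph can be sketched in a few lines. The training examples, the word-overlap similarity, and the keyword table below are toy stand-ins, not the patent's method; a real implementation would use a trained natural-language model.

# Toy stand-ins for the stored training data and the keyword relationship table.
TRAINING_DATA = {"today's schedule is": "SCHEDULE", "how is the weather": "WEATHER"}
KEYWORD_TABLE = {"schedule": "SCHEDULE", "konnichiwa": "WEATHER_AND_NEWS"}

def similarity(a: str, b: str) -> float:
    # Toy word-overlap (Jaccard) similarity standing in for a real NLP model.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def extract_intention(text: str) -> str | None:
    # Strategy 1: reuse the intention information of the most similar training example.
    best = max(TRAINING_DATA, key=lambda t: similarity(text, t))
    if similarity(text, best) > 0.5:
        return TRAINING_DATA[best]
    # Strategy 2: fall back to the keyword relationship table.
    for keyword, intention in KEYWORD_TABLE.items():
        if keyword in text.lower():
            return intention
    return None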
FIG. 3 is a table showing an example of the intention information. For example, when the text data is the sentence "today's schedule is", the intention analyzing unit 34 recognizes, by performing the analysis described above, that a processing of informing the user H about a schedule corresponds to the processing requested by the user H, that is, the intention information I. That is, the intention analyzing unit 34 detects that the information that the user wishes to acquire, that is, the intention information I, is a schedule.

The detecting method of the intention information I using text data can be arbitrarily chosen and is not limited thereto. For example, the output-content control device 1 can be configured to store a relationship table of keywords and the intention information I in the storage 20, and to detect the intention information I associated with a keyword when the keyword is included in the text data of the voice V1 spoken by the user H. As an example of this case, the keyword "konnichiwa" may be associated with weather information and news information. In this case, when the user H speaks the voice V1 "konnichiwa", the intention analyzing unit 34 detects the weather information and the news information as the intention information I.
`
The notification-information acquiring unit 36 acquires notification information, which is the content of information to be given to the user H, based on the intention information I. As shown in FIG. 2, the notification-information acquiring unit 36 includes an attribute-information acquiring unit 50 that acquires attribute information E, and an acquisition-information acquiring unit 52 that acquires acquisition information A. The notification information is information including the attribute information E and the acquisition information A.
`
The attribute-information acquiring unit 50 acquires the attribute information E based on the intention information I. The attribute information E is information that is associated with the intention information I, and is information that indicates a condition necessary for acquiring the information that the user H wishes to acquire. Namely, the attribute information E is an entity. For example, even if it is determined that the intention information I is a schedule, the output-content control device 1 cannot determine which and whose schedule to notify when the conditions to further specify the intention information I are unknown. In this case, the output-content control device 1 cannot provide notification according to the intention of the user H. For this reason, the attribute-information acquiring unit 50 acquires the attribute information E as a condition to further specify the intention information I, to enable determination of which and whose schedule to notify.
FIG. 4 is a table showing an example of the attribute information. The attribute information E includes attribute type information E0 and attribute content information E1. The attribute type information E0 is information indicating the types of conditions, that is, what kinds of conditions they are; in other words, it is information in which the conditions to further specify the intention information I are classified. The attribute content information E1 is the content of the attribute type information E0. Therefore, the attribute type information E0 can be regarded as information indicating the types of the attribute content information E1, and the attribute type information E0 and the attribute content information E1 are associated with each other. As shown in FIG. 4, for example, when the attribute type information E0 includes "person" as one of the types of conditions, the attribute content information E1 associated therewith is information specifying the name of the person (in this example, "Mr. Yamada"). Furthermore, in the example shown in FIG. 4, when the attribute type information E0 includes "date" as one of the types of conditions, the attribute content information E1 associated therewith is information indicating a date (in this example, Mar. 20, 2020). By thus setting the attribute content information E1, it becomes certain that, for example, the schedule of Mr. Yamada on Mar. 20, 2020 should be notified. In the example of the present embodiment, Mr. Yamada is the user H himself.

In the present embodiment, the attribute-information acquiring unit 50 detects the attribute type information E0 from the extracted intention information I. The attribute-information acquiring unit 50 reads a relationship table of the intention information I and the attribute type information E0 stored in the storage 20, and detects, from the relationship table, the intention information I that coincides with the intention information I detected by the intention analyzing unit 34. The attribute-information acquiring unit 50 then extracts and acquires the attribute type information E0 that is associated with the intention information I coincident with the intention information I detected by the intention analyzing unit 34. For example, when the intention information I is a schedule, a person and a date are included in the relationship table as the attribute type information E0 associated with a schedule. In this case, the attribute-information acquiring unit 50 extracts the two pieces of information, a person and a date, as the attribute type information E0. As above, two pieces of the attribute type information E0 correspond to one piece of the intention information I in this example, but the number of pieces of the attribute type information E0 corresponding to one piece of the intention information I may differ depending on the content of the intention information I. That is, the number of pieces of the attribute type information E0 corresponding to one piece of the intention information I can be one, or three or more. Moreover, the intention analyzing unit 34 reads the relationship table from the storage 20, but the relationship table can be read from any source; for example, it can be acquired from an external server (external device, not shown) by communication through the communication unit 18.

Having acquired the attribute type information E0, the attribute-information acquiring unit 50 sets the attribute content information E1 for each piece of the attribute type information E0. The attribute-information acquiring unit 50 extracts, for example, the attribute content information E1 from the text data generated by the voice analyzing unit 32. For example, when the keyword "today" is included in the text data generated from the voice V1, the attribute content information E1 corresponding to the attribute type information E0 of a date is set to today's date ("Mar. 20, 2020" in the example of FIG. 4). Furthermore, the attribute-information acquiring unit 50 can set the attribute content information E1 corresponding to the attribute type information E0 in advance. In this case, when the intention information I is a schedule, for example, setting data indicating that the attribute content information E1 is a content determined in advance is stored in the storage 20. That is, for example, it is stored in advance in the storage 20 that the attribute content information E1 corresponding to the attribute type information E0 of a person is "Mr. Yamada". By this, the attribute-information acquiring unit 50 can set the attribute content information E1 of a person even when a keyword representing a person is not included in the text data. Moreover, the attribute-information acquiring unit 50 can set the attribute content information E1 by communicating with an external server by the communication unit 18. For example, when one piece of the attribute type information E0 is a location, the intention analyzing unit 34 can acquire the current position using a global positioning system (GPS) by communication, and can set it as the attribute content information E1. Moreover, the output-content control device 1 can output a notification to prompt the user H to provide the information of the attribute content information E1. In this case, for example, the attribute-information acquiring unit 50 selects the attribute type information E0 for which acquisition of the attribute content information E1 is required, and causes the output-content generating unit 40 to generate a sentence requesting the user H to give the attribute content information E1. For example, in the case of the attribute type information E0 of a date, the output-content generating unit 40 generates a sentence such as "Please provide a date for which the schedule is wished to be notified". Subsequently, the output controller 44 causes the voice output unit 12 to
output this sentence. The user H then speaks, for example, a voice indicating that the date is today; the voice is analyzed by the voice analyzing unit 32, and the attribute-information acquiring unit 50 acquires information indicating that one piece of the attribute content information E1 is "today".
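The attribute acquisition just described (fill E1 from the spoken text, fall back to presets, otherwise prompt the user) can be sketched as follows. The relationship table, the preset values, and the date handling are invented stand-ins for illustration and are not prescribed by the patent.

from datetime import date

ATTRIBUTE_TYPES = {"SCHEDULE": ["person", "date"]}   # intention -> attribute types E0
PRESET_E1 = {"person": "Mr. Yamada"}                 # E1 values stored in advance

def acquire_attributes(intention: str, text: str) -> dict[str, str | None]:
    # Set attribute content information E1 for each attribute type E0: first
    # from the spoken text, then from presets; None means the device would
    # prompt the user (e.g., "Please provide a date ...").
    attributes: dict[str, str | None] = {}
    for e0 in ATTRIBUTE_TYPES.get(intention, []):
        if e0 == "date" and "today" in text.lower():
            attributes[e0] = date.today().isoformat()  # keyword found in text data
        elif e0 in PRESET_E1:
            attributes[e0] = PRESET_E1[e0]             # preset stored in advance
        else:
            attributes[e0] = None                       # must prompt the user
    return attributes

print(acquire_attributes("SCHEDULE", "today's schedule is"))
# e.g. {'person': 'Mr. Yamada', 'date': '2020-03-20'}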
FIG. 5 is a table showing an example of the acquisition information. The acquisition-information acquiring unit 52 shown in FIG. 2 acquires the acquisition information A based on the intention information I and the attribute information E. The acquisition information A is information according to the intention of the user H; in other words, it is the information that the user H wishes to acquire. The acquisition information A includes acquisition type information A0 and acquisition content information A1. The acquisition type information A0 is information that indicates what kind of information the user H wishes to acquire; in other words, it is information in which the information desired by the user H is classified. The acquisition content information A1 indicates the content of the acquisition type information A0. That is, the acquisition type information A0 can be regarded as information indicating the type of the acquisition content information A1. Therefore, the acquisition type information A0 and the acquisition content information A1 are associated with each other. As shown in FIG. 5, for example, when the acquisition type information A0 includes "location" as one of the types of information that the user H wishes to acquire, the acquisition content information A1 associated therewith is information indicating the location ("Tokyo building" in the example of FIG. 5).
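The acquisition information A is then what a backing data source returns for the detected intention and attributes. The following sketch uses an invented in-memory store keyed by the attribute information E; a real device would query a schedule service or an external server (the example values echo FIG. 5, with the time value invented).

# Hypothetical relationship table and backing store, invented for illustration.
ACQUISITION_TYPES = {"SCHEDULE": ["location", "time", "what to do", "person"]}

SCHEDULE_DB = {
    ("Mr. Yamada", "2020-03-20"): {
        "location": "Tokyo building",
        "time": "7 o'clock",
        "what to do": "meeting",
        "person": "Mr. Yoshida",
    },
}

def acquire_acquisition_info(intention: str, attributes: dict[str, str]) -> dict[str, str]:
    # Look up acquisition content information A1 for each acquisition type A0.
    record = SCHEDULE_DB.get((attributes["person"], attributes["date"]), {})
    return {a0: record.get(a0, "") for a0 in ACQUISITION_TYPES.get(intention, [])}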
In the present embodiment, the acquisition-information acquiring unit 52 detects the acquisition type information A0 from the intention information I. The acquisition-information acquiring unit 52 reads a relationship table of the intention information I and the acquisition type information A0 stored in the storage 20, and detects, from the relationship table, the intention information I that coincides with the intention information I detected by the intention analyzing unit 34. The acquisition-information acquiring unit 52 then extracts and acquires, from the relationship table, the acquisition type information A0 that is associated with the intention information I that coincides with the intention information I detected by the intention analyzing unit 34. For example, when the intention information I is a schedule, the relationship table includes a location, a time, what to do, and a person as the acquisition type information A0 associated with a schedule. In this case, the acquisition-information acquiring unit 52 extracts the four pieces of information, a location, a time, what to do, and a person, as the acquisition type information A0. As above, four pieces of the acquisition type information A0 correspond to one piece of the intention information I in this example, but the number of pieces of the acquisition type information A0 corresponding to one piece of the intention information I may differ depending on the content of the intention information I. That is, the number of pie