`
`Exhibit 1
`
`
`
`
`’337 patent, Claim 4
`
4[pre]. A voice-content control method, comprising:
[a] calculating a distance between a user and a voice-content control device;
[b] acquiring a voice spoken by a user;
[c] analyzing the acquired voice to classify the acquired voice as either one of a first voice and a second voice based on the distance between the user and the voice-content control device;
[d] analyzing the acquired voice to execute processing intended by the user;
[e] generating, based on content of the executed processing, output sentence that is text data for a voice to be output to the user; and
[f] adjusting a sound volume of voice data obtained by converting the output sentence thereinto, wherein
[g] at the generating,
[1] a first output sentence is generated as the output sentence when the acquired voice has been classified as the first voice, and
[2] a second output sentence is generated as the output sentence in which a part of information included in the first output sentence is omitted when the acquired voice has been classified as the second voice, wherein
[h] at adjusting the sound volume of voice data, further adjusting the sound volume of voice data such that the sound volume of voice data obtained by converting the first output sentence thereinto differs from the sound volume of voice data obtained by converting the second output sentence thereinto.
`
Legend (colors refer to highlighting in the original chart)
`
`Blue = classifying a voice based on proximity
`Violet = tailoring output based on that classification
`Grey = admittedly known steps
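
Read as an algorithm, claim 4 recites a distance-gated response pipeline: classify the speaker's voice by proximity, then vary both the wording and the volume of the reply. The following is a minimal, hypothetical Python sketch of those steps; the threshold value, the far/near mapping, the weather-lookup stand-in for step [d], and all identifiers are illustrative assumptions, taken neither from the patent nor from any accused product.

    # Hypothetical sketch of the method steps recited in claim 4.
    # NEAR_THRESHOLD_M, the far/near mapping, the weather stand-in for
    # step [d], and the 1.0/0.4 volumes are illustrative assumptions.

    NEAR_THRESHOLD_M = 1.5  # assumed cutoff between "first" and "second" voices

    def classify_voice(distance_m: float) -> str:
        # [a]/[c]: classify the acquired voice by user-to-device distance
        return "first" if distance_m >= NEAR_THRESHOLD_M else "second"

    def execute_intended_processing(spoken_text: str) -> str:
        # [d]: stand-in for executing the processing the user intended,
        # e.g. answering a weather question ([b], acquisition, is assumed
        # to have happened upstream)
        return "sunny with a high of 22 degrees"

    def generate_output_sentence(result: str, voice_class: str) -> str:
        # [e]/[g]: a full first output sentence for a first voice; a
        # second output sentence omitting part of that information otherwise
        if voice_class == "first":
            return f"Tomorrow's forecast for your area is {result}."
        return f"{result.capitalize()}."  # [g][2]: surrounding context omitted

    def respond(distance_m: float, spoken_text: str) -> tuple[str, float]:
        voice_class = classify_voice(distance_m)
        result = execute_intended_processing(spoken_text)
        sentence = generate_output_sentence(result, voice_class)
        # [f]/[h]: the two sentence types are played at different volumes
        volume = 1.0 if voice_class == "first" else 0.4
        return sentence, volume

Under this assumed mapping, respond(0.5, "What is tomorrow's weather?") would return the abbreviated sentence at the lower volume, while a caller farther than the threshold would receive the full sentence at full volume.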
`
`1
`
`
`
`Case 3:24-cv-00540-MHL Document 17-1 Filed 09/30/24 Page 3 of 5 PageID# 181
`
`’337 patent, Claim 4
`
`’337 patent, Claims 1-3
`
`
1[pre]. A voice-content control device, comprising:
[a] a proximity sensor configured to calculate a distance between a user and the voice-content control device;
[b] a voice classifying unit configured to analyze a voice spoken by a user and acquired by a voice acquiring unit to classify the voice as either one of a first voice or a second voice based on the distance between the user and the voice-content control device;
[c] a process executing unit configured to analyze the voice acquired by the voice acquiring unit to execute processing required by the user;
[d] a voice-content generating unit configured to generate, based on content of the processing executed by the process executing unit, output sentence that is text data for a voice to be output to the user; and
[e] an output controller configured to adjust a sound volume of voice data obtained by converting the output sentence thereinto, wherein
[f] the voice-content generating unit is further configured to
[g] generate a first output sentence as the output sentence when the acquired voice has been classified as the first voice, and
[h] generate a second output sentence in which information is omitted as compared to the first output sentence as the output sentence when the acquired voice has been classified as the second voice, wherein
[i] the output controller is further configured to adjust the sound volume of voice data such that the sound volume of voice data obtained by converting the first output sentence thereinto differs from the sound volume of the voice data obtained by converting the second output sentence thereinto.
`
2[pre]. The voice-content control device according to claim 1, wherein
[a] the process executing unit comprises:
[b] an intention analyzing unit configured to extract intention information indicating an intention of the user based on the voice acquired by the voice acquiring unit; and
[c] an acquisition content information acquiring unit configured to acquire acquisition content information which is notified to the user based on the extracted intention information, and
[d] the voice-content generating unit is further configured to generate the text data including the acquisition content information as the output sentence.
`
3. The voice-content control device according to claim 1, wherein, on generating the second sentence, the voice-content generating unit is further configured to omit a part of information included in the voice spoken by the user.
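
Claims 2 and 3 decompose the process executing unit of claim 1 into an intention analyzer plus a content acquirer, and have the second sentence drop information echoed from the user's own utterance. A hypothetical sketch of that decomposition follows; the single-entry intent table, the weather example, and every identifier are illustrative assumptions, not taken from the patent.

    # Hypothetical sketch of the claim 2 decomposition and the claim 3
    # omission. The intent table and the hard-coded weather answer are
    # illustrative assumptions only.

    def extract_intention(spoken_text: str) -> str:
        # claim 2[b]: extract intention information from the acquired voice
        return "GET_WEATHER" if "weather" in spoken_text.lower() else "UNKNOWN"

    def acquire_content(intention: str) -> str:
        # claim 2[c]: acquire the content to be notified to the user,
        # based on the extracted intention
        table = {"GET_WEATHER": "sunny with a high of 22 degrees"}
        return table.get(intention, "no answer found")

    def generate_sentences(spoken_text: str) -> tuple[str, str]:
        # claim 2[d]: both output sentences include the acquired content;
        # claim 3: the second sentence omits information already present
        # in the user's utterance (here, the place and day asked about)
        content = acquire_content(extract_intention(spoken_text))
        first = f"Tomorrow's weather in Tokyo will be {content}."
        second = f"It will be {content}."
        return first, second

For a query such as "What is tomorrow's weather in Tokyo?", the first sentence restates the place and day while the second omits them, since the user already supplied that information.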
`
’337 patent, Claim 5 (charted against Claim 4, above)
`
`
5[pre]. A non-transitory storage medium that stores a voice-content control program that causes a computer to execute:
[a] calculating a distance between a user and a voice-content control device;
[b] acquiring a voice spoken by a user;
[c] analyzing the acquired voice to classify the acquired voice as either one of a first voice and a second voice based on the distance between the user and the voice-content control device;
[d] analyzing the acquired voice to execute processing intended by the user;
[e] generating, based on content of the executed processing, output sentence that is text data for a voice to be output to the user; and
[f] adjusting a sound volume of voice data obtained by converting the output sentence thereinto, wherein
[g] at the generating,
[1] a first output sentence is generated as the output sentence when the acquired voice has been classified as the first voice, and
[2] a second output sentence is generated as the output sentence in which a part of information included in the first output sentence is omitted when the acquired voice has been classified as the second voice, wherein
[h] at adjusting the sound volume of voice data, further adjusting the sound volume of voice data such that the sound volume of voice data obtained by converting the first output sentence thereinto differs from the sound volume of voice data obtained by converting the second output sentence thereinto.
`
`4
`
`