Case 3:24-cv-00540-MHL Document 17-1 Filed 09/30/24 Page 1 of 5 PageID# 179
Exhibit 1
’337 patent, Claim 4

4[pre]. A voice-content control method, comprising:
[a] calculating a distance between a user and a voice-content control device;
[b] acquiring a voice spoken by a user;
[c] analyzing the acquired voice to classify the acquired voice as either one of a first voice and a second voice based on the distance between the user and the voice-content control device;
[d] analyzing the acquired voice to execute processing intended by the user;
[e] generating, based on content of the executed processing, output sentence that is text data for a voice to be output to the user; and
[f] adjusting a sound volume of voice data obtained by converting the output sentence thereinto, wherein
[g] at the generating,
    [1] a first output sentence is generated as the output sentence when the acquired voice has been classified as the first voice, and
    [2] a second output sentence is generated as the output sentence in which a part of information included in the first output sentence is omitted when the acquired voice has been classified as the second voice, wherein
[h] at adjusting the sound volume of voice data, further adjusting the sound volume of voice data such that the sound volume of voice data obtained by converting the first output sentence thereinto differs from the sound volume of voice data obtained by converting the second output sentence thereinto.

Legend

Blue = classifying a voice based on proximity
Violet = tailoring output based on that classification
Grey = admittedly known steps
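For orientation only, the method recited in claim 4 can be paraphrased as the following illustrative sketch. Nothing below is part of the exhibit or the patent: the threshold, function names, and example output sentences are all hypothetical, chosen solely to mirror the claimed steps (distance-based classification [a]/[c], tailored output generation [e]/[g], and differing sound volumes [f]/[h]).

```python
# Hypothetical sketch of claim 4's method; all names and values are illustrative.

NEAR_THRESHOLD_M = 1.5  # hypothetical cutoff between the "first" and "second" voice


def classify_voice(distance_m: float) -> str:
    # [c] classify the acquired voice based on the user-device distance
    return "first" if distance_m <= NEAR_THRESHOLD_M else "second"


def generate_output_sentence(result: dict, voice_class: str) -> str:
    # [g][1] generate a full first output sentence for a first voice;
    # [g][2] omit a part of that information for a second voice
    full = f"Today's weather in {result['city']} is {result['weather']}."
    if voice_class == "first":
        return full
    return f"{result['weather'].capitalize()}."  # part of the information omitted


def adjust_volume(voice_class: str) -> float:
    # [h] the volumes for the two output sentences must differ
    return 1.0 if voice_class == "second" else 0.6


def control_voice_content(distance_m: float, result: dict):
    # [a]-[h] end-to-end: classify, generate tailored sentence, adjust volume
    voice_class = classify_voice(distance_m)
    sentence = generate_output_sentence(result, voice_class)
    volume = adjust_volume(voice_class)
    return sentence, volume
```

A nearby user thus receives the longer sentence at one volume, while a distant user receives the abbreviated sentence at a different volume, tracking the first/second distinction the claim draws.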
’337 patent, Claim 4

4[pre]. A voice-content control method, comprising:
[a] calculating a distance between a user and a voice-content control device;
[b] acquiring a voice spoken by a user;
[c] analyzing the acquired voice to classify the acquired voice as either one of a first voice and a second voice based on the distance between the user and the voice-content control device;
[d] analyzing the acquired voice to execute processing intended by the user;
[e] generating, based on content of the executed processing, output sentence that is text data for a voice to be output to the user; and
[f] adjusting a sound volume of voice data obtained by converting the output sentence thereinto, wherein
[g] at the generating,
    [1] a first output sentence is generated as the output sentence when the acquired voice has been classified as the first voice, and
    [2] a second output sentence is generated as the output sentence in which a part of information included in the first output sentence is omitted when the acquired voice has been classified as the second voice, wherein
[h] at adjusting the sound volume of voice data, further adjusting the sound volume of voice data such that the sound volume of voice data obtained by converting the first output sentence thereinto differs from the sound volume of voice data obtained by converting the second output sentence thereinto.

’337 patent, Claims 1-3

1[pre]. A voice-content control device, comprising:
[a] a proximity sensor configured to calculate a distance between a user and the voice-content control device;
[b] a voice classifying unit configured to analyze a voice spoken by a user and acquired by a voice acquiring unit to classify the voice as either one of a first voice or a second voice based on the distance between the user and the voice-content control device;
[c] a process executing unit configured to analyze the voice acquired by the voice acquiring unit to execute processing required by the user;
[d] a voice-content generating unit configured to generate, based on content of the processing executed by the process executing unit, output sentence that is text data for a voice to be output to the user; and
[e] an output controller configured to adjust a sound volume of voice data obtained by converting the output sentence thereinto, wherein
[f] the voice-content generating unit is further configured to
[g] generate a first output sentence as the output sentence when the acquired voice has been classified as the first voice, and
[h] generate a second output sentence in which information is omitted as compared to the first output sentence as the output sentence when the acquired voice has been classified as the second voice, wherein
[i] the output controller is further configured to adjust the sound volume of voice data such that the sound volume of voice data obtained by converting the first output sentence thereinto differs from the sound volume of the voice data obtained by converting the second output sentence thereinto.

2[pre]. The voice-content control device according to claim 1, wherein
[a] the process executing unit comprises:
[b] an intention analyzing unit configured to extract intention information indicating an intention of the user based on the voice acquired by the voice acquiring unit; and
[c] an acquisition content information acquiring unit configured to acquire acquisition content information which is notified to the user based on the extracted intention information, and
[d] the voice-content generating unit is further configured to generate the text data including the acquisition content information as the output sentence.

3. The voice-content control device according to claim 1, wherein, on generating the second sentence, the voice-content generating unit is further configured to omit a part of information included in the voice spoken by the user.
’337 patent, Claim 4

4[pre]. A voice-content control method, comprising:
[a] calculating a distance between a user and a voice-content control device;
[b] acquiring a voice spoken by a user;
[c] analyzing the acquired voice to classify the acquired voice as either one of a first voice and a second voice based on the distance between the user and the voice-content control device;
[d] analyzing the acquired voice to execute processing intended by the user;
[e] generating, based on content of the executed processing, output sentence that is text data for a voice to be output to the user; and
[f] adjusting a sound volume of voice data obtained by converting the output sentence thereinto, wherein
[g] at the generating,
    [1] a first output sentence is generated as the output sentence when the acquired voice has been classified as the first voice, and
    [2] a second output sentence is generated as the output sentence in which a part of information included in the first output sentence is omitted when the acquired voice has been classified as the second voice, wherein
[h] at adjusting the sound volume of voice data, further adjusting the sound volume of voice data such that the sound volume of voice data obtained by converting the first output sentence thereinto differs from the sound volume of voice data obtained by converting the second output sentence thereinto.

’337 patent, Claim 5

5[pre]. A non-transitory storage medium that stores a voice-content control program that causes a computer to execute:
[a] calculating a distance between a user and a voice-content control device;
[b] acquiring a voice spoken by a user;
[c] analyzing the acquired voice to classify the acquired voice as either one of a first voice and a second voice based on the distance between the user and the voice-content control device;
[d] analyzing the acquired voice to execute processing intended by the user;
[e] generating, based on content of the executed processing, output sentence that is text data for a voice to be output to the user; and
[f] adjusting a sound volume of voice data obtained by converting the output sentence thereinto, wherein
[g] at the generating,
    [1] a first output sentence is generated as the output sentence when the acquired voice has been classified as the first voice, and
    [2] a second output sentence is generated as the output sentence in which a part of information included in the first output sentence is omitted when the acquired voice has been classified as the second voice, wherein
[h] at adjusting the sound volume of voice data, further adjusting the sound volume of voice data such that the sound volume of voice data obtained by converting the first output sentence thereinto differs from the sound volume of voice data obtained by converting the second output sentence thereinto.