`(12) Patent Application Publication (10) Pub. No.: US 2017/0083281 A1
(43) Pub. Date: Mar. 23, 2017
`SHIN
`
`
`(54) METHOD AND ELECTRONIC DEVICE FOR
`PROVIDING CONTENT
`
(71) Applicant: Samsung Electronics Co., Ltd., Gyeonggi-do (KR)

(72) Inventor: Sang Min SHIN, Gyeonggi-do (KR)
`
(73) Assignee: Samsung Electronics Co., Ltd.
`
(21) Appl. No.: 15/269,406
`
(22) Filed: Sep. 19, 2016

(30) Foreign Application Priority Data

Sep. 18, 2015 (KR) ........................ 10-2015-0132488
`
`Publication Classification
`
`(2006.01)
`(2006.01)
`(2006.01)
`
`(51) Int. Cl.
`G06F 3/16
`GIOL 25/63
`GOL 25/78
`(52) U.S. Cl.
`CPC .............. G06F 3/165 (2013.01); G10L 25/78
`(2013.01); G 10L 25/63 (2013.01)
(57) ABSTRACT
An electronic device and a method are provided. The electronic device includes an audio input module configured to receive a speech of a user as a voice input, an audio output module configured to output content corresponding to the voice input, and a processor configured to determine an output scheme of the content based on at least one of a speech rate of the speech, a volume of the speech, and a keyword included in the speech, which is obtained from an analysis of the voice input.
`
[Representative drawing (FIG. 5B): obtain speech of user as voice input (502); determine rate of movement of user through analysis of voice input (504); determine output scheme of content based on determined rate of movement of user (506); output content corresponding to voice input based on determined output scheme (508)]
`
`Amazon v. SoundClear
`US Patent 11,069,337
`Amazon Ex. 1003
`
`
`
Patent Application Publication, Mar. 23, 2017, Sheet 1 of 11, US 2017/0083281 A1
`
[FIG. 1A: user 10, at a standstill, provides a voice input to electronic device 100, which outputs corresponding content]

[FIG. 1B: user 10, moving away, provides a voice input to electronic device 100, which outputs corresponding content]
`
`
`
[FIG. 2: dedicated electronic device 100; smartphone 201 coupled with a docking station]
`
`
`
[FIG. 3: block diagram of an electronic device]
`
`
`
[FIG. 4: obtain speech of user as voice input (401); determine output scheme of content through analysis of voice input (403); output content corresponding to voice input based on determined output scheme (405)]
`
`
`
[FIG. 5A: obtain speech of user as voice input (501); determine distance between user and electronic device through analysis of voice input (503); determine output scheme of content based on determined distance between user and electronic device (505); output content corresponding to voice input based on determined output scheme (507)]
`
`
`
[FIG. 5B: obtain speech of user as voice input (502); determine rate of movement of user through analysis of voice input (504); determine output scheme of content based on determined rate of movement of user (506); output content corresponding to voice input based on determined output scheme (508)]
`
`
`
[FIG. 6: obtain speech of user as voice input (601); determine speed of speech through analysis of voice input (603); determine output scheme of content based on determined speed of speech (605); output content corresponding to voice input based on determined output scheme (607)]
`
`
`
[FIG. 7: obtain speech of user as voice input (701); detect designated keyword through analysis of voice input (703); determine output scheme of content based on detected keyword (705); output content corresponding to voice input based on determined output scheme (707)]
`
`
`
[FIG. 8: obtain speech of user as voice input (801); determine emotional status through analysis of voice input (803); determine output scheme of content based on determined emotional status (805); output content corresponding to voice input based on determined output scheme (807)]
`
`
`
[FIG. 9: block diagram illustrating an electronic device]
`
`
`
[FIG. 10: program module 1010: application 1070 (home 1071, dialer 1072, SMS/MMS 1073, IM 1074, browser 1075, camera 1076, alarm 1077, contact 1078, voice dial 1079, e-mail 1080, calendar 1081, media player 1082, album 1083, clock 1084); API 1060; middleware 1030 (application manager 1041, window manager 1042, multimedia manager 1043, resource manager 1044, power manager 1045, database manager 1046, package manager 1047, connectivity manager 1048, notification manager 1049, location manager 1050, graphic manager 1051, security manager 1052, runtime library 1035); kernel 1020 (system resource manager 1021, device driver 1023)]
`
`
`
`
`METHOD AND ELECTRONIC DEVICE FOR
`PROVIDING CONTENT
`
`PRIORITY
0001. This application claims priority under 35 U.S.C. § 119(a) to a Korean Patent Application filed in the Korean Intellectual Property Office on Sep. 18, 2015 and assigned Serial No. 10-2015-0132488, the entire disclosure of which is incorporated herein by reference.
`
`BACKGROUND
`0002 1. Field of the Disclosure
`0003. The present disclosure relates generally to voice
`input for an electronic device, and more particularly, to a
`method and an electronic device for providing content in
`response to a voice input.
`0004 2. Description of the Related Art
0005 Recently, user input interfaces applied to electronic devices have been capable of receiving user input based on voice input, in addition to user input based on physical manipulations performed by a user (e.g., input through a physical keypad, a keyboard, a mouse, or a touch panel).
0006 An electronic device that implements a voice input interface receives a user's speech as voice input, converts the voice input into an electrical signal, and provides content to the user based on the converted electrical signal.
0007 Electronic devices that support voice input interfaces are capable of providing, for example, content to a user by outputting sound (e.g., outputting a voice). However, the user does not share an emotional connection with the electronic device with respect to the content provided in response to the voice input, because the electronic device provides the content with a uniform speed, a monotonous tone, and a preset volume, regardless of the user's condition while providing the voice input. Furthermore, since the electronic device does not consider nuances according to the form of the user's speech, it is difficult for the electronic device to provide content appropriate for the user's condition.
`
`SUMMARY
0008. An aspect of the present disclosure is to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an aspect of the present disclosure is to provide a content providing method that analyzes a voice input of a user's speech and determines an output scheme of content based on the various speech features obtained from results of the analysis, and an electronic device performing the same.
0009. In accordance with an aspect of the present disclosure, an electronic device is provided. The electronic device includes an audio input module configured to receive a speech of a user as a voice input, an audio output module configured to output content corresponding to the voice input, and a processor configured to determine an output scheme of the content based on at least one of a speech rate of the speech, a volume of the speech, or a keyword included in the speech, which is obtained from an analysis of the voice input.
0010. In accordance with another aspect of the present disclosure, a content providing method of an electronic device is provided. The method includes receiving a speech of a user as a voice input, determining an output scheme of content based on at least one of a speech rate of the speech, a volume of the speech, or a keyword included in the speech, which is obtained from an analysis of the voice input, and outputting the content corresponding to the voice input based on the determined output scheme.
0011. In accordance with another aspect of the present disclosure, an instruction, which is recorded on a non-transitory computer-readable recording medium and executed by at least one processor, is configured to cause the at least one processor to perform a method. The method includes obtaining a speech of a user as a voice input, determining an output scheme of content based on at least one of a speech rate of the speech, a volume of the speech, or a keyword included in the speech, which is obtained from an analysis of the voice input, and outputting the content corresponding to the voice input based on the determined output scheme.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
0012. The above and other aspects, features, and advantages of certain embodiments of the present disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
0013 FIGS. 1A and 1B are diagrams illustrating an environment in which an electronic device operates, according to an embodiment of the present disclosure;
0014 FIG. 2 is a diagram illustrating an electronic device according to an embodiment of the present disclosure;
0015 FIG. 3 is a block diagram illustrating an electronic device according to an embodiment of the present disclosure;
0016 FIG. 4 is a flow chart illustrating a content providing method according to an embodiment of the present disclosure;
0017 FIG. 5A is a flow chart illustrating a content providing method based on a distance according to an embodiment of the present disclosure;
0018 FIG. 5B is a flow chart illustrating a content providing method based on a rate of movement of a user according to an embodiment of the present disclosure;
0019 FIG. 6 is a flow chart illustrating a content providing method based on a speech rate of a user's speech according to an embodiment of the present disclosure;
0020 FIG. 7 is a flow chart illustrating a content providing method based on a keyword according to an embodiment of the present disclosure;
0021 FIG. 8 is a flow chart illustrating a content providing method based on an emotional status of a user according to an embodiment of the present disclosure;
0022 FIG. 9 is a block diagram illustrating an electronic device according to embodiments of the present disclosure; and
0023 FIG. 10 is a block diagram illustrating a program module according to embodiments of the present disclosure.
`
`DETAILED DESCRIPTION
0024. Embodiments of the present disclosure may be described with reference to accompanying drawings. Accordingly, those of ordinary skill in the art will recognize that modifications, equivalents, and/or alternatives to the embodiments described herein can be variously made without departing from the scope and spirit of the present disclosure. With regard to description of drawings, similar components may be marked by similar reference numerals.
0025. Herein, the expressions "have", "may have", "include", "comprise", "may include", and "may comprise" indicate the existence of corresponding features (e.g., elements such as numeric values, functions, operations, or components) but do not exclude presence of additional features.
0026. Herein, the expressions "A or B", "at least one of A or/and B", "one or more of A or/and B", and the like may refer to any and all combinations of one or more of the associated listed items. For example, the terms "A or B", "at least one of A and B", and "at least one of A or B" may refer to cases in which at least one A is included, at least one B is included, or both at least one A and at least one B are included.
0027. The terms, such as "first", "second", and the like, as used herein, may refer to various elements of embodiments of the present disclosure, but do not limit the elements. For example, "a first user device" and "a second user device" indicate different user devices, regardless of the order or priority of the devices. For example, without departing from the scope of the present disclosure, a first element may be referred to as a second element, and similarly, a second element may be referred to as a first element.
0028. When an element (e.g., a first element) is referred to as being "(operatively or communicatively) coupled with/to" or "connected to" another element (e.g., a second element), the element may be directly coupled with/to or connected to the other element, or an intervening element (e.g., a third element) may be present. By contrast, when an element (e.g., a first element) is referred to as being "directly coupled with/to" or "directly connected to" another element (e.g., a second element), it should be understood that there is no intervening element (e.g., a third element).
0029 Depending on the situation, the expression "configured to", as used herein, may have a definition equivalent to "suitable for", "having the capacity to", "designed to", "adapted to", "made to", or "capable of". The term "configured to" is not limited to being defined as "specifically designed to" with respect to hardware. Instead, the expression "a device configured to" may indicate that the device is "capable of" operating together with another device or other components. For example, a "processor configured to perform A, B, and C" may refer to a dedicated processor (e.g., an embedded processor) for performing a corresponding operation or a generic-purpose processor (e.g., a central processing unit (CPU) or an application processor) that may perform corresponding operations by executing one or more software programs that are stored in a memory device.
0030 Terms used herein are used to describe specified embodiments of the present disclosure and are not intended to limit the scope of the present disclosure. Singular forms of terms may include plural forms unless otherwise specified. Unless otherwise defined herein, all the terms used herein, which include technical or scientific terms, may have the same definition that is generally understood by a person skilled in the art. It will be further understood that terms that are defined in a dictionary and commonly used should also be interpreted according to customary definitions in the relevant related art and not according to idealized or overly formal definitions, unless expressly so defined herein with respect to embodiments of the present disclosure. In some cases, terms that are defined in the specification may not be interpreted in a manner that excludes embodiments of the present disclosure.
0031. An electronic device according to an embodiment of the present disclosure may include at least one of smartphones, tablet personal computers (PCs), mobile phones, video telephones, electronic book readers, desktop PCs, laptop PCs, netbook computers, workstations, servers, personal digital assistants (PDAs), portable multimedia players (PMPs), Motion Picture Experts Group (MPEG-1 or MPEG-2) Audio Layer 3 (MP3) players, mobile medical devices, cameras, or wearable devices. The wearable device may include at least one of an accessory type (e.g., watches, rings, bracelets, anklets, necklaces, glasses, contact lenses, or head-mounted devices (HMDs)), a fabric or garment-integrated type (e.g., an electronic apparel), a body-attached type (e.g., a skin pad or tattoos), or an implantable type (e.g., an implantable circuit).
0032. According to an embodiment of the present disclosure, the electronic device may be a home appliance. The home appliances may include at least one of, for example, televisions (TVs), digital versatile disc (DVD) players, audios, refrigerators, air conditioners, cleaners, ovens, microwave ovens, washing machines, air cleaners, set-top boxes, TV boxes (e.g., Samsung HomeSync™, Apple TV™, or Google TV™), game consoles (e.g., Xbox™ and PlayStation™), electronic dictionaries, electronic keys, camcorders, electronic picture frames, and the like.
0033 According to embodiments of the present disclosure, the electronic devices may include at least one of medical devices (e.g., various portable medical measurement devices (e.g., a blood glucose monitoring device, a heartbeat measuring device, a blood pressure measuring device, a body temperature measuring device, and the like), a magnetic resonance angiography (MRA) device, a magnetic resonance imaging (MRI) device, a computed tomography (CT) device, scanners, and ultrasonic devices), navigation devices, global navigation satellite system (GNSS) receivers, event data recorders (EDRs), flight data recorders (FDRs), vehicle infotainment devices, electronic equipment for vessels (e.g., navigation systems and gyrocompasses), avionics, security devices, head units for vehicles, industrial or home robots, automatic teller machines (ATMs), points of sales (POS) devices, or Internet of things (IoT) devices (e.g., light bulbs, various sensors, electric or gas meters, sprinkler devices, fire alarms, thermostats, street lamps, toasters, exercise equipment, hot water tanks, heaters, boilers, and the like).
0034. According to embodiments of the present disclosure, the electronic devices may include at least one of parts of furniture or buildings/structures, electronic boards, electronic signature receiving devices, projectors, or various measuring instruments (e.g., water meters, electricity meters, gas meters, or wave meters, and the like). The electronic device may be one of the above-described devices or a combination thereof. An electronic device may be a flexible electronic device. Furthermore, an electronic device may not be limited to the above-described electronic devices and may include other electronic devices and new electronic devices according to the development of new technologies.
0035. Hereinafter, electronic devices according to embodiments of the present disclosure will be described with reference to the accompanying drawings. The term "user" used herein may refer to a person who uses an electronic device or may refer to a device (e.g., a device implementing an artificial intelligence) that uses an electronic device.
0036 FIGS. 1A and 1B are diagrams illustrating an environment in which an electronic device operates, according to an embodiment of the present disclosure.
0037 Referring to FIG. 1A, a user 10, who is spaced apart from an electronic device 100 by a specific distance and is at a standstill, may speak to the electronic device 100. The speech of the user 10 may be provided to the electronic device 100 as a voice input. For example, the user 10 may speak "Let me know what time it is now." with a moderate volume and at a moderate speech rate, and the speech may be provided to the electronic device 100 as a voice input. The electronic device 100 may receive the voice input through an embedded audio input module (e.g., a microphone) and may generate content corresponding to a result of analyzing the voice input. For example, the electronic device 100 may generate content, such as "The current time is nine ten AM.", in response to a voice input, such as "Let me know what time it is now." The electronic device 100 may provide the generated content as sound through an embedded audio output module (e.g., a speaker). In this case, since the user 10 provides a voice input with a moderate volume and at a moderate speech rate at a standstill, the electronic device 100 may provide the content as sound with a moderate output volume level and at a moderate output speed. As such, the user 10 may be provided with the content corresponding to the voice input.
0038 Referring to FIG. 1B, the user 10 who moves away from the electronic device 100 may speak to the electronic device 100. For example, the user 10 may be very busy with getting ready for work. Therefore, for example, the user 10 may make a speech, such as "What time is it now?", with a louder volume than usual and a faster speech rate than usual. The speech may be provided to the electronic device 100 as a voice input.
0039. The electronic device 100 may receive the voice input through the embedded audio input module and may generate content corresponding to a result of analyzing the voice input. For example, the electronic device 100 may generate content, such as "nine ten", in response to the voice input, such as "What time is it now?" The electronic device 100 may provide the generated content as sound through the embedded audio output module. In this case, since the user 10 provides a voice input with a louder volume than usual and at a faster speech rate than usual while the user 10 moves away from the electronic device 100, the electronic device 100 may provide the content as sound with a relatively louder output volume level and at a relatively faster output speed. As such, the user 10 may be provided with content corresponding to a voice input.
`0040 FIG. 2 is a diagram illustrating an electronic device
`according to an embodiment of the present disclosure.
`0041
`Referring to FIG. 2, an electronic device according
`to an embodiment of the present disclosure may be imple
`mented with the dedicated electronic device 100 that oper
`ates inside a house. The dedicated electronic device 100 may
`include various modules (e.g., elements of FIG. 3) for
`implementing embodiments according to the present disclo
`Sure, such as a driving system that is capable of providing
`the mobility to the electronic device 100 (e.g., a driving
`motor, various types of articulated joints for robots (e.g. a
`bipedal, quadrupedal robot), a wheel, a propeller, and the
`
`like), a camera that is capable of recognizing a user, an audio
`input module that is capable of receiving a voice input, and
`the like.
0042. Furthermore, the electronic device according to embodiments of the present disclosure may be implemented in a form in which a smartphone 201 and a docking station 202 are coupled to each other. For example, the smartphone 201 may provide a function for implementing embodiments of the present disclosure through various modules (e.g., a processor, a camera, a sensor, and the like) embodied therein. Furthermore, for example, the docking station 202 may include a charging module (and power supplying terminal) that is capable of providing power to the smartphone 201, a driving system that is capable of providing mobility (e.g., a driving motor, various types of articulated robotic joints, a wheel, a propeller, and the like), a high-power speaker, and the like.
0043. A configuration of the electronic device, which is implementable in various ways as described above, will be described below with reference to FIG. 3. Elements to be described in FIG. 3 may be included, for example, in the electronic device 100 of FIG. 2 or in the smartphone 201 and/or the docking station 202. A content providing method of the electronic device 100 will be described with reference to FIGS. 4 to 8.
0044 FIG. 3 is a block diagram illustrating an electronic device according to an embodiment of the present disclosure.
0045 Referring to FIG. 3, an electronic device 101 according to an embodiment of the present disclosure includes a bus 110, a processor 120, a memory 130, an audio module 150, a display 160, a communication interface 170, and a distance detection module 180. The electronic device 101 may not include at least one of the above-described elements or may further include other element(s). For example, the electronic device 101 may include an input/output interface that provides an instruction or data, which is inputted from a user or another external device, to any other element(s) of the electronic device 101.
`0046 For example, the bus 110 may interconnect the
`above-described elements 110 to 180 and may include a
`circuit for conveying communications (e.g., a control mes
`sage and/or data) among the above-described elements.
0047. The processor 120 may include one or more of a central processing unit (CPU), an application processor (AP), or a communication processor (CP). For example, the processor 120 may perform an arithmetic operation or data processing associated with control and/or communication of at least other elements of the electronic device 101. For example, the processor 120 may execute a voice recognition application (e.g., S-Voice) to perform a content providing method according to an embodiment of the present disclosure.
0048. According to embodiments of the present disclosure, the processor 120 may analyze a voice input received through an audio input module 151 and may output content corresponding to the voice input through an audio output module 152 in various schemes. For example, the content may be provided to a user as sound based on a text-to-speech (TTS) technology.
0049 According to an embodiment of the present disclosure, the processor 120 may determine an output scheme of the content based on at least one of a speech rate of a user's speech, a volume of a user's speech, and a keyword included in the user's speech, which is obtained from an analysis of the voice input. For example, the output scheme may include an output volume level, an output speed, and an output amount of information of the content to be provided as sound.
0050 For example, the output volume level of the content may correspond to a volume level when the content is provided as sound by the audio output module 152. For example, the output speed of the content may correspond to a speed when the content is played back as sound by the audio output module 152. For example, the output amount of information of the content may correspond to an amount of information when the content corresponding to a voice input is provided as sound to a user.
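For illustration only (this sketch is not part of the disclosure, and all names and values in it are assumptions), the three output-scheme parameters described above may be grouped into a simple structure:

```python
from dataclasses import dataclass

@dataclass
class OutputScheme:
    """Illustrative container for the three output-scheme parameters:
    playback volume, playback speed, and amount of information of the
    content to be provided as sound."""
    volume_level: float   # assumed range: 0.0 (mute) .. 1.0 (maximum)
    speed: float          # playback-rate multiplier, 1.0 = moderate
    detail_level: int     # assumed range: 1 (abstract gist) .. 5 (most detailed)

# A moderate baseline such as the standstill scenario of FIG. 1A
MODERATE = OutputScheme(volume_level=0.5, speed=1.0, detail_level=3)
```

A processor determining an output scheme would then populate such a structure from the analyzed speech features before handing it to the TTS stage.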
0051. For example, with regard to the output amount of information, the content may be classified into detailed content that includes rich and extended information and abstract content that includes only a gist of the response corresponding to the voice input (a related example is described later herein). The detailed content and the abstract content may be classified dichotomously. However, embodiments of the present disclosure may not be limited thereto. For example, the content may be divided into several levels that range from a format in which the content is described most precisely (a format in which the output amount of information is the greatest) to a format in which the output amount of information is the least and in which the content is described most simply. For example, the processor 120 may vary or adjust an output amount of information by extracting and reconfiguring a portion of the content that describes the content most precisely. As such, the processor 120 may adaptively generate content that has various amounts of information.
`0052 Furthermore, according to embodiments of the
`present disclosure, the processor 120 may adjust an output
`speed of content based on an output amount of information
`of content. For example, as an output amount of information
`of content to be outputted through the audio output module
`152 increases, an output speed of content may also increase
`under control of the processor 120. For example, an output
`speed of content may be adjusted depending on a change of
`the above-described content abbreviation level.
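The coupling just described, in which the output speed rises together with the output amount of information, could be sketched as a monotone mapping; the 10%-per-level step below is an assumed tuning constant, not a value from the disclosure:

```python
def speed_for_detail(detail_level: int,
                     base_speed: float = 1.0,
                     step: float = 0.1) -> float:
    """Illustrative rule: as the amount of information grows, raise the
    playback rate so total listening time grows more slowly.  `step`
    (10% extra speed per detail level here) is an assumption."""
    if not 1 <= detail_level <= 5:
        raise ValueError("detail_level must be 1..5")
    return base_speed * (1.0 + step * (detail_level - 1))
```

With this rule, the most abstract content plays at the base speed and each added level of detail plays slightly faster.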
0053 According to embodiments of the present disclosure, the processor 120 may determine an output scheme of corresponding content based on a distance between a user and the electronic device 101. For example, the processor 120 may determine a distance between the user and the electronic device 101 based on at least one of the volume of the user's speech obtained through an analysis of a voice input or the distance computed, calculated, or measured by the distance detection module 180. The processor 120 may adjust at least one of an output volume level of content, an output speed of the content, or an output amount of information of the content based on the determined distance between the user and the electronic device 101.
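The disclosure does not fix a formula for inferring distance from speech volume. One conventional option, shown here purely as an assumption, is the free-field rule that sound pressure level falls about 6 dB per doubling of distance; the calibration constants and the volume policy are likewise hypothetical:

```python
def estimate_distance(received_db: float,
                      reference_db: float = 60.0,
                      reference_m: float = 1.0) -> float:
    """Free-field estimate: L = L_ref - 20*log10(d / d_ref), solved for d.
    reference_db is an assumed calibration (level measured with the
    speaker reference_m away from the microphone)."""
    return reference_m * 10 ** ((reference_db - received_db) / 20.0)

def volume_for_distance(distance_m: float) -> float:
    """Illustrative policy: raise the output volume for a distant user,
    clamped to a 0..1 range."""
    return min(1.0, 0.3 + 0.2 * distance_m)
```

A dedicated distance detection module (such as element 180) could replace the acoustic estimate while feeding the same policy.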
0054 Furthermore, according to embodiments of the present disclosure, the processor 120 may determine an output scheme of corresponding content based on a rate of movement of a user. For example, the processor 120 may determine a rate of movement of the user based on at least one of the volume of the speech obtained through an analysis of a voice input, a frequency shift of the voice input (e.g., in the case of using the Doppler effect), or a fluctuation of the distance computed by the distance detection module 180. A method for determining the rate of movement of the user may not be limited to the above-mentioned embodiment of the present disclosure, and various voice processing technologies for determining the rate of movement of the user may be used. The processor 120 may adjust at least one of an output volume level of content, an output speed of the content, or an output amount of information of the content based on the determined rate of movement of the user.
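The Doppler option mentioned above can be illustrated with the standard relation for a source moving away from a stationary microphone, f_obs = f_emit * c / (c + v): a receding talker lowers the observed pitch. The emitted-pitch reference f_emit would in practice have to be learned per user, which is an assumption of this sketch:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def receding_speed(f_emitted: float, f_observed: float,
                   c: float = SPEED_OF_SOUND) -> float:
    """Invert f_obs = f_emit * c / (c + v) for the source speed v.
    Positive result: the user is moving away; negative: approaching."""
    return c * (f_emitted / f_observed - 1.0)
```

In the FIG. 1B scenario, a positive result would justify a louder output volume and a faster, more abridged response.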
0055. Furthermore, according to embodiments of the present disclosure, the processor 120 may determine an output scheme of corresponding content based on a speech rate of a user's speech. For example, the processor 120 may adjust at least one of an output speed of content or an output amount of information of the content based on the speech rate of the user's speech obtained through an analysis of a voice input.
0056 Furthermore, according to embodiments of the present disclosure, the processor 120 may determine an output scheme of corresponding content based on a keyword included in a user's speech. For example, if an analysis of a voice input indicates that a designated keyword is included in a user's speech, the processor 120 may adjust at least one of an output speed of content, an output volume level of the content, or an output amount of information of the content.
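The keyword trigger just described might be realized as a lookup over designated keywords. The keyword list and the adjustment values below are purely illustrative; the disclosure names no specific keywords:

```python
# Hypothetical designated keywords -> (speed multiplier, volume multiplier,
# change in detail level).  None of these values come from the disclosure.
KEYWORD_ADJUSTMENTS = {
    "quickly": (1.3, 1.0, -1),    # in a hurry: speak faster, trim detail
    "quietly": (1.0, 0.5, 0),     # lower the output volume
    "in detail": (0.9, 1.0, +2),  # slow down slightly, say more
}

def adjustments_for(transcript: str):
    """Return the adjustment tuple of the first designated keyword found
    in the recognized transcript, or a neutral tuple if none matches."""
    text = transcript.lower()
    for keyword, adjustment in KEYWORD_ADJUSTMENTS.items():
        if keyword in text:
            return adjustment
    return (1.0, 1.0, 0)
```

The returned tuple would then scale the output scheme before the content is rendered as sound.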
0057. Furthermore, according to embodiments of the present disclosure, the processor 120 may determine an output scheme of corresponding content based on an emotional status that is determined based on a speech of a user.
0058. Furthermore, according to embodiments of the present disclosure, the processor 120 may adjust an output amount of information of corresponding content based on whether a user has an interest in specific content. For example, when outputting the content corresponding to a voice input, the processor 120 may determine whether a user has an interest in specific content based on whether an additional question (i.e., an additional voice input) associated with the content is received after the initial voice input, a term frequency of the keyword included in the additional question, or the like.
0059 For example, when it is determined that a user has an interest in the specific content, the processor 120 may provide more detailed information by increasing an output amount of information with respect to the content in which the user has an interest. By contrast, for example, the processor 120 may decrease the output amount of information with respect to the content that is determined as content in which the user does not have an interest.
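The interest signal described above, follow-up questions and the term frequency of their keywords, could be scored as in the sketch below; the scoring rule and threshold are assumptions, not part of the disclosure:

```python
from collections import Counter

def interest_score(topic_terms: set, follow_up_questions: list) -> int:
    """Count how often any topic keyword recurs across the user's
    follow-up questions; more recurrences suggest more interest."""
    counts = Counter(
        word
        for question in follow_up_questions
        for word in question.lower().split()
    )
    return sum(counts[term] for term in topic_terms)

def detail_level_for(score: int, base: int = 3, threshold: int = 2) -> int:
    """Illustrative policy: expand the output amount of information when
    the interest score passes an assumed threshold, shrink it at zero."""
    if score >= threshold:
        return min(5, base + 1)
    if score == 0:
        return max(1, base - 1)
    return base
```

No follow-up question at all would thus steer later responses toward the abstract, gist-only format.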