`(12) Patent Application Publication
`HA et al.
`
`(10) Pub. No.: US 2016/0284351A1
(43) Pub. Date: Sep. 29, 2016
`
`(54) METHOD AND ELECTRONIC DEVICE FOR
`PROVIDING CONTENT
`
`(71) Applicant: Samsung Electronics Co., Ltd.,
`Suwon-si (KR)
`(72) Inventors: Hye Min HA, Suwon-si (KR); Kyung
`Jun LEE, Suwon-si (KR); Bong Won
`LEE, Seoul (KR); Hyun Yeul LEE,
`Seoul (KR); Pragam RATHORE, Seoul
`(KR)
`(21) Appl. No.: 15/082,499
(22) Filed: Mar. 28, 2016
(30) Foreign Application Priority Data
`
`Mar. 26, 2015 (KR) ........................ 10-2015-0042740
`Publication Classification
`
`(51) Int. Cl.
G10L 15/22    (2006.01)
G06F 3/0481   (2006.01)
H04N 5/225    (2006.01)
G06F 3/01     (2006.01)
G10L 17/06    (2006.01)
G06F 3/16     (2006.01)
`(52) U.S. Cl.
CPC .............. G10L 15/22 (2013.01); G10L 17/06 (2013.01); G06F 3/16 (2013.01); H04N 5/225 (2013.01); G06F 3/013 (2013.01); G06F 3/04817 (2013.01)
`
`(57)
`
`ABSTRACT
`
A method and an electronic device for providing content are provided. The electronic device includes a voice input module configured to receive a voice input, an audio output module, a display, a memory configured to store a voice recognition application which provides content in response to the voice input, and a processor configured to execute the voice recognition application and determine an output scheme of the content to be outputted through the audio output module or the display based on a status of the voice recognition application or an operating environment of the electronic device.
`
`
`
[Representative drawing (FIG. 2 flow chart): receive voice input; determine output scheme of content based on status of voice recognition application or operating environment of electronic device; output content corresponding to voice input based on determined output scheme.]
`-1-
`
`Amazon v. SoundClear
`US Patent 11,069,337
`Amazon Ex. 1011
`
`
`
[Sheet 1, FIG. 1: block diagram of the electronic device 100, showing the processor, memory, voice input module, audio output module, video output module, and camera (see paragraph [0029]).]
`
`
`
[Sheet 2, FIG. 2: flow chart of the content providing method: receive voice input (operation 201); determine output scheme of content based on status of voice recognition application or operating environment (operation 203); output content corresponding to voice input based on determined output scheme (operation 205).]
`
`
`
[Sheet 3, FIG. 3: flow chart of the content providing method based on the status of the voice recognition application (operations 301 to 317), including the foreground/background decision, output of an icon corresponding to the content request, change of the icon's shape based on the content, and output of the content to the audio output module in synchronization with the icon's shape change.]
`
`
`
[Sheet 4, FIG. 4A: electronic device with a web browsing application displayed while the voice recognition application runs in the background.]
`
`
`
[Sheet 5, FIG. 4B: electronic device with the voice recognition application running in the foreground ("What do you want to do?" screen).]
`
`
`
[Sheet 6, FIG. 5A: icon outputted while the voice recognition application runs in the background.]
`
`
`
[Sheet 7, FIG. 5B: shape of the icon changed based on the content.]
`
`
`
[Sheet 8, FIG. 6: detailed weather information displayed ("Seocho-2 Dong, Seoul; March 3, Tuesday; partly snow and rain").]
`
`
`
[Sheet 9, FIG. 7: flow chart of the content providing method based on an analysis of the voice input, branching on detected noise and the number of users to output abstract or detailed information of content.]
`
`
`
[Sheet 10, FIG. 8: flow chart of the content providing method based on whether the user's gaze is toward the video output module.]
`
`
`
[Sheet 11, FIG. 9: flow chart of the content providing method based on the number of users included in video acquired through the camera.]
`
`
`
`METHOD AND ELECTRONIC DEVICE FOR
`PROVIDING CONTENT
`
`CROSS-REFERENCE TO RELATED
`APPLICATION(S)
[0001] This application claims the benefit under 35 U.S.C. § 119(a) of a Korean patent application filed on Mar. 26, 2015 in the Korean Intellectual Property Office and assigned Serial No. 10-2015-0042740, the entire disclosure of which is hereby incorporated by reference.
`
`TECHNICAL FIELD
`
[0002] The present disclosure relates to a method and an electronic device for providing content in response to a voice input.
`
`BACKGROUND
`
[0003] Currently, a user input interface applied to electronic devices is implemented to support a user input based on a voice input as well as a user input (e.g., an input through a button-type keypad, a keyboard, a mouse, a touch panel, and the like) based on physical manipulation by a user.
[0004] An electronic device that has a voice interface, such as an interface based on a user's voice, receives the user's voice, converts the received voice into an electrical signal, and performs a function set in advance by processing the electrical signal.
[0005] The above information is presented as background information only to assist with an understanding of the present disclosure. No determination has been made, and no assertion is made, as to whether any of the above might be applicable as prior art with regard to the present disclosure.
`
`SUMMARY
`
[0006] Aspects of the present disclosure are to address at least the above-mentioned problems and/or disadvantages and to provide at least the advantages described below. Accordingly, an aspect of the present disclosure is to provide a content providing method which is capable of outputting content corresponding to a voice input in the most suitable way based on an operating environment of the electronic device or a status of a voice recognition application, and an electronic device performing the same.
[0007] In accordance with an aspect of the present disclosure, an electronic device is provided. The electronic device includes a voice input module configured to receive a voice input, an audio output module, a video output module, a memory configured to store a voice recognition application which provides content in response to the voice input, and a processor configured to execute the voice recognition application and determine an output scheme of the content to be outputted through the audio output module or the video output module based on a status of the voice recognition application or an operating environment of the electronic device.
[0008] Other aspects, advantages, and salient features of the disclosure will become apparent to those skilled in the art from the following detailed description, which, taken in conjunction with the annexed drawings, discloses various embodiments of the present disclosure.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
[0009] The above and other aspects, features, and advantages of certain embodiments of the present disclosure will be more apparent from the following description taken in conjunction with the accompanying drawings, in which:
[0010] FIG. 1 is a block diagram illustrating a configuration of an electronic device according to an embodiment of the present disclosure;
[0011] FIG. 2 is a flow chart illustrating a method for providing content according to an embodiment of the present disclosure;
[0012] FIG. 3 is a flow chart illustrating a method for providing content based on a status of a voice recognition application according to an embodiment of the present disclosure;
[0013] FIG. 4A is a diagram illustrating an electronic device in which a voice recognition application is running in a background according to an embodiment of the present disclosure;
[0014] FIG. 4B is a diagram illustrating an electronic device in which a voice recognition application is running in a foreground according to an embodiment of the present disclosure;
[0015] FIG. 5A illustrates an electronic device in which an icon is outputted when a voice recognition application is running in a background according to an embodiment of the present disclosure;
[0016] FIG. 5B illustrates an electronic device in which a shape of an icon is changed according to an embodiment of the present disclosure;
[0017] FIG. 6 illustrates an electronic device in which detailed information of content is displayed according to an embodiment of the present disclosure;
[0018] FIG. 7 is a flow chart illustrating a method for providing content based on an analysis of a voice input according to an embodiment of the present disclosure;
[0019] FIG. 8 is a flow chart illustrating a method for providing content based on determination of a user's gaze according to an embodiment of the present disclosure; and
[0020] FIG. 9 is a flow chart illustrating a method for providing content based on video processing according to an embodiment of the present disclosure.
[0021] Throughout the drawings, it should be noted that like reference numbers are used to depict the same or similar elements, features, and structures.
`
`DETAILED DESCRIPTION
[0022] The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of various embodiments of the present disclosure as defined by the claims and their equivalents. It includes various specific details to assist in that understanding, but these are to be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the various embodiments described herein can be made without departing from the scope and spirit of the present disclosure. In addition, descriptions of well-known functions and constructions may be omitted for clarity and conciseness.
[0023] The terms and words used in the following description and claims are not limited to their bibliographical meanings, but are merely used by the inventor to enable a clear and consistent understanding of the present disclosure. Accordingly, it should be apparent to those skilled in the art that the
`-13-
`
`
`
`US 2016/0284351 A1
`
`Sep. 29, 2016
`
following description of various embodiments of the present disclosure is provided for illustration purposes only and not for the purpose of limiting the present disclosure as defined by the appended claims and their equivalents.
[0024] It is to be understood that the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to "a component surface" includes reference to one or more of such surfaces.
[0025] An electronic device according to various embodiments of the present disclosure may include at least one of smartphones, tablet personal computers (PCs), mobile phones, video telephones, electronic book readers, desktop PCs, laptop PCs, netbook computers, workstations, servers, personal digital assistants (PDAs), portable multimedia players (PMPs), Motion Picture Experts Group (MPEG-1 or MPEG-2) Audio Layer 3 (MP3) players, mobile medical devices, cameras, or wearable devices. According to various embodiments of the present disclosure, a wearable device may include at least one of an accessory type (e.g., watch, ring, bracelet, ankle bracelet, necklace, glasses, contact lens, or head-mounted device (HMD)), a fabric or clothing type (e.g., electronic apparel), a physical attachment type (e.g., skin pad or tattoo), or a body implantation type (e.g., implantable circuit).
[0026] According to various embodiments, the electronic device may be one of the above-described devices or a combination thereof. An electronic device according to an embodiment may be a flexible electronic device. Furthermore, an electronic device according to an embodiment is not limited to the above-described electronic devices and may include other electronic devices and new electronic devices according to the development of technologies.
[0027] Hereinafter, electronic devices according to an embodiment of the present disclosure will be described with reference to the accompanying drawings. The term "user" used herein may refer to a person who uses an electronic device or may refer to a device (e.g., an artificial intelligence electronic device) that uses an electronic device.
[0028] FIG. 1 is a block diagram illustrating a configuration of an electronic device according to an embodiment of the present disclosure.
[0029] Referring to FIG. 1, an electronic device 100 according to an embodiment of the present disclosure may include a bus 110, a processor 120, a memory 130, a voice input module 140, an audio output module 150, a video output module 160, and a camera 170. The electronic device 100 may omit at least one of the above-described components or may further include other component(s).
[0030] The bus 110 may interconnect the above-described components 120 to 170 and may be a circuit for conveying communications (e.g., a control message and/or data) among the above-described components.
[0031] The processor 120 may include one or more of a central processing unit (CPU), an application processor (AP), or a communication processor (CP). The processor 120 may perform, for example, data processing or an operation associated with control and/or communication of at least one other component of the electronic device 100.
[0032] The processor 120 may execute a voice recognition application (e.g., S-Voice) stored in the memory 130 and may convert a voice input to a control command or a content request based on the voice recognition application. If the voice input is converted to the control command, the processor 120 may control various modules included in the electronic device 100 based on the control command. For example, in the case where a voice input is "Turn on Bluetooth.", the processor 120 may activate a Bluetooth module embedded in the electronic device 100.
[0033] Furthermore, if the voice input is converted to a content request, the processor 120 may output corresponding content based on the content request. For example, if a voice input, such as "Let me know today's weather.", is converted to a request for weather content, the processor 120 may provide the weather content to a user.
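The routing just described can be pictured with a small sketch (an illustrative assumption layered over paragraphs [0032] and [0033]; the command table, function name, and return format are hypothetical, not the disclosed implementation):

```python
# Hypothetical sketch: classify a recognized utterance as a control
# command or a content request. The command table and return format are
# illustrative assumptions, not the patent's API.

CONTROL_COMMANDS = {
    "turn on bluetooth": "activate_bluetooth",
    "turn on wi-fi": "activate_wifi",
}

def classify_voice_input(utterance: str) -> tuple[str, str]:
    """Return ('control', action) or ('content', request_topic)."""
    text = utterance.lower().strip().rstrip(".!?")
    if text in CONTROL_COMMANDS:
        return ("control", CONTROL_COMMANDS[text])
    return ("content", text)

print(classify_voice_input("Turn on Bluetooth."))
# -> ('control', 'activate_bluetooth')
print(classify_voice_input("Let me know today's weather"))
# -> ('content', "let me know today's weather")
```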
[0034] According to an embodiment of the present disclosure, the processor 120 may determine an output scheme of content to be outputted through the audio output module 150 or the video output module 160 based on a status of the voice recognition application or an operating environment of the electronic device 100. The processor 120 may output content corresponding to the voice input through the audio output module 150 or the video output module 160 based on the determined output scheme.
[0035] For example, in the case where the voice recognition application is running in a foreground of the electronic device 100, the processor 120 may determine to control the video output module 160 to output detailed information of content corresponding to a voice input. Furthermore, in the case where the voice recognition application is running in a background of the electronic device, the processor 120 may determine to control the video output module 160 to output an icon associated with the content.
[0036] In this specification, a state in which an application is running in the foreground should be understood as one in which an execution screen of the application is displayed on the whole or almost the whole area of the video output module 160 of the electronic device 100. Furthermore, a state in which an application is running in the background should be understood as one in which the application is running in a non-foreground state.
[0037] For example, if the voice recognition application is running in the foreground, a screen of the voice recognition application (e.g., a screen displayed according to S-Voice; refer to FIG. 4B) may be displayed on the video output module 160. At this time, if a voice input is received from a user, the processor 120 may display corresponding content on the screen of the voice recognition application in response to the voice input.
[0038] However, if the voice recognition application is running in the background, a screen of an application other than the voice recognition application (e.g., a screen displayed according to a web browsing application; refer to FIG. 4A) may be displayed on the video output module 160. At this time, if a voice input is received from a user, the processor 120 may additionally display an icon (e.g., a first icon 501 of FIG. 5A) associated with the content on the screen of the other application in response to the voice input. If the user then selects the icon (e.g., by touch), the processor 120 may output detailed information of the content associated with the icon through the video output module 160 in response to the selection of the icon.
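A minimal sketch of this foreground/background branching, assuming the application status is known and reducing the display operations to strings (all names here are illustrative, not the patent's API):

```python
# Sketch of the foreground/background decision, under the assumption
# that the app status is available and display calls reduce to strings.

def present_on_display(app_status: str, detail: str, icon: str) -> str:
    if app_status == "foreground":
        # The app's own screen fills the display: show the full detail.
        return f"show detailed view: {detail}"
    # Another app owns the screen: overlay an icon; a tap reveals detail.
    return f"overlay icon {icon} (tap to open: {detail})"

print(present_on_display("foreground", "weather: partly snow and rain", "*"))
print(present_on_display("background", "weather: partly snow and rain", "*"))
```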
[0039] Furthermore, if a voice input is received from a user while the voice recognition application is running in the background, the processor 120 may display the icon and may simultaneously change a shape of the icon based on the content.
[0040] For example, the icon associated with the content may be dynamically implemented in the form of an animation.
`-14-
`
`
`
`US 2016/0284351 A1
`
`Sep. 29, 2016
`
Furthermore, content corresponding to a voice input may be provided to a user through the audio output module 150 based on text-to-speech (hereinafter referred to as "TTS"). In this case, the processor 120 may perform synchronization between the output of content by the audio output module 150 and the shape change of the icon displayed on the video output module 160.
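One way to picture this synchronization, as a sketch only: the icon's animation frames advance in step with TTS chunks. The chunking, frames, and timing here are assumptions; an actual device would drive the animation from audio playback callbacks rather than a timer:

```python
# Sketch: advance icon animation frames in step with TTS playback.
# Chunking and sleep-based timing are stand-ins, not the disclosed method.

import time

ICON_FRAMES = ["|", "/", "-", "\\"]  # illustrative animation frames

def speak_with_icon_animation(tts_chunks: list[str]) -> None:
    for i, chunk in enumerate(tts_chunks):
        frame = ICON_FRAMES[i % len(ICON_FRAMES)]
        print(f"[icon {frame}] speaking: {chunk}")
        time.sleep(0.1)  # stand-in for the chunk's audio duration

speak_with_icon_animation(["Today's weather:", "partly snow", "and rain."])
```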
`0041
`Furthermore, the processor 120 according to an
`embodiment of the present disclosure may analyze a Voice
`input received from the voice input module 140 and may
`detect a noise included in the voice input and/or the number of
`users corresponding to the Voice input based on the analyzed
`result. The processor 120 may determine an outputscheme of
`content to be outputted through the audio output module 150
`based on the detection result.
[0042] For example, if a noise which satisfies a specified condition is detected in a voice input, the processor 120 may determine to control the audio output module 150 to output abstract information of content corresponding to the voice input. Furthermore, if a noise which satisfies the specified condition is not detected in the voice input, the processor 120 may determine to control the audio output module 150 to output detailed information of the content.
[0043] The noise which satisfies the specified condition may be detected based on an analysis of frequency, wave shape, or amplitude. For example, the noise which satisfies the specified condition may include residential noise in an outdoor space. Furthermore, usual white noise may not be considered in determining whether the noise which satisfies the specified condition is included in the voice input. Accordingly, the processor 120 may determine whether the user is in an outdoor public space or in a private space.
[0044] Furthermore, according to an embodiment, if a plurality of users corresponding to a voice input is detected in the voice input, the processor 120 may determine to control the audio output module 150 to output abstract information of content. If a single user corresponding to the voice input is detected in the voice input, the processor 120 may determine to control the audio output module 150 to output detailed information of the content.
[0045] The detection of the number of users corresponding to a voice input may be performed, for example, through a frequency analysis of the voice input. Through the frequency analysis, the processor 120 may determine whether only a user who uses the electronic device 100 exists in the vicinity of the electronic device 100 or whether that user exists together with another user in the vicinity of the electronic device 100.
[0046] According to an embodiment of the present disclosure, the processor 120 may determine an output scheme of the video output module 160 based on the output scheme of the audio output module 150. For example, if a noise which satisfies a specified condition is detected in a voice input, or if a plurality of users corresponding to the voice input is detected in the voice input, the processor 120 may output abstract information of content through the audio output module 150 and may simultaneously output detailed information of the content through the video output module 160.
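Paragraphs [0041] to [0046] together amount to a small decision rule; the following sketch assumes the noise flag and speaker count arrive as results of an upstream signal analysis (frequency, wave shape, amplitude), stubbed here as plain arguments:

```python
# Combined sketch of the noise/speaker-count decision. Inputs are
# assumed outputs of an upstream voice-input analysis.

def choose_output_schemes(noise_detected: bool, num_speakers: int) -> dict:
    in_public = noise_detected or num_speakers > 1
    return {
        # Audio stays abstract when the surroundings look public.
        "audio": "abstract" if in_public else "detailed",
        # The display can carry the detailed information at the same
        # time the audio output stays abstract.
        "video": "detailed" if in_public else "unspecified",
    }

print(choose_output_schemes(noise_detected=True, num_speakers=1))
print(choose_output_schemes(noise_detected=False, num_speakers=1))
```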
[0047] The processor 120 according to an embodiment of the present disclosure may determine an output scheme of the content to be outputted through the audio output module 150 based on a result of a gaze determination performed using the camera 170.
[0048] For example, if it is determined that the user's gaze is toward the video output module 160, the processor 120 may determine to control the audio output module 150 to output abstract information of content. If it is determined that the user's gaze is not toward the video output module 160, the processor 120 may determine to control the audio output module 150 to output detailed information of the content.
[0049] The processor 120 may determine an output scheme of the video output module 160 based on the output scheme of the audio output module 150. For example, if it is determined that the user's gaze is toward the video output module 160, the processor 120 may output abstract information of content through the audio output module 150 and may simultaneously output detailed information of the content through the video output module 160.
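A corresponding sketch for the gaze-driven case, assuming the boolean gaze result comes from an eye-tracking pipeline such as the one mentioned in paragraph [0062] (the function name and return values are illustrative):

```python
# Sketch of the gaze-based decision; gaze_on_display is an assumed
# input from an eye-tracking step.

def choose_schemes_by_gaze(gaze_on_display: bool) -> dict:
    if gaze_on_display:
        # The user is watching the screen: abstract audio, detailed video.
        return {"audio": "abstract", "video": "detailed"}
    # The user is not watching: the audio output carries the detail.
    return {"audio": "detailed", "video": "unspecified"}

print(choose_schemes_by_gaze(True))
print(choose_schemes_by_gaze(False))
```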
[0050] Furthermore, the processor 120 according to an embodiment of the present disclosure may determine an output scheme of the content to be outputted through the audio output module 150 or the video output module 160 based on a video of the vicinity of the electronic device 100 shot by the camera 170.
[0051] For example, the processor 120 may determine the output scheme of the content based on the number of users included in the shot video. When determining the number of users, the processor 120 may apply a face recognition algorithm to the shot video and may determine the number of users included in the shot video or may recognize a specific user.
[0052] For example, if it is determined that one user is included in the shot video, the processor 120 may determine to control the audio output module 150 and/or the video output module 160 to output detailed information of content. In contrast, if it is determined that a plurality of users is included in the shot video, the processor 120 may determine to control the audio output module 150 and/or the video output module 160 to output abstract information of the content or not to output information of the content.
[0053] Furthermore, as another example, if it is determined that an authenticated user of the electronic device 100 is included in the shot video, the processor 120 may determine to control the audio output module 150 and/or the video output module 160 to output detailed information of content. In contrast, if it is determined that an unauthenticated user is included in the shot video, the processor 120 may determine to control the audio output module 150 and/or the video output module 160 to output abstract information of the content or not to output information of the content.
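The camera-driven decision in paragraphs [0050] to [0053] can be sketched the same way, with the face-detection and recognition results passed in as plain values (the algorithms themselves are out of scope here and every name is illustrative):

```python
# Sketch of the camera-based decision; the face count and recognition
# result are assumed inputs from a face recognition step on the video.

def choose_scheme_by_camera(num_faces: int, authenticated_present: bool) -> str:
    if num_faces == 1 and authenticated_present:
        # Only the device's recognized user is in view: full detail.
        return "detailed"
    # Several viewers, or an unrecognized viewer: abstract info or nothing.
    return "abstract-or-none"

print(choose_scheme_by_camera(1, True))   # -> detailed
print(choose_scheme_by_camera(2, True))   # -> abstract-or-none
```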
[0054] Abstract information of content may be information about a portion of the content or summarized information of the content, or may correspond to a portion of the content. In some embodiments, the abstract information of content may be understood as including an icon associated with the content. Furthermore, detailed information of content may be all of the information about the content.
[0055] The abstract information and the detailed information of the content may be acoustically provided to a user through the audio output module 150 based on a voice output function, such as TTS. Furthermore, the abstract information and the detailed information of the content may be visually provided to the user through the video output module 160.
[0056] The memory 130 may include a volatile and/or a nonvolatile memory. For example, the memory 130 may store instructions or data associated with at least one other component of the electronic device 100. According to an embodiment of the present disclosure, the memory 130 may store software, an application program which performs a content providing method according to various embodiments of the
`-15-
`
`
`
`US 2016/0284351 A1
`
`Sep. 29, 2016
`
present disclosure, a voice recognition application, a web browsing application, and data for executing the above-mentioned software or applications. For example, the memory 130 may store a control command and/or a content request corresponding to a voice input, or may store abstract information and/or detailed information of content corresponding to the content request.
[0057] The voice input module 140 may receive a voice input uttered by a user. It may be understood that the voice input module 140 includes a physical microphone and, additionally, circuitry (e.g., an analog-to-digital converter (ADC)) which performs signal processing on the received voice input.
[0058] The audio output module 150 may include a speaker, a headphone, an earphone, a corresponding driver, an audio output interface, and the like. Abstract information or detailed information of content may be outputted as sound through the audio output module 150 so as to be acoustically provided to a user.
[0059] For example, the video output module 160 may correspond to a display. The display may include, for example, a liquid crystal display (LCD), a light-emitting diode (LED) display, an organic LED (OLED) display, a microelectromechanical systems (MEMS) display, or an electronic paper display. Abstract information or detailed information of content may be outputted as a screen through the video output module 160 so as to be visually provided to a user.
[0060] The video output module 160 may display, for example, various contents (e.g., a text, an image, a video, an icon, a symbol, and the like) to a user. The video output module 160 may include a touch screen and may receive, for example, a touch, gesture, proximity, or hovering input using an electronic pen or a portion of a user's body.
`0061
`For example, the camera170 may shoot a still image
`and a video. According to an embodiment of the present
`disclosure, at least one of the camera 170 may include one or
`more image sensors, lenses, an image processing module, and
`the like. Furthermore, the electronic device 100 may include
`one or more cameras 170. The camera 170 may shoot a video
`of the vicinity of the electronic device 100 and at least a
`portion (e.g., face) of a user's body existing in the vicinity of
`the electronic device 100.
[0062] According to an embodiment, the camera 170 (in conjunction with the processor 120) may determine whether the gaze of a user of the electronic device 100 is detected. For example, a method of tracking a user's pupil (so-called eye tracking) may be used in determining the user's gaze.
[0063] FIG. 2 is a flow chart illustrating a method for providing content according to an embodiment of the present disclosure.
[0064] Referring to FIG. 2, in operation 201, the voice input module 140 may receive a voice input from a user through a voice recognition application. The received voice input may be provided to the processor 120, which executes the voice recognition application.
[0065] In operation 203, the processor 120 may determine content to be outputted based on the voice input and may determine an output scheme of the content based on a status of the voice recognition application or an operating environment of the electronic device 100. The content to be outputted may be set in advance by the voice recognition application in response to the voice input.
[0066] In operation 205, the processor 120 may output the content corresponding to the voice input based on the output scheme determined in operation 203, using at least one of the audio output module 150 or the video output module 160.
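Taken together, the flow of FIG. 2 can be sketched end to end; the content lookup and scheme decision below are simplified stubs standing in for operations 203 and 205, not the disclosed implementation:

```python
# End-to-end sketch of the FIG. 2 flow (operations 201 to 205).
# Every name here is illustrative.

def lookup_content(voice_input: str) -> str:
    # Operation 203, first half: map the request to preset content.
    return f"content for '{voice_input}'"

def decide_output_scheme(app_status: str, noisy_environment: bool) -> str:
    # Operation 203, second half: application status and environment.
    if app_status == "background":
        return "icon"
    return "abstract" if noisy_environment else "detailed"

def provide_content(voice_input: str, app_status: str, noisy: bool) -> str:
    content = lookup_content(voice_input)
    scheme = decide_output_scheme(app_status, noisy)
    return f"output [{scheme}]: {content}"  # operation 205

print(provide_content("today's weather", "foreground", noisy=False))
```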
[0067] FIG. 3 is a flow chart illustrating a method for providing content based on a status of a voice recognition application according to an embodiment of the present disclosure.
[0068] Referring to FIG. 3, a content providing method based on a status of a voice recognition application according to an embodiment of the present disclosure may include operations 301 to 317.
[0069] In operation 301, the voice input module 140 may receive a voice input from a user. For example, the voice input may be a simple word, such as "weather", "schedule", "Bluetooth", "wireless fidelity (Wi-Fi)", and the like, or may be a sentence, such as "Let me know today's weather.", "Let me know today's schedule.", "Read today's news.", "Turn on Bluetooth.", "Turn on Wi-Fi.", and the like. Furthermore, the voice input may be a control command (e.g., "Turn on Bluetooth.") to control a specific module embedded in the electronic device 100 or may be a content request (e.g., "Let me know today's weather.") which requests specific content.
[0070] In operation 303, the processor 120 may determine whether the received voice input is a control command for a module embedded in the electronic device 100 or a content request which requests an output of specific content. If the voice input corresponds to the content request, the procedure proceeds to operation 305. If the voice input corresponds to the control command, the procedure proceeds to operation 317. For example, if the voice input corresponds to "weather", the procedure proceeds to operation 305. If the voice input corresponds to "Turn on Bluetooth", the procedure proceeds to operation 317.
[0071] In operation 305, the processor 120 may determine a status of the voice recognition application, that is, whether the voice recognition application is running in the background or in the foreground. If the voice recognition application is running in the background, the procedure proceeds to operation 307. If the voice recognition application is running in the foreground, the procedure proceeds to operation 313.
[0072] FIG. 4A illustrates an electronic device in which a voice recognition application is running in a background according to an embodiment of the present disclosure.
[0073] Referring to FIG. 4A, a web browsing application may be running in the foreground of the electronic device 100. A screen 400a of the web browsing application may be displayed on the video output module 160 of the electronic device 100. If the voice recognition application is running in th