`
(12) United States Patent
Iyer et al.

(10) Patent No.: US 9,489,172 B2
(45) Date of Patent: Nov. 8, 2016

(54) METHOD AND APPARATUS FOR VOICE CONTROL USER INTERFACE WITH DISCREET OPERATING MODE
`
(71) Applicant: Motorola Mobility LLC, Chicago, IL (US)

(72) Inventors: Boby Iyer, Elmhurst, IL (US); Kevin O Foy, Chicago, IL (US)

(73) Assignee: Motorola Mobility LLC, Chicago, IL (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.

(21) Appl. No.: 14/631,924

(22) Filed: Feb. 26, 2015

(65) Prior Publication Data
     US 2016/0253149 A1     Sep. 1, 2016

(51) Int. Cl.
     G10L 21/00    (2006.01)
     G10L 15/00    (2013.01)
     G06F 3/16     (2006.01)
     G10L 17/22    (2013.01)
     G06F 3/01     (2006.01)

(52) U.S. Cl.
     CPC .......... G06F 3/167 (2013.01); G06F 3/017 (2013.01); G10L 17/22 (2013.01)

(58) Field of Classification Search
     None
     See application file for complete search history.

(56) References Cited

     U.S. PATENT DOCUMENTS

     6,925,296 B2 *    8/2005   Mattisson .......... H04M 1/6016; 367/118
     7,263,373 B2 *    8/2007   Mattisson .......... H04M 1/605; 367/118
     9,183,806 B2 *   11/2015   Felt ............... G09G 5/00
     9,311,898 B2 *    4/2016   Ward ............... G06F 3/0488
 2006/0085183 A1      4/2006   Jain
 2006/0270450 A1     11/2006   Garratt et al.
 2008/0243281 A1     10/2008   Kadaba et al.
 2011/0313768 A1 *   12/2011   Klein .............. G09G
 2013/0076990 A1      3/2013   Kim et al.
 2013/0260839 A1 *   10/2013   Moquin ............. H04M 1/605; 455/569.1
 2013/0293503 A1 *   11/2013   Zhou ............... G06F 3/04883; 345/173
 2014/0223477 A1      8/2014   Han et al.
 2014/0278443 A1      9/2014   Gunn et al.
 2015/0135078 A1 *    5/2015   Erkkila ............ G06F 19/3406; 715/727
 2015/0148106 A1 *    5/2015   Choi ............... H04W 52/027; 455/566
 2016/0077794 A1 *    3/2016   Kim ................ G09G
 2016/0086600 A1 *    3/2016   Bauer .............. G10L 15/16; 704/202
 2016/0202871 A1 *    7/2016   Ward ............... G06F 3/0488; 345/175

     FOREIGN PATENT DOCUMENTS

     WO WO 2014/163284     5/2011

     OTHER PUBLICATIONS

Application No. GB1603200.5; Great Britain Search Report; Mailed Aug. 15, 2016.

* cited by examiner

Primary Examiner — Satwant Singh
(57) ABSTRACT

An electronic device includes a voice control interface engine operative in a first mode to receive a speech command, through a microphone, from a first distance and produce, through a loudspeaker and in response to the speech command, an audible output at a first output level. One or more processors are operable with one or more sensors to detect a predefined user input. The one or more processors can then transition the voice control interface engine to a second mode operative to receive the speech command from a second distance and produce, in response to the speech command, the audible output at a second output level, where the second distance is less than the first distance and the second output level less than the first output level.
`
20 Claims, 6 Drawing Sheets

[Front page figure: block diagram of the electronic device, with blocks for a display, user interface, discreet mode module, processor, output, sensors, and a wireless communication device.]
`
`
`
[Sheet 1 of 6, FIG. 1: a user asks a prior art electronic device "How tall is the Sears Tower?" and the device announces "1451 feet."]
`
`
`
[Sheet 2 of 6, FIG. 2: block diagram schematic of the explanatory electronic device 200, with blocks for user input, proximity sensors, audio input, a discreet mode module, a processor, other sensors, and a battery.]
`
`
`
[Sheet 3 of 6, FIG. 3: explanatory method; a user asks "What time is my date with Buster?", the device answers "7 pm at Macy Restaurant," and a use case causes the device to enter the discreet mode.]
`
`
`
[Sheet 4 of 6, FIG. 4]
`
`
`
[Sheet 5 of 6, FIG. 5]
`
`
`
[Sheet 6 of 6, FIGS. 6 and 7]
`
`
`
METHOD AND APPARATUS FOR VOICE CONTROL USER INTERFACE WITH DISCREET OPERATING MODE

BACKGROUND

1. Technical Field

This disclosure relates generally to electronic devices and corresponding methods, and more particularly to electronic devices with voice recognition systems.
2. Background Art

Mobile electronic communication devices, such as mobile telephones, smartphones, gaming devices, and the like, are used by billions of people. These owners use mobile communication devices for many different purposes including, but not limited to, voice communications and data communications for text messaging, Internet browsing, commerce such as banking, and social networking.

As the technology of these devices has advanced, so too has their feature set. For example, not too long ago all electronic devices had physical keypads. Today touch sensitive displays are more frequently seen as user interface devices. Similarly, it used to be that the only way to deliver user input to a device was with touch, either through a keypad or touch sensitive display. Today some devices are equipped with voice recognition that allows a user to speak commands to a device instead of typing them.
Unforeseen problems sometimes accompany technological advance. Illustrating by example, when near-field wireless devices such as hands-free headsets first appeared, people began conducting telephone calls through a small, wireless earbud device while their mobile phone was in a bag or pocket. To the innocent observer, it looked as if these technologically advanced people were instead crazy, since they talked aloud to what appeared to be themselves. Some early adopters continued to hold their mobile phones in their hands, albeit unnecessarily, just to show passersby that they were indeed conducting a telephone call as opposed to ranting to the air.

It would be advantageous to have additional solutions, in the form of an improved apparatus, an improved method, or both, to these unforeseen problems resulting from technological advances occurring in electronic devices.
`
BRIEF DESCRIPTION OF THE DRAWINGS
`
FIG. 1 illustrates a user interacting with a prior art electronic device having voice recognition capabilities.
FIG. 2 illustrates a schematic block diagram of one explanatory electronic device in accordance with one or more embodiments of the disclosure.
FIG. 3 illustrates one explanatory method in accordance with one or more embodiments of the disclosure.
FIG. 4 illustrates an alternate method step suitable for use with explanatory methods in accordance with one or more embodiments of the disclosure.
FIG. 5 illustrates an alternate method step suitable for use with explanatory methods in accordance with one or more embodiments of the disclosure.
FIG. 6 illustrates another explanatory method in accordance with one or more embodiments of the disclosure.
FIG. 7 illustrates another explanatory method in accordance with one or more embodiments of the disclosure.

Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present disclosure.
`
DETAILED DESCRIPTION OF THE DRAWINGS

Before describing in detail embodiments that are in accordance with the present disclosure, it should be observed that the embodiments reside primarily in combinations of method steps and apparatus components related to transitioning a voice control interface engine of an electronic device from a first mode, where voice commands can be received from a first distance and audible output returned at a first output level, to a second mode, where voice commands are received from only a second, shorter distance and audible output is delivered at a second, lesser output level.

Process descriptions or blocks in a flow chart can be modules, segments, or portions of code that implement specific logical functions of a machine or steps in a process, or alternatively that transition specific hardware components into different states or modes of operation. Alternate implementations are included, and it will be clear that functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved.
It will be appreciated that embodiments of the disclosure described herein may be comprised of one or more conventional processors and unique stored program instructions that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of transitioning a voice control interface engine of an electronic device between a first mode and a second mode as described herein. The non-processor circuits may include, but are not limited to, microphones, loudspeakers, acoustic amplifiers, digital to analog converters, signal drivers, clock circuits, power source circuits, and user input devices. As such, these functions may be interpreted as steps of a method to perform the transition of a voice control interface engine between a first mode and a second mode of operation. Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more application specific integrated circuits (ASICs), in which each function or some combinations of certain of the functions are implemented as custom logic. Of course, a combination of the two approaches could be used. Thus, methods and means for these functions have been described herein. Further, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein, will be readily capable of generating such software instructions and programs and ICs with minimal experimentation.
Embodiments of the disclosure are now described in detail. Referring to the drawings, like numbers indicate like parts throughout the views. As used in the description herein and throughout the claims, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise: the meaning of "a," "an," and "the" includes plural reference; the meaning of "in" includes "in" and "on." Relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. As used herein, components may be "operatively coupled" when information can be sent between such components, even though there may be one or more intermediate or intervening components between, or along, the connection path. Also, reference designators shown herein in parenthesis indicate components shown in a figure other than the one in discussion. For example, talking about a device (10) while discussing figure A would refer to an element, 10, shown in a figure other than figure A.
Embodiments of the disclosure provide methods and apparatuses for transitioning a voice control interface engine operating in an electronic device between a normal mode of operation and a discreet mode of operation. In one embodiment, a voice control interface engine is operable to receive voice commands and deliver audible responses to a user. For example, the voice control interface engine can receive a speech command in which a user asks a question. The electronic device may then search the Internet for the answer and, in response to receiving the speech command, deliver an audible output to the user with the answer.

Embodiments of the disclosure contemplate that one unforeseen consequence of voice recognition systems is that a user may not want passersby to hear the audible output. This is especially true when the audible output includes an enunciation of personal information. With this unforeseen problem in mind, embodiments of the disclosure advantageously provide a method and system to cause the voice control interface engine to enter a second, "discreet" mode of operation.
In one embodiment, a voice control interface engine operating in an electronic device is operative in a first mode. The first mode, in one embodiment, is a normal mode of operation or a default mode of operation. When operating in this mode, the voice control interface engine is operable to receive a speech command, through a microphone, from a first distance. The voice control interface engine then produces, through a loudspeaker and in response to the speech command, an audible output at a first output level. Illustrating by example, the voice control interface engine may be operable to receive voice commands from a user standing two, three, or more feet away and then deliver the audible output to a loudspeaker at a level sufficient for the user to hear it from the same distance.
One or more processors, which can be operable with one or more sensors, also function within the electronic device. In one embodiment, the one or more processors are operable with the one or more sensors to detect a predefined user input. Examples of the predefined user input include a predefined motion of the electronic device, a predefined gesture input, detection of a user's head becoming proximately located with the electronic device, or actuation of a user actuation target of the electronic device. In one embodiment, when this occurs, the one or more processors are operable to transition the voice control interface engine to a second mode operative to receive the speech command from a second distance and produce, in response to the speech command, the audible output at a second output level. In one embodiment the second distance is less than the first distance and the second output level is less than the first output level. This second mode of operation, i.e., the discreet mode, allows the user to deliver voice commands with a much lower volume and receive responses at a level that others will not overhear. For instance, when in the discreet mode of operation, the user may whisper voice commands to the microphone, while hearing audible responses from an earpiece speaker rather than a loudspeaker.
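By way of illustration only, the following minimal Python sketch shows one way the mode transition just described might be organized in software. It is not taken from the disclosure: the audio-driver facade (set_microphone_gain, route_output) and the event names are hypothetical stand-ins for platform-specific hooks.

```python
# Illustrative sketch only; not part of the original disclosure.
# The audio_hw facade and its methods are hypothetical.
from enum import Enum


class Mode(Enum):
    NORMAL = 1    # first mode: far-field capture, loudspeaker output
    DISCREET = 2  # second mode: near-field capture, earpiece output


class VoiceControlInterfaceEngine:
    def __init__(self, audio_hw):
        self.audio_hw = audio_hw
        self.mode = Mode.NORMAL
        self._apply(self.mode)

    def _apply(self, mode):
        if mode is Mode.NORMAL:
            # receive speech from a first (larger) distance and answer
            # at a first (louder) output level
            self.audio_hw.set_microphone_gain(high=True)
            self.audio_hw.route_output("loudspeaker", attenuation_db=0)
        else:
            # receive whispered speech from a second (shorter) distance
            # and answer at a second (lesser) output level
            self.audio_hw.set_microphone_gain(high=False)
            self.audio_hw.route_output("earpiece", attenuation_db=30)

    def on_predefined_user_input(self, event):
        # e.g. a predefined motion, a gesture, the device raised to the
        # user's head, or actuation of a user actuation target
        if event in ("raise_to_ear", "gesture", "actuation_target"):
            self.mode = Mode.DISCREET if self.mode is Mode.NORMAL else Mode.NORMAL
            self._apply(self.mode)
```

The point of the sketch is the pairing: a single predefined user input flips the capture distance and the output level together, so whispered commands and quiet responses always travel as a set.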
Turning now to FIG. 1, illustrated therein is a prior art electronic device 100 configured with a voice controlled user interface. One example of such a prior art electronic device 100 is described in US Published Patent Application No. 2014/0278443 to Gunn et al., which is incorporated herein by reference. Essentially, the prior art electronic device 100 includes a voice controlled user interface to receive a speech command phrase, identify a speech command phrase segment, and perform a control operation in response to the segment. In one embodiment, the control operation is the delivery of an audible response.
FIG. 1 illustrates a use case that highlights an unforeseen problem associated with the otherwise incredibly convenient functionality offered by the voice controlled user interface. A user 101 delivers, in a normal conversational tone, a voice command 102 that asks, "How tall is the Sears Tower?" The prior art electronic device 100, using its voice controlled user interface and one or more other applications, retrieves the answer from a remote source and announces the answer with an audible output 103. In this case, the prior art electronic device announces, at a volume level sufficient for the user 101 to hear it from several feet away, "Fourteen hundred and fifty-one feet."
Two things are of note in FIG. 1. First, due to the convenience offered by the voice controlled user interface, the user 101 has been able to determine a trivia fact simply by speaking. The user 101 did not have to access a book, computer, or other person. The prior art electronic device 100 simply found the answer and delivered it.

Second, the audible output 103 was delivered at an output level that was sufficient for the user 101 to hear it from a distance away. Embodiments of the disclosure contemplate that if the user 101 was able to hear it from a few feet away, so too would a passerby or eavesdropper. Embodiments of the disclosure contemplate that the user 101 may not care if a third party listens in on the answer to the question, "How tall is the Sears Tower?" However, if the user's voice command had been "play me my voice mail," the user 101 may not want a third party to hear their doctor giving a medical diagnosis. Similarly, the user 101 may not want a third party to hear their significant other breaking up with them or using expletives after they forgot an anniversary. Advantageously, embodiments of the disclosure provide an apparatus and method for transitioning a voice control interface engine into a second, discreet mode of operation where the medical diagnosis, breakup, or expletives are heard only by the person for whom they were intended.
Turning now to FIG. 2, illustrated therein is one explanatory electronic device 200 configured in accordance with one or more embodiments of the disclosure. The electronic device 200 of FIG. 2 is a portable electronic device, and is shown as a smartphone for illustrative purposes. However, it should be obvious to those of ordinary skill in the art having the benefit of this disclosure that other electronic devices may be substituted for the explanatory smartphone of FIG. 2. For example, the electronic device 200 could equally be a palm-top computer, a tablet computer, a gaming device, a media player, or other device.

This illustrative electronic device 200 includes a display 202, which may optionally be touch-sensitive. In one embodiment where the display 202 is touch-sensitive, the display 202 can serve as a primary user interface 211 of the electronic device 200. Users can deliver user input to the display 202 of such an embodiment by delivering touch input from a finger, stylus, or other objects disposed proximately with the display. In one embodiment, the display 202 is configured as an active matrix organic light emitting diode (AMOLED) display. However, it should be noted that other types of displays, including liquid crystal displays, would be obvious to those of ordinary skill in the art having the benefit of this disclosure.
The explanatory electronic device 200 of FIG. 2 includes a housing 201. In one embodiment, the housing 201 includes two housing members. A front housing member 227 is disposed about the periphery of the display 202 in one embodiment. A rear housing member 228 forms the backside of the electronic device 200 in this illustrative embodiment and defines a rear major face of the electronic device.

Features can be incorporated into the housing members 227, 228. Examples of such features include an optional camera 229 or an optional speaker port 232 disposed atop a loudspeaker. These features are shown being disposed on the rear major face of the electronic device 200 in this embodiment, but could be located elsewhere. In this illustrative embodiment, a user interface component, which may be a button 214 or touch sensitive surface, can also be disposed along the rear housing member 228.
In one embodiment, the electronic device 200 includes one or more connectors 212, 213, which can include an analog connector, a digital connector, or combinations thereof. In this illustrative embodiment, connector 212 is an analog connector disposed on a first edge, i.e., the top edge, of the electronic device 200, while connector 213 is a digital connector disposed on a second edge opposite the first edge, which is the bottom edge in this embodiment.
A block diagram schematic 215 of the electronic device 200 is also shown in FIG. 2. In one embodiment, the electronic device 200 includes one or more processors 216. In one embodiment, the one or more processors 216 can include an application processor and, optionally, one or more auxiliary processors. One or both of the application processor or the auxiliary processor(s) can include one or more processors. One or both of the application processor or the auxiliary processor(s) can be a microprocessor, a group of processing components, one or more ASICs, programmable logic, or other type of processing device. The application processor and the auxiliary processor(s) can be operable with the various components of the electronic device 200. Each of the application processor and the auxiliary processor(s) can be configured to process and execute executable software code to perform the various functions of the electronic device 200. A storage device, such as memory 218, can optionally store the executable software code used by the one or more processors 216 during operation.
In this illustrative embodiment, the electronic device 200 also includes a communication circuit 225 that can be configured for wired or wireless communication with one or more other devices or networks. The networks can include a wide area network, a local area network, and/or a personal area network. Examples of wide area networks include GSM, CDMA, W-CDMA, CDMA-2000, iDEN, TDMA, 2.5 Generation 3GPP GSM networks, 3rd Generation 3GPP WCDMA networks, 3GPP Long Term Evolution (LTE) networks, and 3GPP2 CDMA communication networks, UMTS networks, E-UTRA networks, GPRS networks, iDEN networks, and other networks.

The communication circuit 225 may also utilize wireless technology for communication, such as, but not limited to, peer-to-peer or ad hoc communications such as HomeRF, Bluetooth, and IEEE 802.11 (a, b, g, or n), and other forms of wireless communication such as infrared technology. The communication circuit 225 can include wireless communication circuitry, one of a receiver, a transmitter, or transceiver, and one or more antennas 226.
In one embodiment, the one or more processors 216 can be responsible for performing the primary functions of the electronic device 200. For example, in one embodiment the one or more processors 216 comprise one or more circuits operable with one or more user interface devices, which can include the display 202, to present presentation information to a user. The executable software code used by the one or more processors 216 can be configured as one or more modules 220 that are operable with the one or more processors 216. Such modules 220 can store instructions, control algorithms, logic steps, and so forth.
In one embodiment, one or more proximity sensors 208 can be operable with the one or more processors 216. In one embodiment, the one or more proximity sensors 208 include one or more signal receivers and signal transmitters. The signal transmitters emit electromagnetic or infrared signals that reflect off of objects to the signal receivers, thereby detecting an object proximately located with the electronic device 200. It should be noted that each of the proximity sensors 208 can be any one of various types of proximity sensors, such as, but not limited to, capacitive, magnetic, inductive, optical/photoelectric, laser, acoustic/sonic, radar-based, Doppler-based, thermal, and radiation-based proximity sensors. Other types of sensors will be obvious to those of ordinary skill in the art.
In one embodiment, one or more proximity sensors 208 can be infrared proximity sensors that transmit a beam of infrared light that reflects from a nearby object and is received by a corresponding signal receiver. The proximity sensors 208 can be used, for example, to compute the distance to any nearby object from characteristics associated with the reflected signals. The reflected signals are detected by the corresponding signal receiver, which may be an infrared photodiode used to detect reflected light emitting diode (LED) light, respond to modulated infrared signals, and/or perform triangulation of received infrared signals. The reflected signals can also be used to receive user input from a user delivering touch or gesture input to the electronic device 200.
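For illustration, the sketch below shows one common way such a distance computation can be done, assuming a simple inverse-square falloff of reflected intensity. The calibration constants are hypothetical, and the disclosure itself does not specify a formula.

```python
# Illustrative sketch only: assumes reflected IR intensity falls off
# roughly with the square of the target distance. Constants are
# hypothetical calibration values, not taken from the disclosure.
import math

def estimate_distance_cm(reflected_adc, baseline_adc,
                         ref_distance_cm=5.0, ref_adc=800.0):
    """Estimate target distance from an IR proximity sensor reading.

    reflected_adc -- raw photodiode reading with the IR LED on
    baseline_adc  -- ambient reading with the LED off (subtracted out)
    ref_distance_cm / ref_adc -- one calibration point: the net reading
        observed for a reference target at a known distance
    """
    net = reflected_adc - baseline_adc
    if net <= 0:
        return float("inf")  # nothing detected within range
    # inverse-square model: net = ref_adc * (ref_distance / d)**2
    return ref_distance_cm * math.sqrt(ref_adc / net)

# A strong reflection reads close, a weak one far:
print(estimate_distance_cm(900, 100))   # -> 5.0 (cm)
print(estimate_distance_cm(150, 100))   # -> 20.0 (cm)
```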
In one embodiment, the one or more processors 216 may generate commands based on information received from one or more proximity sensors 208. The one or more processors 216 may generate commands based upon information received from a combination of the one or more proximity sensors 208 and one or more other sensors 209. Alternatively, the one or more processors 216 can generate commands based upon information received from the one or more other sensors 209 alone. Thus, the one or more processors 216 may process the received information alone or in combination with other data, such as the information stored in the memory 218.
The one or more other sensors 209 may include a microphone 240, an earpiece speaker 241, a second loudspeaker (disposed beneath speaker port 232), and a mechanical input component such as button 214. The one or more other sensors 209 may also include key selection sensors, a touch pad sensor, a touch screen sensor, a capacitive sensor, and one or more switches. Touch sensors may be used to indicate whether any of the user actuation targets 204, 205, 206, 207 present on the display 202 are being actuated. Alternatively, touch sensors in the housing 201 can be used to determine whether the electronic device 200 is being touched at its side edges, thus indicating whether certain orientations or movements of the electronic device 200 are being performed by a user. The other sensors 209 can also include surface/housing capacitive sensors, audio sensors, and video sensors (such as a camera).
The other sensors 209 can also include motion detectors 242, such as accelerometers or gyroscopes. For example, an accelerometer may be embedded in the electronic circuitry of the electronic device 200 to show vertical orientation, constant tilt, and/or whether the device is stationary. The motion detectors 242 are also operable to detect movement of the electronic device 200 by a user. In one or more embodiments, the other sensors 209 and the motion detectors 242 can each be used as a gesture detection device. Illustrating by example, in one embodiment a user can deliver gesture input by moving a hand or arm in predefined motions in close proximity to the electronic device 200. In another embodiment, the user can deliver gesture input by touching the display 202. In yet another embodiment, a user can deliver gesture input by shaking or otherwise deliberately moving the electronic device 200. Other modes of delivering gesture input will be obvious to those of ordinary skill in the art having the benefit of this disclosure.
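As an illustration of the shake-style gesture input mentioned above, the following sketch flags a deliberate shake from raw accelerometer samples. The thresholds and window are hypothetical tuning values, not taken from the disclosure.

```python
# Illustrative sketch only: flags a shake gesture when several
# high-acceleration spikes cluster within a short window.
# Threshold and window values are hypothetical.
import math
from collections import deque

GRAVITY = 9.81                    # m/s^2
SHAKE_THRESHOLD = 2.5 * GRAVITY   # spikes well above normal handling
WINDOW_S = 1.0                    # spikes must cluster within one second
SPIKES_NEEDED = 3                 # several jolts, not a single bump

class ShakeDetector:
    def __init__(self):
        self.spike_times = deque()

    def on_sample(self, t, ax, ay, az):
        """Feed one (timestamp, x, y, z) accelerometer sample; returns
        True when a shake gesture is recognized."""
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > SHAKE_THRESHOLD:
            self.spike_times.append(t)
        # drop spikes that have fallen out of the time window
        while self.spike_times and t - self.spike_times[0] > WINDOW_S:
            self.spike_times.popleft()
        if len(self.spike_times) >= SPIKES_NEEDED:
            self.spike_times.clear()
            return True   # report as a "predefined user input"
        return False
```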
Other components operable with the one or more processors 216 can include output components 243 such as video outputs, audio outputs 244, and/or mechanical outputs. Examples of output components include audio outputs 244 such as speaker port 232, earpiece speaker 241, or other alarms and/or buzzers, and/or a mechanical output component such as vibrating or motion-based mechanisms.
In one embodiment, the one or more processors 216 are operable to change a gain on the microphone 240 such that voice input from a user can be received from different distances. For example, in one embodiment the one or more processors 216 are operable to operate the microphone 240 in a first mode at a first gain sensitivity so that voice commands from a user can be received from more than one foot away from the device. If the electronic device 200 is a smartphone, for instance, the one or more processors 216 may operate the microphone 240 in a first mode at a first gain sensitivity to receive voice input from a user when operating in a speakerphone mode, for example. Similarly, when the electronic device 200 is configured with a voice control interface engine 245, the one or more processors 216 may operate the microphone 240 in a first mode at a first gain sensitivity to receive voice input from a user several feet away. This would cause the microphone 240 to function as did the microphone of prior art electronic device (100) of FIG. 1, in which voice commands (102) could be received from several feet away.
In one embodiment, the one or more processors 216 may further operate the microphone 240 in a second mode at a second gain sensitivity to receive voice input from a user. In one embodiment, the second gain sensitivity is less than the first gain sensitivity. This results in voice input being received from closer distances at lower levels. If the electronic device 200 is a smartphone, for instance, the one or more processors 216 may operate the microphone 240 in a second mode at a second gain sensitivity to receive voice input from a user when the electronic device 200 is placed against the user's face. As the microphone 240 is very close to the user's mouth, this second, lesser gain sensitivity can be used to capture lower volume voice input from the user. Similarly, when the electronic device 200 is configured with a voice control interface engine 245, the one or more processors 216 may operate the microphone 240 in a second mode at a second gain sensitivity to receive voice input from a user's mouth that may be only an inch (or less) from the microphone 240. Not only can this assist in keeping third parties and eavesdroppers from hearing a conversation when operating in the discreet mode of operation, but it can be of assistance in noisy environments since the user is delivering voice commands from a close proximity to the microphone 240.
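For illustration, a minimal sketch of this two-level gain selection follows. The decibel figures and the codec driver interface are hypothetical; the disclosure requires only that the second gain sensitivity be less than the first.

```python
# Illustrative sketch only: gain values and set_capture_gain_db() are
# hypothetical stand-ins for a platform's audio-codec driver.
FIRST_MODE_GAIN_DB = 30   # far-field: normal speech from feet away
SECOND_MODE_GAIN_DB = 6   # near-field: a whisper an inch from the mic

def configure_microphone(codec, discreet_mode):
    """Select the microphone capture gain for the current operating mode."""
    gain_db = SECOND_MODE_GAIN_DB if discreet_mode else FIRST_MODE_GAIN_DB
    codec.set_capture_gain_db(gain_db)
    return gain_db
```

Because the second-mode gain is lower, only speech delivered from very close range rises well above the ambient noise floor, which is also what makes the discreet mode useful in noisy environments.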
In a similar fashion, the one or more processors 216 can operate one or both of the earpiece speaker 241 and/or the loudspeaker under speaker port 232 in either a first mode or a second mode. In one embodiment, the one or more processors 216 are operable to change a gain of either speaker such that audible output from the electronic device 200 can be heard by a user at different distances. For example, in one embodiment the one or more processors 216 are operable to operate one or both of the earpiece speaker 241 and/or the loudspeaker under speaker port 232 in a first mode at a first gain so that audible output is produced at a first output level. In one embodiment, the first output level is a volume sufficient that the audible output can be heard from more than one foot away from the device. If the electronic device 200 is a smartphone, for instance, the one or more processors 216 may operate one or both of the earpiece speaker 241 and/or the loudspeaker under speaker port 232 in a first mode at a first gain to produce output at a louder volume when operating in a speakerphone mode, for example. Similarly, when the electronic device 200 is configured with a voice control interface engine 245, the one or more processors 216 may operate the one or both of the earpiece speaker 241 and/or the loudspeaker under speaker port 232 in a first mode at a first gain to produce audible output at a first output level so that a user can hear the audible output from several feet away. This would cause the one or both of the earpiece speaker 241 and/or the loudspeaker under speaker port 232 to function as did the loudspeaker of prior art electronic device (100) of FIG. 1 in wh