US 20150195641A1

(19) United States
(12) Patent Application Publication          (10) Pub. No.: US 2015/0195641 A1
     Di Censo et al.                         (43) Pub. Date: Jul. 9, 2015

(54) SYSTEM AND METHOD FOR USER CONTROLLABLE AUDITORY ENVIRONMENT CUSTOMIZATION

(71) Applicant: HARMAN INTERNATIONAL INDUSTRIES, INC., Stamford, CT (US)

(72) Inventors: Davide Di Censo, San Mateo, CA (US); Stefan Marti, Oakland, CA (US); Ajay Juneja, Mountain View, CA (US)

(73) Assignee: HARMAN INTERNATIONAL INDUSTRIES, INC., Stamford, CT (US)

(21) Appl. No.: 14/148,689

(22) Filed: Jan. 6, 2014

Publication Classification

(51) Int. Cl.
     H04R 1/10 (2006.01)
(52) U.S. Cl.
     CPC ... H04R 1/1083 (2013.01); H04R 2430/00 (2013.01)

(57) ABSTRACT

A method for generating an auditory environment for a user may include receiving a signal representing an ambient auditory environment of the user, processing the signal using a microprocessor to identify at least one of a plurality of types of sounds in the ambient auditory environment, receiving user preferences corresponding to each of the plurality of types of sounds, modifying the signal for each type of sound in the ambient auditory environment based on the corresponding user preference, and outputting the modified signal to at least one speaker to generate the auditory environment for the user. A system may include a wearable device having speakers, microphones, and various other sensors to detect a noise context. A microprocessor processes ambient sounds and generates modified audio signals using attenuation, amplification, cancellation, and/or equalization based on user preferences associated with particular types of sounds.

[Cover figure: user interface 600 of FIG. 6, with controls labeled CANCEL ME, CANCEL NOISE, CANCEL VOICES, and CANCEL ALERTS]

[FIG. 1 (Sheet 1 of 5): operation of the system — a user surrounded by ambient sound sources, including alerts 106 (sirens, PA, weather) and advertising]

[FIG. 2 (Sheet 2 of 5): flowchart — in-ear devices reproduce the environment without modifications (210); the user sets auditory preferences (220); the preferences get communicated to the in-ear devices (230); the in-ear devices apply the user's preferences (240) through sounds cancellation, sounds addition, sounds enhancement, and sounds lowering (242, 244, 246, 248); if the user's preferences change (250), they are applied again]

[FIG. 3 (Sheet 3 of 5): block diagram of system 300 — microphone(s) 312, a DSP, amplifier(s), and speaker(s) 316; context sensors (GPS, gyro, accelerometer); an audio source; a database (library); an internet connection; and a mobile/wearable device 320 with a user interface, preferences, and memory]

[FIG. 4 (Sheet 4 of 5): functional block diagram — a DSP analyzes the sounds in the auditory scene (420) and separates them; for each sound, the user preferences determine a modification such as cancel sound (replace with inverse), replace with another sound, change sound (e.g., pitch, etc.), increase level, or decrease level; microphones and context sensors provide inputs]

[FIG. 5 (Sheet 5 of 5): user interface 500 — sliders for NOISE (510), ME (520), VOICES (530), and ALERTS (540), each adjustable along a scale from OFF (552) through LOW (554) to REAL LOUD (560)]

[FIG. 6: user interface 600 — controls CANCEL NOISE (610), CANCEL VOICES (612), CANCEL ME, and CANCEL ALERTS]

SYSTEM AND METHOD FOR USER CONTROLLABLE AUDITORY ENVIRONMENT CUSTOMIZATION

TECHNICAL FIELD

[0001] This disclosure relates to systems and methods for a user controllable auditory environment using wearable devices, such as headphones, speakers, or in-ear devices, for example, to selectively cancel, add, enhance, and/or attenuate auditory events for the user.

BACKGROUND

[0002] Various products have been designed with the goal of eliminating unwanted sounds or “auditory pollution” so that users can listen to a desired audio source or substantially eliminate noises from surrounding activities. More and more objects, events, and situations continue to generate auditory information of various kinds. Some of this auditory information is welcomed, but much of it may be perceived as distracting, unwanted, and irrelevant. One’s natural ability to focus on certain sounds and ignore others is continually challenged and may decrease with age.

[0003] Various types of noise cancelling headphones and hearing aid devices allow users some control or influence over their auditory environment. Noise cancelling systems usually cancel or enhance the overall sound field, but do not distinguish between various types of sounds or sound events. In other words, the cancellation or enhancement is not selective and cannot be finely tuned by the user. While some hearing aid devices can be tuned for use in certain environments and settings, those systems often do not provide desired flexibility and fine-grained dynamic control to influence the user’s auditory environment. Similarly, in-ear monitoring devices, such as those worn by artists on stage, may be fed with a very specific sound mix prepared by a monitor mixing engineer. However, this is a manual process and uses only additive mixing.

SUMMARY

[0004] Embodiments according to the present disclosure include a system and method for generating an auditory environment for a user that may include receiving a signal representing an ambient auditory environment of the user, processing the signal using a microprocessor to identify at least one of a plurality of types of sounds in the ambient auditory environment, receiving user preferences corresponding to each of the plurality of types of sounds, modifying the signal for each type of sound in the ambient auditory environment based on the corresponding user preference, and outputting the modified signal to at least one speaker to generate the auditory environment for the user. In one embodiment, a system for generating an auditory environment for a user includes a speaker, a microphone, and a digital signal processor configured to receive an ambient audio signal from the microphone representing an ambient auditory environment of the user, process the ambient audio signal to identify at least one of a plurality of types of sounds in the ambient auditory environment, modify the at least one type of sound based on received user preferences, and output the modified sound to the speaker to generate the auditory environment for the user.

[0005] Various embodiments may include receiving a sound signal from an external device in communication with the microprocessor, and combining the sound signal from the external device with the modified types of sound. The sound signal from an external device may be wirelessly transmitted and received. The external device may communicate over a local or wide area network, such as the internet, and may include a database having stored sound signals of different types of sounds that may be used in identifying sound types or groups. Embodiments may include receiving user preferences wirelessly from a user interface generated by a second microprocessor, which may be embedded in a mobile device, such as a cell phone, for example. The user interface may dynamically generate user controls to provide a context-sensitive user interface in response to the ambient auditory environment of the user. As such, controls may only be presented where the ambient environment includes a corresponding type or group of sounds. Embodiments may include one or more context sensors to identify expected sounds and associated spatial orientation relative to the user within the audio environment. Context sensors may include a GPS sensor, accelerometer, or gyroscope, for example, in addition to one or more microphones.

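By way of a non-limiting software illustration, the sketch below builds a control only for each sound type detected in the ambient signal; the SliderControl class and the detect_sound_types helper are hypothetical placeholders rather than part of this disclosure.

```python
# Illustrative sketch only: present a control for each sound type that
# is actually detected in the ambient signal (cf. FIG. 6).
from dataclasses import dataclass

@dataclass
class SliderControl:
    sound_type: str      # e.g., "noise", "voices", "alerts"
    level: float = 1.0   # 1.0 = unmodified, 0.0 = cancelled, >1.0 = amplified

def detect_sound_types(ambient_frame):
    """Stand-in for the classifier described in this disclosure; it
    would return labels such as {"noise", "voices"} by comparing the
    frame against stored sound signatures."""
    raise NotImplementedError

def build_context_sensitive_ui(ambient_frame):
    # Only detected sound types get a control, so the interface
    # adapts to the ambient auditory environment.
    return [SliderControl(t) for t in sorted(detect_sound_types(ambient_frame))]
```
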
[0006] Embodiments of the disclosure may also include generating a context-sensitive user interface by displaying a plurality of controls corresponding to selected sounds or default controls for anticipated sounds in the ambient auditory environment. Embodiments may include various types of user interfaces generated by the microprocessor or by a second microprocessor associated with a mobile device, such as a cell phone, laptop computer, tablet computer, wrist watch, or other wearable accessory or clothing, for example. In one embodiment, the user interface captures user gestures to specify at least one user preference associated with one of the plurality of types of sounds. Other user interfaces may include graphical displays on touch-sensitive screens, such as slider bars, radio buttons, check boxes, etc. The user interface may be implemented using one or more context sensors to detect movements or gestures of the user. A voice-activated user interface may also be provided with voice recognition to provide user preferences or other system commands to the microprocessor.

[0007] The received ambient audio signal may be processed by dividing the signal into a plurality of component signals each representing one of the plurality of types of sounds, modifying each of the component signals for each type of sound in the ambient auditory environment based on the corresponding user preference, generating a left signal and a right signal for each of the plurality of component signals based on a corresponding desired spatial position for the type of sound within the auditory environment of the user, combining the left signals into a combined left signal, and combining the right signals into a combined right signal. The combined left signal is provided to a first speaker and the combined right signal is provided to a second speaker. Modifying the signal may include adjusting the signal amplitude and/or frequency spectrum associated with one or more component sound types by attenuating the component signal, amplifying the component signal, equalizing the component signal, cancelling the component signal, and/or replacing one type of sound with another type of sound in the component signal. Cancelling a sound type or group may be performed by generating an inverse signal having substantially equal amplitude and substantially opposite phase relative to the one type or group of sound.

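A minimal sketch of this processing flow (dividing, per-type modification, left/right spatialization, and recombination) follows; the separate_components callable, the preference and position dictionaries, and the constant-power panning law are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def pan_gains(azimuth_deg):
    # Constant-power panning: map an azimuth in [-90, 90] degrees
    # (negative = left) to left/right gains for the desired position.
    theta = np.radians((azimuth_deg + 90.0) / 2.0)
    return np.cos(theta), np.sin(theta)

def process_frame(ambient, separate_components, preferences, positions):
    """ambient: mono ambient frame (1-D float array).
    separate_components: hypothetical callable mapping the frame to
        {sound_type: component_signal} (the dividing step above).
    preferences: {sound_type: gain}; 0.0 cancels, 2.0 amplifies, etc.
    positions: {sound_type: azimuth_deg} for spatial placement."""
    left = np.zeros_like(ambient)
    right = np.zeros_like(ambient)
    for sound_type, component in separate_components(ambient).items():
        # Modify each component signal per the corresponding preference.
        modified = preferences.get(sound_type, 1.0) * component
        # Generate left/right signals for the desired spatial position,
        # then combine them into the two output signals.
        gl, gr = pan_gains(positions.get(sound_type, 0.0))
        left += gl * modified    # combined left signal -> first speaker
        right += gr * modified   # combined right signal -> second speaker
    return left, right
```
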
[0008] Various embodiments of a system for generating an auditory environment for a user may include a speaker, a microphone, and a digital signal processor configured to receive an ambient audio signal from the microphone representing an ambient auditory environment of the user, process the ambient audio signal to identify at least one of a plurality of types of sounds in the ambient auditory environment, modify the at least one type of sound based on received user preferences, and output the modified sound to the speaker to generate the auditory environment for the user. The speaker and the microphone may be disposed within an ear bud configured for positioning within an ear of the user, or within ear cups configured for positioning over the ears of a user. The digital signal processor or other microprocessor may be configured to compare the ambient audio signal to a plurality of sound signals to identify the at least one type of sound in the ambient auditory environment.

[0009] Embodiments also include a computer program product for generating an auditory environment for a user that includes a computer readable storage medium having stored program code executable by a microprocessor to process an ambient audio signal to separate the ambient audio signal into component signals each corresponding to one of a plurality of groups of sounds, modify the component signals in response to corresponding user preferences received from a user interface, and combine the component signals after modification to generate an output signal for the user. The computer readable storage medium may also include code to receive user preferences from a user interface having a plurality of controls selected in response to the component signals identified in the ambient audio signal, and code to change at least one of an amplitude or a frequency spectrum of the component signals in response to the user preferences.

[0010] Various embodiments may have associated advantages. For example, embodiments of a wearable device or related method may improve hearing capabilities, attention, and/or concentration abilities of a user by selectively processing different types or groups of sounds based on different user preferences for various types of sounds. This may result in lower cognitive load for auditory tasks and provide stronger focus when listening to conversations, music, talks, or any kind of sounds. Systems and methods according to the present disclosure may allow the user to enjoy only the sounds that he/she desires to hear from the auditory environment; enhance his/her auditory experience with functionalities such as beautification of sounds (replacing noise or unwanted sounds with nature sounds or music, for example) and real-time translation during conversations; stream audio and phone conversations directly to his/her ears, freed from the need of holding a device next to his/her ear; and add any additional sounds (e.g., music or voice recordings) to his/her auditory field.

[0011] Various embodiments may allow the user to receive audio signals from an external device over a local or wide area network. This facilitates context-aware advertisements that may be provided to a user, as well as context-aware adjustments to the user interface or user preferences. The user may be given complete control over their personal auditory environment, which may result in reduced information overload and reduced stress.

[0012] The above advantages and other advantages and features of the present disclosure will be readily apparent from the following detailed description of the preferred embodiments when taken in connection with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

[0013] FIG. 1 illustrates operation of a representative embodiment of a system or method for generating a customized or personalized auditory environment for a user;

[0014] FIG. 2 is a flowchart illustrating operation of a representative embodiment of a system or method for generating a user controllable auditory environment;

[0015] FIG. 3 is a block diagram illustrating a representative embodiment of a system for generating an auditory environment for a user based on user preferences;

[0016] FIG. 4 is a block diagram illustrating functional blocks of a system for generating an auditory environment for a user of a representative embodiment; and

[0017] FIGS. 5 and 6 illustrate representative embodiments of a user interface having controls for specifying user preferences associated with particular types or groups of sounds.

DETAILED DESCRIPTION

[0018] Embodiments of the present disclosure are described herein. It is to be understood, however, that the disclosed embodiments are merely examples and other embodiments can take various and alternative forms. The figures are not necessarily to scale; some features could be exaggerated or minimized to show details of particular components. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the teachings of the disclosure. As those of ordinary skill in the art will understand, various features illustrated and described with reference to any one of the figures may be combined with features illustrated in one or more other figures to produce embodiments that are not explicitly illustrated or described. The combinations of features illustrated provide representative embodiments for typical applications. Various combinations and modifications of the features consistent with the teachings of this disclosure, however, could be desired for particular applications or implementations. Some of the description may specify a number of components that may be used or a spatial reference in a drawing, such as above, below, inside, outside, etc. Any such spatial references, references to shapes, or references to the numbers of components that may be utilized are merely used for convenience and ease of illustration and description and should not be construed in any limiting manner.

[0019] FIG. 1 illustrates operation of a representative embodiment of a system or method for generating a user controllable auditory environment for a user that may be personalized or customized in response to user preferences for particular types or groups of sounds. System 100 includes a user 120 surrounded by an ambient auditory environment including a plurality of types or groups of sounds. In the representative embodiment of FIG. 1, representative sound sources and associated types or groups of sounds are represented by traffic noise 102, a voice from a person 104 talking to user 120, various types of alerts 106, voices from a crowd or conversations 108 either not directed to user 120 or in a different spatial location than the voice from person 104, nature sounds 110, and music 112. The types or groups of sound or noise (which may include any undesired sounds) illustrated in FIG. 1 are representative only and are provided as non-limiting examples. The auditory environment or ambient sounds relative to user 120 will vary as the user moves to different locations and may include tens or hundreds of other types of sounds or noises, some of which are described in greater detail with reference to particular embodiments below.

[0020] Various sounds, such as those represented in FIG. 1, may be stored in a database and accessed to be added or inserted into the auditory environment of the user in response to user preferences as described in greater detail below. Similarly, various signal characteristics of representative or average sounds of a particular sound group or type may be extracted and stored in a database. These characteristics may then be used as signatures, compared against sounds from the current ambient auditory environment to identify the type or group of sound present within the ambient environment. One or more databases of sounds and/or sound signal characteristics may be stored on-board or locally within system 100 or may be accessed over a local or wide area network, such as the internet. Sound type signatures or profiles may be dynamically loaded or changed based on a current position, location, or context of user 120. Alternatively, one or more sound types or profiles may be downloaded or purchased by user 120 for use in replacing undesired sounds/noises, or for augmenting the auditory environment.

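As one non-limiting way such signature matching might be realized, the sketch below compares an averaged magnitude spectrum of the ambient signal against stored spectra by cosine similarity; the function names and the choice of spectral feature are assumptions for illustration only.

```python
import numpy as np

def spectral_signature(signal, n_fft=1024):
    # Averaged magnitude spectrum as a coarse signature for a sound
    # group or type; assumes a 1-D NumPy array with len(signal) >= n_fft.
    frames = signal[: len(signal) // n_fft * n_fft].reshape(-1, n_fft)
    spectrum = np.abs(np.fft.rfft(frames, axis=1)).mean(axis=0)
    return spectrum / (np.linalg.norm(spectrum) + 1e-12)

def classify(ambient_frame, signature_db):
    """signature_db: {sound_type: unit-norm stored signature vector}.
    Returns the best-matching sound type by cosine similarity."""
    sig = spectral_signature(ambient_frame)
    return max(signature_db, key=lambda t: float(np.dot(sig, signature_db[t])))
```
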
[0021] Similar to the stored sounds or representative signals described above, alerts 106 may originate within the ambient auditory environment of user 120 and be detected by an associated microphone, or may be directly transmitted to system 100 using a wireless communication protocol such as Wi-Fi, Bluetooth, or cellular protocols. For example, a regional weather alert or Amber alert may be transmitted and received by system 100 and inserted or added to the auditory environment of the user. Depending on the particular implementation, some alerts may be processed based on user preferences, while other alerts may not be subject to various types of user preferences, such as cancellation or attenuation, for example. Alerts may include context-sensitive advertisements, announcements, or information, such as when attending a concert, sporting event, or theater, for example.

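Purely as a hypothetical sketch, a received alert might be mixed into the output as follows, with a mandatory flag (an assumed policy, not specified here) letting certain alerts bypass user cancellation or attenuation preferences.

```python
import numpy as np

def mix_alert(output_frame, alert_frame, user_gain=1.0, mandatory=False):
    # Insert a wirelessly received alert into the output signal. The
    # "mandatory" flag is an assumed policy: such alerts are mixed at
    # full level and so cannot be cancelled or attenuated by the user.
    gain = 1.0 if mandatory else user_gain
    n = min(len(output_frame), len(alert_frame))
    mixed = np.asarray(output_frame, dtype=float).copy()
    mixed[:n] += gain * np.asarray(alert_frame, dtype=float)[:n]
    return mixed
```
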
[0022] As also shown in FIG. 1, system 100 includes a wearable device 130 that includes at least one microphone, at least one speaker, and a microprocessor-based digital signal processor (DSP) as illustrated and described in greater detail with reference to FIGS. 2-6. Wearable device 130 may be implemented by headphones or ear buds 134 that each contain an associated speaker and one or more microphones or transducers, which may include an ambient microphone to detect ambient sounds within the ambient auditory environment, and an internal microphone used in a closed loop feedback control system for cancellation of user selected sounds. Depending on the particular embodiment, the ear pieces 134 may be optionally connected by a headband 132, or may be configured for positioning around a respective ear of user 120. In one embodiment, earpieces 134 are in-the-ear devices that partially or substantially completely seal the ear canal of user 120 to provide passive attenuation of ambient sounds. In another embodiment, circumaural ear cups may be positioned over each ear to provide improved passive attenuation. Other embodiments may use supra-aural earpieces 134 that are positioned over the ear canal, but provide much less passive attenuation of ambient sounds.

[0023] In one embodiment, wearable device 130 includes in-the-ear or intra-aural earpieces 134 and operates in a default or initial processing mode such that earpieces 134 are acoustically “transparent”, meaning the system 100 does not alter the auditory field or environment experienced by user 120 relative to the current ambient auditory environment. Alternatively, system 100 may include a default mode that attenuates all sounds or amplifies all sounds from the ambient environment, or attenuates or amplifies particular frequencies of ambient sounds similar to operation of more conventional noise cancelling headphones or hearing aids, respectively. In contrast to such conventional systems, user 120 may personalize or customize his/her auditory environment using system 100 by setting different user preferences applied to different types or groups of sounds selected by an associated user interface. User preferences are then communicated to the DSP associated with earpieces 134 through wired or wireless technology, such as Wi-Fi, Bluetooth, or similar technology, for example. The wearable device 130 analyzes the current audio field and sounds 102, 104, 106, 108, 110, and 112 to determine what signals to generate to achieve the user’s desired auditory scene. If the user changes preferences, the system updates the configuration to reflect the changes and applies them dynamically.

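This behavior can be pictured as a simple control loop, consistent with the flowchart of FIG. 2; in the sketch below, the capture, playback, get_preferences, and apply_preferences callables are assumed bindings to the hardware and user interface, not a real API.

```python
def run_wearable_loop(capture, playback, get_preferences, apply_preferences):
    # Illustrative control loop: reproduce the ambient environment,
    # apply the current preferences, and pick up preference changes
    # dynamically (cf. blocks 210-250 of FIG. 2).
    prefs = get_preferences()                      # user sets preferences
    while True:
        frame = capture()                          # ambient microphone input
        playback(apply_preferences(frame, prefs))  # apply the preferences
        new_prefs = get_preferences()              # did preferences change?
        if new_prefs != prefs:
            prefs = new_prefs                      # re-apply dynamically
```
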
[0024] In one embodiment as generally depicted in FIG. 1, user 120 wears two in-ear or intra-aural devices 134 (one in each ear) that may be custom fitted or molded using technology similar to that used for hearing aids. Alternatively, stock sizes and/or removable tips or adapters may be used to provide a good seal and comfortable fit for different users. Devices 134 may be implemented by highly miniaturized devices that fit completely in the ear canal, and are therefore practically invisible so they do not trigger any social stigma related to hearing aid devices. This may also facilitate a more comfortable and “integrated” feel for the user. The effort and habit of wearing such devices 134 may be comparable to contact lenses, where the user inserts the devices 134 in the morning and then may forget that s/he is wearing them. Alternatively, the user may keep the devices in at night to take advantage of the system’s functionalities while s/he is sleeping, as described with respect to representative use cases below.

[0025] Depending on the particular implementation, earpieces 134 may isolate the user from the ambient auditory environment through passive and/or active attenuation or cancellation, while, at the same time, reproducing only the desired sound sources either with or without enhancement or augmentation. Wearable device 130, which may be implemented within earpieces 134, may also be equipped with wireless communication (integrated Bluetooth or Wi-Fi) to connect with various external sound sources, an external user interface, or other similar wearable devices.

[0026] Wearable device 130 may include context sensors (such as an accelerometer, gyroscope, GPS, etc.; FIG. 3) to accurately determine the user’s location and/or head position and orientation. This allows the system to reproduce voices and sounds in the correct spatial position as they occur within the ambient auditory environment so as not to confuse the user. As an example, if a voice comes from the left of the user and he turns his head 45 degrees toward his left, the voice is placed in the correct location of the stereo panorama so the user’s perception is not confused. Alternatively, the system can optimize the stereo panorama of a conversation (for example, by spreading out the audio sources), which may lower the user’s cognitive load in certain situations. In one embodiment, user 120 may provide user preferences to artificially or virtually relocate particular sound sources. For example, a user listening to a group conversation over a telephone or computer may position a speaker in a first location within the stereo panorama, and the audience in a second location within the stereo sound field or panorama. Similarly, multiple speakers could be virtually positioned at different locations within the auditory environment of the user as generated by wearable device 130.

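For illustration, compensating the reproduced source direction for head rotation reduces to subtracting the measured yaw from the world-fixed source azimuth, as in the following sketch (the angle convention, negative to the left, is an assumption):

```python
def world_to_head_azimuth(source_azimuth_deg, head_yaw_deg):
    """Convert a world-fixed source direction into the head-relative
    direction used for stereo panning (negative angles = left).

    Example: a voice at -45 degrees (the user's left); the user turns
    the head 45 degrees to the left (yaw -45), so the voice is rendered
    at 0 degrees, straight ahead, and the panorama stays stable."""
    # Wrap into [-180, 180) so the panner always receives a sane angle.
    return (source_azimuth_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
```
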
[0027] Although wearable device 130 is depicted with earpieces 134, other embodiments may include various components of system 100 contained within, or implemented by, different kinds of wearable devices. For example, the speakers and/or microphones may be disposed within a hat, scarf, shirt collar, jacket, hood, etc. Similarly, the user interface may be implemented within a separate mobile or wearable device, such as a smartphone, tablet, wrist watch, arm band, etc. The separate mobile or wearable device may include an associated microprocessor and/or digital signal processor that may also be used to provide additional processing power to augment the capabilities of the main system microprocessor and/or DSP.

[0028] As also generally depicted by the block diagram of system 100 in FIG. 1, a user interface (FIGS. 5-6) allows user 120 to create a personalized or customized auditory experience by setting his/her preferences, indicated by symbols 140, 142, 144, 146, for associated sound types to indicate which sounds to amplify, cancel, add or insert, or attenuate, respectively. Other functions may be used to enhance a sound by providing equalization or filtering, selective attenuation or amplification of one or more frequencies of an associated sound, or replacing an undesired sound with a more pleasant sound (using a combination of cancellation and addition/insertion, for example). The changes made by user 120 using the user interface are communicated to the wearable device 130 to control corresponding processing of input signals to create auditory output signals that implement the user preferences.

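One non-limiting way to map these preference settings onto signal operations is sketched below; the action labels and gain values are illustrative assumptions, and enhancement via equalization or replacement would slot in as additional branches.

```python
import numpy as np

def apply_preference(component, action, insert_signal=None):
    # Map the preference symbols of FIG. 1 (amplify 140, cancel 142,
    # add/insert 144, attenuate 146) onto simple signal operations.
    if action == "amplify":
        return 2.0 * component
    if action == "cancel":
        return np.zeros_like(component)  # see the inverse-signal sketch below
    if action == "attenuate":
        return 0.25 * component
    if action == "add" and insert_signal is not None:
        return component + insert_signal  # mix in a stored/streamed sound
    return component                      # default: pass through unchanged
```
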
[0029] For example, the user preference setting for cancellation represented at 142 may be associated with a sound group or type of “traffic noise” 102. Wearable device 130 may provide cancellation of this sound/noise in a manner similar to noise cancelling headphones by generating a signal having a substantially similar or equal amplitude that is substantially out of phase with the traffic noise 102. Unlike conventional noise cancelling headphones, the cancellation is selective based on the corresponding user preference 142. As such, in contrast to conventional noise cancelling headphones that attempt to reduce any/all noise, wearable device 130 cancels only the sound events that the user chooses not to hear, while providing the ability to further enhance or augment other sounds from the ambient auditory environment.

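In the ideal case, such selective cancellation amounts to adding a phase-inverted copy of only the identified component, as in this minimal sketch (assuming the component estimate is time-aligned with the ambient signal):

```python
import numpy as np

def selectively_cancel(ambient, component_estimate):
    # Add a signal of substantially equal amplitude and opposite phase
    # so only the targeted component (e.g., the traffic-noise estimate)
    # is suppressed; the rest of the ambient signal passes through,
    # unlike conventional all-or-nothing noise cancelling.
    return np.asarray(ambient, float) - np.asarray(component_estimate, float)
```
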
[0030] Sounds within the ambient auditory environment can be enhanced as generally indicated by user preference 140. Wearable device 130 may implement this type of feature in a similar manner as performed for current hearing aid technology. However, in contrast to current hearing aid technology, sound enhancement is applied selectively in response to particular user preference settings. Wearable device 130 may actively add or insert sounds to the user’s auditory field using one or more inward facing loudspeaker(s) based on a user preference as indicated at 144. This function may be implemented in a similar manner as used for headphones by playing back music or other audio streams (phone calls, recordings, spoken language digital assistant, etc.). Sound lowering or attenuation represented by user preference 146 involves lowering the volume or amplitude of an associated sound, such as people talking as represented at 108. This effect may be similar to the effect of protective (passive) ear plugs, but applied selectively to only certain sound sources in response to user preferences of user 120.

[0031] FIG. 2 is a simplified flowchart illustrating operation of a representative embodiment of a system or method for generating a user controllable auditory environment. The flowchart of FIG. 2 generally represents functions or logic that may be performed by a wearable device as illustrated and described with reference to FIG. 1. The functions or logic may be performed by hardware and/or software executed by a programmed microprocessor. Functions implemented at least partially by software may be stored in a computer program product comprising a non-transitory computer readable storage medium having stored data representing code or instructions executable by a computer or processor to perform the indicated function(s). The computer-readable storage medium or media may be any of a number of known physical devices which utilize electric, magnetic, and/or optical devices to temporarily or persistently store executable instructions and associated data or information. As will be appreciated by one of ordinary skill in the art, the diagrams may represent any one or more of a number of known software programming languages and processing strategies, such as event-driven, interrupt-driven, multi-tasking, multi-threading, and the like. As such, various features or functions illustrated may be performed in the sequence illustrated, in parallel, or in some cases omitted. Likewise, the order of processing is not necessarily required to achieve the features and advantages of various embodiments, but is provided for ease of illustration and description. Although not explicitly illustrated, one of ordinary skill in the art will recognize that one or more of the illustrated features or functions may be repeatedly performed.

[0032] Block 210 of FIG. 2 represents a representative default or power-on mode for one embodiment, with in-ear devices reproducing the ambient auditory environment without any modifications. Depending on the particular application and implementation of the wearable device, this may include active or powered reproduction of the ambient environment to the loudspeakers of the wearable device. For example, in embodiments having intra-aural earpieces with good sealing and passive attenuation, the default mode may receive various types of sounds using one or more ambient microphones, and generate corresponding signals for one or more speakers without significant signal or sound modifications. For embodiments without significant passive attenuation, active ambient auditory environment reproduction may not be needed.

[0033] The user sets auditory preferences as represented by block 220 via a user interface that may be implemented by the wearable device or by a second microprocessor-based device such as a smartphone, tablet computer, smartwatch, etc. Representative features of a representative user interface are illustrated and described with reference to FIGS. 5 and 6. As previously described, user preferences represented by block 220 may be associated with particular types, groups, or categories of sounds and may include one or more modifications to the associated sound, such as canc
