A Frequency-Domain Analysis of Head-Motion Prediction
Ronald Azuma§
Hughes Research Laboratories

Gary Bishop†
University of North Carolina at Chapel Hill

Abstract
The use of prediction to eliminate or reduce the effects of system delays in Head-Mounted Display systems has been the subject of several recent papers. A variety of methods have been proposed, but almost all the analysis has been empirical, making comparisons of results difficult and providing little direction to the designer of new systems. In this paper, we characterize the performance of two classes of head-motion predictors by analyzing them in the frequency domain. The first predictor is a polynomial extrapolation and the other is based on the Kalman filter. Our analysis shows that even with perfect, noise-free inputs, the error in predicted position grows rapidly with increasing prediction intervals and input signal frequencies. Given the spectra of the original head motion, this analysis estimates the spectra of the predicted motion, quantifying a predictor's performance on different systems and applications. Acceleration sensors are shown to be more useful to a predictor than velocity sensors. The methods described will enable designers to determine maximum acceptable system delay based on maximum tolerable error and the characteristics of user motions in the application.
CR Categories and Subject Descriptors: I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism -- virtual reality
Additional Key Words and Phrases: Augmented Reality, delay compensation, spectral analysis, HMD
1 Motivation

A basic problem with systems that use Head-Mounted Displays (HMDs), for either Virtual Environment or Augmented Reality applications, is the end-to-end system delay. This delay exists because the head tracker, scene generator, and communication links require time to perform their tasks, causing a lag between the measurement of head location and the display of the corresponding images inside the HMD. Therefore, those images are displayed later than they should be, making the virtual objects appear to "lag behind" the user's head movements. This hurts the desired illusion of immersing a user inside a stable, compelling, 3-D virtual environment.
One way to compensate for the delay is to predict future head locations. If the system can somehow determine the future head position and orientation for the time when the images will be displayed, it can use that future location to generate the graphic images, instead of using the measured head location. Perfect predictions would eliminate the effects of system delay. Several predictors have been tried; examples include [2] [4] [5] [10] [11] [13] [17] [18] [19] [20].

§ 3011 Malibu Canyon Road MS RL96; Malibu, CA 90265
(310) 317-5151
azuma@isl.hrl.hac.com
† CB 3175 Sitterson Hall; Chapel Hill, NC 27599-3175
(919) 962-1886
gb@cs.unc.edu

Permission to make digital/hard copy of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication and its date appear, and notice is given that copying is by permission of ACM, Inc. To copy otherwise, to republish, to post on servers, or to redistribute to lists, requires prior specific permission and/or a fee.
© 1995 ACM-0-89791-701-4/95/008...$3.50

Since prediction will not be perfect, evaluating how well predictors perform is important. Virtually all evaluation so far has been empirical, where the predictors were run in simulation or in real time to generate the error estimates. Therefore, no simple formulas exist to generate the values in the error tables or the curves in the error graphs. Without such formulas, it is difficult to tell how prediction errors are affected by changes in system parameters, such as the system delay or the input head motion. That makes it hard to compare one predictor against another or to evaluate how well a predictor will work in a different HMD system or with a different application.
2 Contribution

This paper begins to address this need by characterizing the theoretical behavior of two types of head-motion predictors. By analyzing them in the frequency domain, we derive formulas that express the characteristics of the predicted signal as a function of the system delay and the input motion. These can be used to compare predictors and explore their performance as system parameters change.

Frequency-domain analysis techniques are not new; Section 3 provides a quick introduction. The contribution here lies in the application of these techniques to this particular problem, the derivation of the formulas that characterize the behavior of the predictors, and the match between these frequency-domain results and equivalent time-domain results for collected motion data. To our knowledge, only one previous work characterizes head-motion prediction in the frequency domain [18]. This paper builds upon that work by deriving formulas for two other types of predictors and exploring how their performance changes as system parameters are modified.
The two types of predictors were selected to cover most of the head-motion predictors that have been tried. Many predictors are based upon state variables: the current position x, velocity v, and sometimes acceleration a. Solving the differential equations under the assumption of constant velocity or acceleration during the entire prediction interval results in polynomial expressions familiar from introductory mechanics classes. Let the system delay (or prediction interval) be p. Then:

x_predicted = x + vp + ½ap²   or   x_predicted = x + vp
The first type of predictor, covered in Section 4, uses the 2nd-order polynomial, under the assumption that position, velocity, and acceleration are perfectly measured.
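As a concrete sketch, the two extrapolations above can be written directly (plain Python; the state values and prediction interval below are illustrative, not from the paper):

```python
def predict_first_order(x, v, p):
    """Extrapolate position assuming constant velocity over the interval p."""
    return x + v * p

def predict_second_order(x, v, a, p):
    """Extrapolate position assuming constant acceleration over the interval p."""
    return x + v * p + 0.5 * a * p * p

# Illustrative 1-D state: position (m), velocity (m/s), acceleration (m/s^2)
x, v, a, p = 1.0, 0.5, 2.0, 0.1   # p = 100 ms prediction interval
print(round(predict_first_order(x, v, p), 6))      # 1.05
print(round(predict_second_order(x, v, a, p), 6))  # 1.06
```

The second-order form is the one analyzed in Section 4; the first-order form corresponds to predictors that lack an acceleration estimate.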
In practice, real systems directly measure only a subset of position, velocity, and acceleration, so many predictors combine the polynomial expression with a Kalman filter to estimate the non-measured states. We know of no existing system that directly measures all three states for orientation, and linear rate sensors to measure translational velocity do not exist. The Kalman filter is an algorithm that estimates the non-measured states from the other
`Nintendo Ex. 1007
measurements and smoothes the measured inputs. Section 5 derives formulas for three different combinations of Kalman filters and polynomial predictors. The combinations depend on which states are measured and which are estimated. These form the second class of predictor explored.
Section 6 uses the formulas from Sections 4 and 5 to provide three main results:

1) Quantifying error distribution and growth: The error in the predicted signal grows both with increasing frequency and prediction interval. For the 2nd-order polynomial, the rate of growth is roughly the square of the prediction interval and the frequency. This quantifies the "jitter" commonly seen in predicted outputs, which comes from the magnification of relatively high-frequency signals or noise. For the Kalman-based predictors, we compare the three combinations and identify the frequencies where one is more accurate than the others. Theoretically, the most accurate combination uses measured positions and accelerations.

2) Estimating spectrum of predicted signal: Multiplying the spectrum of an input signal by the magnitude ratio determined by the frequency-domain analysis provides a surprisingly good estimate of the spectrum of the predicted signal. By collecting motion spectra exhibited in a desired application, one can use this result to determine how a predictor will perform.

3) Estimating peak time-domain error in predicted signal: Multiplying the input signal spectrum by the error ratio function generates an estimate of the error signal spectrum. Adding the absolute value of all the magnitudes in the error spectrum generates a rough estimate of the peak time-domain error. A comparison of estimated and actual peak errors is provided. With this, a system designer can specify the maximum allowable time-domain error and then determine the system delays that will satisfy that requirement for a particular application.

This paper is a short version of chapter 6 of [1]. That chapter is included with the CD-ROM version of this paper.
3 Approach

The frequency-domain analysis draws upon linear systems theory, spectral analysis, and the Fourier and Z-transforms. This section provides a brief overview; for details please see [3] [9] [12] [14] [15] [16].
Functions and signals are often defined in the time domain. A function f(t) returns its value based upon the time t. However, it is possible to represent the same function in the frequency domain with a different set of basis functions. Converting representations is performed by a transform. For example, the Fourier transform changes the representation so the basis functions are sinusoids of various frequencies. When all the sinusoids are added together, they result in the original time-domain function. The Z-transform, which is valid for evenly-spaced discrete functions, uses basis functions of the form z^k, where k is an integer and z is a complex number. Specific examples of equivalent functions in the time, Fourier, and Z domains are listed in Table 1. Note that j is the square root of -1 and ω is the angular frequency. A function in the Fourier domain is indexed by ω, which means the coefficients representing the energy in the signal are distributed by frequency instead of by time, hence the name "frequency domain."
The analysis in this paper makes three specific assumptions. First, the predictor must be linear. A basic result of linear systems theory states that any sinusoidal input into a linear system results in an output of another sinusoid of the same frequency, but with different magnitude and phase. If the input is the sum of many different sinusoids (e.g., a Fourier-domain signal), then it is possible to compute the output by taking each sinusoid, changing its magnitude and phase, then summing the resulting output sinusoids, due to the property of superposition. This makes it possible to completely characterize linear systems by describing how the magnitude and phase of input sinusoids transform to the output as a function of frequency. This characterization is called a transfer function, and these are what we will derive in Sections 4 and 5.

                 Time domain       Fourier domain
Linearity        A g(t) + B h(t)   A G(ω) + B H(ω)
Time shift       g(t + a)          e^{jaω} G(ω)
Differentiation  (d/dt) g(t)       jω G(ω)

                 Time domain       Z domain
Linearity        A x(k) + B y(k)   A X(z) + B Y(z)
Time shift       x(k - a)          z^{-a} X(z)

Table 1: Time, Fourier, and Z domain equivalents
The second assumption is that the predictor separates 6-D head motion into six 1-D signals, each using a separate predictor. This makes the analysis simpler. The assumptions of linearity and separability are generally reasonable for the translation terms, but not necessarily for the orientation terms. For example, quaternions are neither separable nor linear [2]. To use this analysis, we must locally linearize orientation around the current orientation before each prediction, assuming the changes across the prediction interval are small. By using the small angle assumption, rotations can be characterized by linear yaw, pitch, and roll operations where the order of the operations is unimportant. Another approach for linearizing orientation is described in [8].

Finally, the third assumption is that the input signal is measured at evenly-spaced discrete intervals. This is not always true in practice, but this assumption does not really change the properties of the predictor as long as the sampling is done significantly faster than the Nyquist rate, and it makes the analysis easier.
What does the ideal predictor look like as a transfer function? Ideal prediction is nothing more than shifting the original signal in time. If the original signal is g(t) and the prediction interval is p, then the ideal predicted signal is h(t) = g(t + p). By the time-shift formula in Table 1, the magnitude is unchanged, so the magnitude ratio is one for all frequencies, but the phase difference is pω.
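This ideal behavior is easy to verify numerically. The sketch below (Python with NumPy; the sampling rate, shift, and test frequency are arbitrary choices) shifts a sampled sinusoid by p and confirms that, at the sinusoid's frequency bin, the magnitude ratio is one and the phase advances by pω:

```python
import numpy as np

T = 0.01                       # sampling period (s)
n = 1000                       # number of samples (10 s of data)
p = 5 * T                      # "prediction" = time shift of 5 samples
t = np.arange(n) * T
f0 = 2.0                       # a 2 Hz test sinusoid (integer cycles, no leakage)
g = np.sin(2 * np.pi * f0 * t)          # original signal
h = np.sin(2 * np.pi * f0 * (t + p))    # ideally predicted signal

G, H = np.fft.rfft(g), np.fft.rfft(h)
k = np.argmax(np.abs(G))                # bin holding the 2 Hz component
omega = 2 * np.pi * np.fft.rfftfreq(n, T)[k]

mag_ratio = np.abs(H[k]) / np.abs(G[k])
phase_diff = np.angle(H[k]) - np.angle(G[k])
print(round(mag_ratio, 6))                  # 1.0
print(round(abs(phase_diff - p * omega), 6))  # 0.0
```

Because the time shift leaves every magnitude untouched and advances every phase by pω, any deviation from this pair of values is, by definition, prediction error.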
What do input head motion signals look like in the frequency domain? The power spectrum shows the averaged squared magnitudes of the coefficients of the basis sinusoids at every frequency. The square root of those values is the average absolute value of the magnitudes. Figure 1 shows such a spectrum for one translation axis. This data came from recording a user who had never been inside an HMD before, while the user walked through a virtual museum of objects. Note that the vast majority of energy is below 2 Hz, which is typical of most other data we have and corroborates data taken by [19]. This is one way to quantify how quickly or slowly people move their heads. These spectra are application dependent, but note that the equations derived in Sections 4 and 5 are independent of the specific input spectrum. Faster head motions have spectra with more energy at higher frequencies.
Estimating the power spectrum of a time-domain signal is an inherently imperfect operation. Careful estimates require the use of frequency windows to reduce leakage [6]. Even with such steps, the errors can be significant. What this means is that the theoretical results in Section 6 that use estimated power spectra do not always perfectly match time-domain results from simulated or actual data. Please see [7] and [16] for details.
[Figure 1: Head motion spectrum (magnitude vs. frequency in Hz)]

[Figure 2: Polynomial predictor magnitude ratio]

[Figure 3: Polynomial predictor phase shift (actual vs. ideal)]
4 Polynomial-based predictor

This section derives a transfer function that characterizes the frequency-domain behavior of a 2nd-order polynomial predictor. This analysis assumes that the current position, velocity, and acceleration are perfectly known, with no noise or other measurement errors. Even with perfect measurements, we will see that this predictor does not match the ideal predictor at long prediction intervals or high frequencies.

Let g(t) be the original 1-D signal and h(t) be the predicted signal, given prediction interval p. Then the 2nd-order polynomial predictor defines h(t) as:

h(t) = g(t) + p g'(t) + ½p² g''(t)

Convert this into the Fourier domain. G(ω) is the Fourier equivalent of g(t). At any angular frequency ω, G(ω) is a single complex number, which we define as G(ω) = x + jy. Then:

H(ω) = (1 + jωp - ½(ωp)²)(x + jy)

The transfer function specifies how the magnitude and phase change from the input signal, G(ω), to the output signal, H(ω). These changes are in the form of a magnitude ratio and a phase difference.
4.1 Magnitude ratio

We know the magnitude of the input signal. We need to derive the magnitude of the output signal. The squared magnitude of H(ω), after some simplification, is:

||H(ω)||² = (x² + y²)(1 + ¼(ωp)⁴)

Therefore, the magnitude ratio is:

||H(ω)|| / ||G(ω)|| = √(1 + ¼(ωp)⁴)     (1)

Figure 2 graphs equation (1) for three prediction intervals: 50 ms, 100 ms, and 200 ms. The ideal ratio is one at all frequencies, because the ideal predictor is simply a time shift, but the actual predictor magnifies high-frequency components, even with perfect measurements of position, velocity, and acceleration.
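Equation (1) is simple enough to read off numerically. The sketch below (plain Python; the intervals are chosen to match the three curves in Figure 2) evaluates the magnitude ratio at a few frequencies:

```python
import math

def magnitude_ratio(f_hz, p):
    """Equation (1): magnitude ratio of the 2nd-order polynomial predictor."""
    wp = 2 * math.pi * f_hz * p
    return math.sqrt(1 + 0.25 * wp ** 4)

for p in (0.05, 0.10, 0.20):   # 50, 100, 200 ms prediction intervals
    print(p, [round(magnitude_ratio(f, p), 2) for f in (1, 2, 5)])
```

For example, with p = 200 ms a 5 Hz component is magnified almost 20 times, which matches the rapid growth visible in Figure 2.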
4.2 Phase difference

The phase α of the predicted signal H(ω) is:

α = tan⁻¹[ (ωpx + y - ½y(ωp)²) / (x - yωp - ½x(ωp)²) ]

Let θ be the phase of the original signal G(ω) = x + jy. Apply the following trigonometric identity:

tan(α - θ) = (tan α - tan θ) / (1 + tan α tan θ)

After simplification, the phase difference is:

α - θ = tan⁻¹[ ωp / (1 - ½(ωp)²) ]     (2)
Figure 3 graphs equation (2) for three prediction intervals: 50 ms, 100 ms, and 200 ms, with the phase differences plotted in degrees. Note that the ideal difference is a straight line, and that the ideal difference changes with different prediction intervals. The actual phase differences follow the ideal only at low frequencies, with the error getting bigger at large prediction intervals or large frequencies. The actual phase differences asymptotically approach 180 degrees.

Note the intimate relationship between p and ω in the formulas in Sections 4.1 and 4.2; they always occur together as ωp. This suggests a relationship between input signal bandwidth and the prediction interval. Halving the prediction interval means that the signal can double in frequency while maintaining the same prediction performance. That is, bandwidth times the prediction interval yields a constant performance level.
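This ωp coupling can be checked directly from equations (1) and (2): halving p while doubling the frequency leaves both the magnitude ratio and the phase error unchanged. A small sketch (plain Python; the particular frequency/interval pairs are arbitrary):

```python
import math

def mag_ratio(f_hz, p):
    wp = 2 * math.pi * f_hz * p
    return math.sqrt(1 + 0.25 * wp ** 4)      # equation (1)

def phase_diff(f_hz, p):
    wp = 2 * math.pi * f_hz * p
    # Equation (2); atan2 keeps the correct quadrant as the
    # difference asymptotically approaches 180 degrees.
    return math.atan2(wp, 1 - 0.5 * wp ** 2)

# 2 Hz at a 100 ms interval behaves exactly like 4 Hz at a 50 ms interval.
print(math.isclose(mag_ratio(2, 0.100), mag_ratio(4, 0.050)))    # True
print(math.isclose(phase_diff(2, 0.100), phase_diff(4, 0.050)))  # True
```

Both transfer-function quantities depend only on the product ωp, which is why the bandwidth-delay product sets the performance level.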
5 Kalman-based predictors

Real systems directly measure only a subset of x, v, and a, and those measurements are corrupted by noise. Therefore, many predictors use the Kalman filter to provide estimates of the states x, v, and a in the presence of noise. These estimated states are then given to the polynomial-based predictor to extrapolate future locations.

This section provides a high-level introduction on how the Kalman filter works, then it derives the Kalman predictor transfer matrix. This transfer matrix is the product of three other matrices, modeling the measurements, the predictor, and the Kalman filter itself. These matrices depend upon the type of filter and predictor being used. We derive the transfer matrix for three cases:

• Case 1: Measured position. Predictor based on x and v.
• Case 2: Measured position and velocity. Predictor based on x, v, and a.
• Case 3: Measured position and acceleration. Predictor based on x, v, and a.

Case 1 is typical of most predictors that have been tried, being solely based on the measurements from the head tracker. This
[Figure 4: Kalman filter high-level dataflow. Initialize state X and covariance P; then, for each set of sensor inputs, run the time update step (predictor) followed by the measurement update step (corrector); whenever a prediction is needed, extrapolate the predicted location from the current state X]
predictor does not use acceleration because it is difficult to get a good estimate of acceleration from position in real time. Numerical differentiation accentuates noise, so performing two differentiation steps is generally impractical. A few predictors use inertial sensors to aid prediction, such as [2] [5] [11]. These sensors are used in Case 2 and Case 3. Section 6 will compare these three cases against each other, using the transfer functions derived in this section.
Throughout this section, x is position, v is velocity, a is acceleration, p is the prediction interval, T is the period separating the evenly-spaced inputs, and k is an integer representing the current discrete iteration index.
5.1 The Discrete Kalman filter

The Kalman filter is an optimal linear estimator that minimizes the expected mean-square error in the estimated state variables, provided certain conditions are met. It requires a model of how the state variables change with time in the absence of inputs, and the inaccuracies in both the measurements and the model must be characterizable by white noise processes. In practice, these conditions are seldom met, but the Kalman filter is commonly used anyway because it tends to perform well even with violated assumptions and because it has an efficient recursive formulation, suitable for computer implementation. Efficiency is important because the filter must operate in real time to be of any use to the head-motion prediction problem. This section outlines the basic operation of the filter; for details please see [3] [9]. Since the inputs are assumed to arrive at discrete, evenly-spaced intervals, the type of filter used is the Discrete Kalman filter.
Figure 4 shows the high-level operation of the Kalman filter. The Kalman filter maintains two matrices, X and P. X is an N by 1 matrix that holds the state variables, like x, v, and a, where N is the number of state variables. P is an N by N covariance matrix that indicates how accurate the filter believes the state variables are. After initialization, the filter runs in a loop, updating X and P for each new set of sensor measurements. This update proceeds in two steps, similar in flavor to the predictor-corrector methods commonly used in numerical integrators. First, the time update step must estimate, or predict, the values of X and P at the time associated with the incoming sensor measurements. Then the measurement update step blends (or corrects) X and P based on the sensor measurements. Whenever a prediction is required, the polynomial extrapolation bases it on the x, v, and a from the current state X.
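A minimal sketch of that loop (Python with NumPy) may make the structure concrete. The model, noise covariances, and measurement sequence below are invented for illustration; they are not the tuned values used in this paper. The state is Case 1's X = [x, v] with measured position only:

```python
import numpy as np

T = 0.01                                  # sample period (s), illustrative
A = np.array([[1.0, T], [0.0, 1.0]])      # constant-velocity model
H = np.array([[1.0, 0.0]])                # we measure position only
Q = np.eye(2) * 1e-4                      # process noise (illustrative)
R = np.array([[1e-2]])                    # measurement noise (illustrative)

X = np.zeros((2, 1))                      # state: [position, velocity]
P = np.eye(2)                             # state covariance

def kalman_step(X, P, y):
    # Time update (predict): advance state and covariance by the model.
    X = A @ X
    P = A @ P @ A.T + Q
    # Measurement update (correct): blend in the new measurement y.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    X = X + K @ (np.array([[y]]) - H @ X)
    P = (np.eye(2) - K @ H) @ P
    return X, P

# Feed noiseless samples of a constant-velocity ramp x(t) = 2t.
for k in range(2000):
    X, P = kalman_step(X, P, 2.0 * k * T)

print(float(X[1, 0]))   # velocity estimate, converges toward 2.0
```

Fed noiseless samples of a constant-velocity ramp, the loop's velocity estimate converges toward the true value of 2 m/s, illustrating how the filter estimates a state that is never directly measured.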
5.2 Kalman-based predictor transfer matrix

The following discussion is terse due to space limitations; please read [1] for a more thorough explanation.

The goal is to derive a 1 by 1 transfer matrix O(z) relating input position G(z) to predicted position H(z). Figure 5 shows how this
[Figure 5: Kalman filter transfer function dataflow. Input position G(z) feeds the measurement generator M(z) (F by 1), producing measurements Y (F signals); these feed the Discrete Kalman filter C(z) (N by F), producing estimated states X (N signals); these feed the polynomial predictor D(z) (1 by N), producing the scalar predicted position H(z)]
is done by combining the Discrete Kalman filter with the polynomial predictor. O(z) is the product of three other transfer matrices:

H(z) = O(z) G(z), where O(z) = D(z) C(z) M(z)

O(z) is different for each of the three cases. The matrices needed to compute O(z) for each case are listed in Sections 5.3 to 5.5. Once computed, a basic result from control theory states that one can plot O(z)'s frequency response by substituting for z as follows [14]:

z = e^{jωT} = cos(ωT) + j sin(ωT)

Note that z is a complex number, so the matrix routines must be able to multiply and invert matrices with complex components. Now we describe how to derive the three component transfer matrices M(z), D(z), and C(z).
1) Measurement generator transfer matrix M(z): The transfer function developed in Section 4 does not apply here because the Kalman filter treats the estimated states and measurements as separate and distinct signals. Therefore, the predictor transfer function is now a 1 by N matrix that specifies how to combine the state variables to perform the polynomial-based prediction. If the predicted position is a function of more than one measurement, rather than just a measured position, the analysis becomes complicated. To simplify things, we force the measurements to be perfectly matched with each other so that everything can be characterized solely in terms of the input position. This is enforced by the measurement generator M(z), which generates v and a from x by applying the appropriate magnitude ratios and phase shifts to the input position. If an input position x sinusoid is defined as:

x = M sin(ωt + θ)

then the corresponding velocity and acceleration sinusoids are:

v = ωM cos(ωt + θ)
a = -ω²M sin(ωt + θ)

For example, a is derived from x simply by multiplying by -ω², as listed in the M(z) in Section 5.5.
2) Polynomial predictor transfer matrix D(z): This expresses the behavior of the polynomial-based predictor, as described in Section 4. For example, if the state is based on x, v, and a, then the predictor multiplies those by 1, p, and 0.5p² respectively, as listed in the D(z) in Section 5.4.
3) Discrete Kalman filter transfer matrix C(z): Deriving a transfer matrix that characterizes the frequency-domain behavior of the Discrete Kalman filter requires that the filter operate in steady-state mode. This will occur if the noise and model characteristics do not change with time. This is usually the case in the Kalman-based predictors that have been used for head-motion prediction. In our implementation, the filter converges to the steady-state
condition with just one or two seconds of input data, depending on the noise and model parameters and the initial P.

In the steady-state condition, P becomes a constant, so it is not necessary to keep updating it. This makes the equations for the time and measurement updates much simpler. The time update becomes:

X⁻(k+1) = A X(k)

and the measurement update becomes:

X(k+1) = X⁻(k+1) + K[Y(k+1) - H X⁻(k+1)]

where
• Y(k) is an F by 1 matrix holding F sensor measurements.
• A is an N by N matrix that specifies the model.
• H is an F by N matrix relating the measurements to the state variable X.
• K is the N by F Kalman gain matrix that controls the blending in the measurement update.
• X⁻(k+1) is the partially updated state variable.

Note that only the X(k) and Y(k) matrices change with time. Now combine the time and measurement update equations into one by solving for X(k+1):

X(k+1) = A X(k) + K[Y(k+1) - H A X(k)]
X(k+1) = [A - KHA] X(k) + K Y(k+1)

Convert this equation into the Z-domain and solve for X(z):

z X(z) = [A - KHA] X(z) + z K Y(z)
X(z) = [zI - A + KHA]⁻¹ z K Y(z)

where I is the N by N identity matrix.

Define C(z) as the N by F transfer matrix for the Discrete Kalman filter. This specifies the relationship between the filter's inputs (measurements Y) and its outputs (state X):

C(z) = [zI - A + KHA]⁻¹ z K

This equation shows how to compute C(z) from the A, K, and H matrices listed in Sections 5.3 to 5.5. The steady-state K matrices in those three sections depend on the noise parameters used to tune the Kalman filter. We adjusted those parameters to provide a small amount of lowpass filtering on the state variable corresponding to the last sensor input in the Y matrix. Then we ran each filter in simulation to determine the steady-state K matrices.
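Putting the pieces together, the sketch below (Python with NumPy) evaluates O(z) = D(z)C(z)M(z) for Case 1 at z = e^{jωT}, using the matrices and steady-state gain listed in Section 5.3; the sample period and prediction interval are illustrative choices, not values from the paper:

```python
import numpy as np

T = 0.01    # sample period in seconds (illustrative)
p = 0.05    # prediction interval in seconds (illustrative)

A = np.array([[1.0, T], [0.0, 1.0]])   # constant-velocity model
H = np.array([[1.0, 0.0]])             # measure position only
K = np.array([[0.568], [41.967]])      # steady-state gain (Section 5.3)
D = np.array([[1.0, p]])               # predictor based on x and v
M = np.array([[1.0]])                  # measurement generator

def O_of_z(omega):
    """Evaluate the scalar transfer function O(z) at z = exp(j*omega*T)."""
    z = np.exp(1j * omega * T)
    C = np.linalg.inv(z * np.eye(2) - A + K @ H @ A) @ (z * K)   # C(z)
    return (D @ C @ M)[0, 0]

# DC gain is 1: a constant input is predicted exactly.
print(round(abs(O_of_z(0.0)), 6))   # 1.0
for f in (1, 2, 5):                 # frequency response samples, in Hz
    o = O_of_z(2 * np.pi * f)
    print(f, round(abs(o), 3), round(np.angle(o), 3))
```

Comparing the resulting magnitude and phase against the ideal predictor's (ratio one, phase pω) gives exactly the error curves plotted in Section 6.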
5.3 Case 1: Measured position

N = 2, F = 1

X = [x]    Y = [x_measured]    H = [1 0]
    [v]

A = [1 T]    M(z) = [1]    D(z) = [1 p]    K = [ 0.568]
    [0 1]                                      [41.967]
5.4 Case 2: Measured position and velocity

N = 3, F = 2

X = [x]    Y = [x_measured]    H = [1 0 0]
    [v]        [v_measured]        [0 1 0]
    [a]

A = [1 T ½T²]    M(z) = [ 1]    D(z) = [1 p ½p²]
    [0 1  T ]           [jω]
    [0 0  1 ]

K = [ 0.0576   0.0032]
    [ 0.0034   0.568 ]
    [-0.0528  41.967 ]
5.5 Case 3: Measured position and acceleration

N = 3, F = 2

X = [x]    Y = [x_measured]    H = [1 0 0]
    [v]        [a_measured]        [0 0 1]
    [a]

A = [1 T ½T²]    M(z) = [  1]    D(z) = [1 p ½p²]
    [0 1  T ]           [-ω²]
    [0 0  1 ]

K = [0.0307  0.000016]
    [0.342   0.00345 ]
    [0.0467  0.568   ]
6 Results

This section takes the transfer functions derived in Sections 4 and 5 and uses them to determine three characteristics of the predictor: 1) the distribution of error in the predicted signal, 2) the spectrum of the predicted signal, and 3) the peak time-domain error. The frequency-domain results are checked against time-domain results, where appropriate.
6.1 Error distribution and growth

We can now plot the prediction error for both the polynomial predictor and the Kalman-based predictors.

1) Polynomial predictor: Figure 6 graphs the overall error behavior of the polynomial-based predictor, using the transfer functions derived in Section 4. The plot shows the errors at three different prediction intervals. The errors grow rapidly with increasing frequency or increasing prediction interval.

The overall error is a Root-Mean-Square (RMS) metric. A problem with showing the error of the predicted signal in the frequency domain is the fact that the transfer functions return two values, magnitude ratio and phase shift, rather than just one value. Both contribute to the error in the predicted signal. If the magnitude ratio is large, then that term dominates the error, but it is not wise to ignore phase at low magnitude ratios. An RMS error metric captures the contribution from both terms. Pick an angular frequency ω. Let M_r be the magnitude ratio at that frequency, θ be the difference between the transfer function's phase shift and the ideal predictor's phase shift, and T be the period of the frequency. Then define the RMS error at that frequency to be:

RMSerror(ω, θ, M_r) = √( (1/T) ∫₀ᵀ [M_r sin(ωt + θ) - sin(ωt)]² dt )
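Over one full period this integral has the closed form √((M_r² + 1)/2 - M_r cos θ), which the sketch below (Python with NumPy; the sample values of M_r and θ are arbitrary) checks against direct numerical integration:

```python
import numpy as np

def rms_error_numeric(M_r, theta, omega=2 * np.pi, n=200000):
    """Numerically evaluate the RMS error integral over one period T."""
    T = 2 * np.pi / omega
    t = np.linspace(0.0, T, n, endpoint=False)
    err = M_r * np.sin(omega * t + theta) - np.sin(omega * t)
    return np.sqrt(np.mean(err ** 2))

def rms_error_closed(M_r, theta):
    """Closed form of the same integral over a full period."""
    return np.sqrt((M_r ** 2 + 1) / 2 - M_r * np.cos(theta))

for M_r, theta in [(1.0, 0.0), (1.5, 0.3), (3.0, 1.0)]:
    print(round(rms_error_numeric(M_r, theta), 6),
          round(rms_error_closed(M_r, theta), 6))
```

The closed form makes the two error sources explicit: a perfect predictor (M_r = 1, θ = 0) gives zero error, a magnitude ratio above one raises the floor, and a phase error contributes through the cos θ term.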
[Figure 6: RMS error for polynomial predictor at three prediction intervals (error vs. frequency in Hz)]
`100
`
`RMSerror
`
`0.1 0.01
`
`0.1
`
`1
`
`10
`
`Frequency in Hz
`Figure 9: RMS error for Kalman predictors, 200 ms interval
`shows what the predictor generates when the input signal is the
`sum of the first three sines in the table. That predicted signal
`follows the original fairly closely. However, if we also include
`the 4th sinusoid, then the predicted signal becomes jittery. as
`shown by the “Prediction on 4 sines" curve. The last sine has a
`tiny magnitude, but it is a 60 H7. signal. One can think of it as a
`60 llz noise source. This example should make clear the need to
`avoid high—frequency input signals.
`2) K(r."ntan-brrsedp1'ed.fc.=‘m'.v_' We use the RMS error metric to
`compare the three Kalman cases and determine the frequency
`ranges where one is better than the others. These errors are co111—
`puted using the transfer matrices described in Section 5.
`Figure 8 graphs the RMS errors for the three cases at a 50 1115
`prediction interval. For frequencies under ~7 Hz. the inertial-
`based predictors Case 2 and Case 3 have lower errors than the
`non—inertial Case 1, and Case 3 is more accurate than Case 2 for
`frequencies under ~17 Hz. Figure 9 shows the errors for a 300 1115
`prediction interval. Both axes are plotted on a logarithmic scale.
`Now Case 2 and Case 3 l1ave less error only at frequencies under
`--2 llz, instead of? ll’/_ as in Figure 8.
`These graphs provide a quantitative measurement ofhow much
`the inertial sensors help head—motion prediction. At high frequen-
`cies, Case I has less error than the other two because Case I does
`not make use of acceleration. Case 2 and Case 3 use acceleration
`
`to achieve smaller errors at low frequencies at the cost of larger
`errors at high frequencies. This tradeoff results in lower overall
`error for the inertial—based predictors because the vast majority of
`head—motion energy is under 2 Hz with today's HMD systems. as
`shown in Figure l. The graphs also show that as the prediction
`interval increases, the range of frequencies where the prediction
`benefits from the use of inertial sensors decreases.
Case 3 is more accurate than Case 2 at low frequencies because Case 3 has better estimates of acceleration. Case 2 directly measures velocity but must estimate acceleration through a numerical differentiation step. This results in estimated accelerations that are delayed in time or noisy. In contrast, Case 3 directly measures acceleration and estimates velocity given both measured position and acceleration, which is a much easier task. Case 3 is able to get nearly perfect estimates of velocity and acceleration. Since Case 2 represents using velocity sensors and Case 3 represents using accelerometers, this suggests that in theory, acceleration sensors are more valuable than velocity sensors for the prediction problem.
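The penalty Case 2 pays for differentiation can be seen in a small numerical sketch (an illustration with assumed sample rate and noise levels, not the paper's Kalman filter): finite-differencing a noisy velocity signal amplifies the sensor noise by roughly a factor of 1/dt, whereas a direct accelerometer reading suffers only the raw sensor noise.

```python
import numpy as np

rng = np.random.default_rng(0)
dt = 0.01                               # assumed 100 Hz sample rate
t = np.arange(0.0, 2.0, dt)
w = 2 * np.pi * 2.0                     # head motion near 2 Hz

true_vel = np.cos(w * t)                # true velocity
true_acc = -w * np.sin(w * t)           # true acceleration

sigma = 0.01                            # identical sensor noise in both cases
meas_vel = true_vel + rng.normal(0.0, sigma, t.size)  # velocity sensor (Case 2)
meas_acc = true_acc + rng.normal(0.0, sigma, t.size)  # accelerometer (Case 3)

# Case 2 must recover acceleration by numerically differentiating velocity:
acc_from_vel = np.gradient(meas_vel, dt)

rmse_case2 = np.sqrt(np.mean((acc_from_vel - true_acc) ** 2))
rmse_case3 = np.sqrt(np.mean((meas_acc - true_acc) ** 2))
print(f"acceleration RMSE via differentiated velocity: {rmse_case2:.3f}")
print(f"acceleration RMSE via direct accelerometer:    {rmse_case3:.3f}")
```

Low-pass filtering the differentiated estimate reduces the noise but delays it in time, which is exactly the "delayed in time or noisy" tradeoff described above.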
When the individual predictors are combined into a full 6-D predictor, the errors still increase dramatically with increasing prediction intervals and input signal frequencies.
[Figure 7: Polynomial prediction on 3 and 4 sinusoids, 30 ms prediction interval. Each sinusoid has the form M sin(2πft + φ), with magnitude M, frequency f (in Hz), and phase φ (in radians). Horizontal axis: timestamp in seconds; vertical axis: magnitude. Curves shown: "Original," "Prediction on 3 sines," and "Prediction on 4 sines."]
The magnification of high-frequency components shown in Figure 6 appears as "jitter" to the user. Jitter makes the user's head location appear to "tremble" at a rapid rate. Because the magnification factor becomes large at high frequencies, even tiny amounts of noise at high frequencies can be a major problem.

We show this with a specific example of polynomial prediction on four sinusoids at a 30 ms prediction interval. Table 2 lists the four sinusoids. Figure 7 graphs these sinusoids and the predicted signals. The predicted signals are computed both by simulating the predictor in the time domain and by using the frequency-domain transfer functions to change the four sinusoids' magnitudes and phases; both approaches yield the same result. The "Original" curve is the sum of the sinusoids. The "Prediction on 3 sines"
[Figure 8: RMS error versus frequency (0-20 Hz) for Case 1, Case 2, and Case 3 at a 50 ms prediction interval.]
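The claim that time-domain simulation and the frequency-domain transfer functions yield identical predictions can be verified on a single sinusoid. The sketch below assumes hypothetical sinusoid parameters (the Table 2 values are not reproduced here) and an ideal second-order extrapolator supplied with the sinusoid's exact derivatives:

```python
import numpy as np

M, f, phi = 1.0, 2.0, 0.3       # hypothetical magnitude, frequency (Hz), phase
delta = 0.03                    # 30 ms prediction interval
w = 2 * np.pi * f
t = np.linspace(0.0, 0.4, 400)  # timestamps in seconds, as in Figure 7

# Time domain: extrapolate using the sinusoid's exact derivatives.
x   = M * np.sin(w * t + phi)
xd  = M * w * np.cos(w * t + phi)           # first derivative
xdd = -M * w**2 * np.sin(w * t + phi)       # second derivative
pred_time = x + xd * delta + 0.5 * xdd * delta**2

# Frequency domain: scale the complex amplitude by the transfer function
# H(jw) = 1 + jw*delta + (jw*delta)^2 / 2, then take the imaginary part.
H = 1 + 1j * w * delta + (1j * w * delta) ** 2 / 2
pred_freq = np.imag(H * M * np.exp(1j * (w * t + phi)))

print("max difference:", np.max(np.abs(pred_time - pred_freq)))
```

The two curves agree to machine precision, since multiplying the complex amplitude by H changes the sinusoid's magnitude and phase in exactly the way the time-domain extrapolation does; by linearity the same holds for a sum of sinusoids.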