Ali Habibi (S'64-M'70) received the B.S. degree from West Virginia University, Morgantown, the M.S. degree from Cornell University, Ithaca, N.Y., and the Ph.D. degree from Purdue University, Lafayette, Ind., in 1965, 1967, and 1970, respectively, all in electrical engineering. While at Cornell University he was a Research Assistant in the System Theory Group, and from 1967 to 1970 he was a Research Instructor at Purdue University. From 1970 to 1971 he was employed by Bell Laboratories, Holmdel, N.J., working in the Telephone and Picturephone Planning Center. Since 1971 he has been with the University of Southern California, Los Angeles, Calif., as an Assistant Professor of Electrical Engineering.

In 1972 Dr. Habibi received the Leonard Abraham Prize Paper Award from the IEEE Communications Society for his contribution to the technical literature in communications. He is a member of Eta Kappa Nu, Tau Beta Pi, Sigma Tau Sigma, and Sigma Xi.
`
Adaptive Maximum-Likelihood Receiver for Carrier-Modulated Data-Transmission Systems

GOTTFRIED UNGERBOECK
`
Abstract—A new look is taken at maximum-likelihood sequence estimation in the presence of intersymbol interference. A uniform receiver structure for linear carrier-modulated data-transmission systems is derived which for decision making uses a modified version of the Viterbi algorithm. The algorithm operates directly on the output signal of a complex matched filter and, in contrast to the original algorithm, requires no squaring operations; only multiplications by discrete pulse-amplitude values are needed. Decoding of redundantly coded sequences is included in the consideration. The reason and limits for the superior error performance of the receiver over a conventional receiver employing zero-forcing equalization and symbol-by-symbol decision making are explained. An adjustment algorithm for jointly approximating the matched filter by a transversal filter, estimating intersymbol interference present at the transversal filter output, and controlling the demodulating carrier phase and the sample timing, is presented.
`
Paper approved by the Associate Editor for Communication Theory of the IEEE Communications Society for publication without oral presentation. Manuscript received May 18, 1973; revised December 21, 1973. The author is with the IBM Zurich Research Laboratory, Rueschlikon, Switzerland.

I. INTRODUCTION

THE design of an optimum receiver for synchronous data-transmission systems that employ linear carrier-modulation techniques [1] is discussed. All forms of digital amplitude modulation (AM), phase modulation (PM), and combinations thereof, are covered in a unified manner. Without further mention in this paper, the results are also applicable to baseband transmission.

In synchronous data-transmission systems intersymbol interference (ISI) and noise, along with errors in the demodulating carrier phase and the sample timing, are the primary impediments to reliable data reception [1]. The goal of this paper is to present a receiver structure that deals with all these effects in an optimum way and in an adaptive manner. In deriving the receiver the concept of maximum-likelihood (ML) sequence estimation [2], [3] will be applied. This assures that the receiver is optimum in the sense of sequence-error probability, provided that data sequences have equal a priori probability.

The modulation schemes considered in this paper can be viewed in the framework of digital quadrature amplitude modulation (QAM) [1]. They can therefore be represented by an equivalent linear baseband model that differs from a real baseband system only in the fact that signals and channel responses are complex functions [2], [4], [5].

Conventional receivers for synchronous data signals
comprise a linear receiver filter or equalizer, a symbol-rate sampler, and a quantizer for establishing symbol-by-symbol decisions. A decoder, possibly with error-detection and/or error-correction capability, may follow. The purpose of the receiver filter is to eliminate intersymbol interference while maintaining a high signal-to-noise ratio (SNR). It has been observed by several authors [1], [6]-[15] that for various performance criteria the optimum linear receiver filter can be factored into a matched filter (MF) and a transversal filter with tap spacings equal to the symbol interval. The MF establishes an optimum SNR irrespective of the residual ISI at its output. The transversal filter then eliminates or at least reduces intersymbol interference at the expense of diminishing the SNR.

If symbols of a data sequence are correlated by some coding law, a better way than making symbol-by-symbol decisions is to base decisions on the entire sequence received. The same argument holds true if data sequences are disturbed by ISI. The correlation introduced by ISI between successive sample values is of a discrete nature, as in the case of coding, in the sense that a data symbol can be disturbed by adjacent data symbols only in a finite number of ways. ISI can even be viewed as an unintended form of partial-response coding [1]. Receivers that perform sequence decisions, or in some other way exploit the discreteness of ISI, exhibit highly nonlinear structures. Decision-feedback equalization [10], [11] represents the earliest step in this direction. Later, several other nonlinear receiver structures were described [12]-[17]. In view of the present state of the art, many of these approaches can be regarded as attempts to avoid, by nonlinear processing methods, the noise enhancement which would otherwise occur if ISI were eliminated by linear filtering.

A new nonlinear receiver structure was introduced by Forney [18]. The receiver consists of a "whitened" MF (i.e., an MF followed by a transversal filter that whitens the noise), a symbol-rate sampler, and a recursive nonlinear processor that employs the Viterbi algorithm in order to perform ML sequence decisions. The Viterbi algorithm was originally invented for decoding of convolutional codes [19]. Soon thereafter it was shown that the algorithm yields ML sequence decisions and that it can be regarded as a specific form of dynamic programming [20]-[22]. Its applicability to receivers for channels with intersymbol interference and correlative level coding was noticed by Omura and Kobayashi [24]-[26]. A survey on the Viterbi algorithm is given by Forney [27]. Very recently, adaptive versions of Forney's receiver have been proposed [28], [29], and its combination with decision-feedback equalization has been suggested [30].

In Forney's [18] receiver, whitening of the noise is essential because the Viterbi algorithm requires that noise components of successive samples be statistically independent. In this paper a receiver similar to that of Forney will be described. The receiver employs a modified Viterbi algorithm that operates directly on the MF output without whitening the noise and subtracting ISI terms. In a different form the algorithm has already been used for estimating binary data sequences disturbed by ISI [17]. Here the algorithm is restated and extended so that it performs ML decisions for complex-valued multilevel sequences. Apparently, Mackechnie [31] has independently found the same algorithm.

In Section II we define the modulation scheme. The general structure of the maximum-likelihood receiver (MLR) is outlined in Section III. In Section IV we derive the modified Viterbi algorithm. The error performance of the MLR is discussed in Section V and compared with the error performance of the conventional receiver. Finally, a fully adaptive version of the MLR is presented in Section VI.

II. MODULATION SCHEME

We consider a synchronous linear carrier-modulated data-transmission system with coherent demodulation of the general form shown in Fig. 1. By combining in-phase and quadrature components into complex-valued signals (indicated by heavy lines in Fig. 1), all linear carrier-modulation schemes can be treated in a concise and uniform manner [2], [4], [5]. Prior to modulation with carrier frequency ω_c, the transmitter establishes a pulse-amplitude-modulated (PAM) signal of the form

    x(t) = \sum_n a_n f(t - nT)    (1)

where the sequence {a_n} represents the data symbols, T is the symbol spacing, and f(t) denotes the transmitted baseband signal element. Generally, {a_n} and f(t) may be complex (but usually only one of them is; see Table I).¹ The data symbols are selected from a finite alphabet and may possibly succeed one another only in accordance with some redundant coding rule.

Assuming a linear dispersive transmission medium with impulse response g_c(t) and additive noise w_c(t), the receiver will observe the real signal

    y_c(t) = x(t) * \mathrm{Re}\{ g_c(t) \exp(j \omega_c t) \} + w_c(t)    (2)

where * denotes convolution. One side of the spectrum of y_c(t) is redundant and can therefore be eliminated without loss of information; the remaining part must be transposed back into the baseband. In Fig. 1 we adhere to the conventional approach of demodulating by transposing first and then eliminating components around twice the carrier frequency. The demodulated signal thus becomes

    y(t) = \sum_n a_n h(t - nT) + w(t)    (3)

where

    h(t) = [ g_c(t) \exp(-j \omega_c t - j \varphi_c) ] * f(t) = g(t) * f(t)    (4)

¹ A more general class of PAM signals is conceivable where f(t) depends on a_n.
`

`

`
`
`
`
`
[Fig. 1. General linear carrier-modulated data-transmission system: transmitter, transmission medium with additive noise w_c(t), and receiver. Thin lines denote real signals, heavy lines complex signals.]
`
`
`
`
`
TABLE I

Modulation scheme | {a_n}              | f(t)
DSB-AM            | real               | real
SSB-, VSB-AM      | real               | complex*
PM                | complex, |a_n| = 1 | real
AM-PM             | complex            | real

* SSB: f(t) = f_1(t) + j H{f_1(t)}, where H is the Hilbert transform [1].
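As a small illustrative aside (not taken from the paper; the alphabet sizes and spacings below are arbitrary assumptions), the symbol sets {a_n} of the schemes listed in Table I could be generated as follows:

    import numpy as np

    def dsb_am_alphabet(levels=4):
        # DSB-, VSB-, SSB-AM: real pulse amplitudes, e.g. {-3, -1, +1, +3}
        return np.arange(-(levels - 1), levels, 2).astype(complex)

    def pm_alphabet(phases=8):
        # PM: complex symbols of unit magnitude, |a_n| = 1
        return np.exp(2j * np.pi * np.arange(phases) / phases)

    def am_pm_alphabet(levels=4):
        # AM-PM (QAM): complex symbols on a square grid
        re = np.arange(-(levels - 1), levels, 2)
        return (re[:, None] + 1j * re[None, :]).ravel()

    print(dsb_am_alphabet(), pm_alphabet(4), am_pm_alphabet(2), sep="\n")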
`
and

    w(t) = \sqrt{2}\, w_c(t) \exp(-j \omega_c t - j \varphi_c).    (5)

In (5) the effect of low-pass filtering the transposed noise is neglected since it affects only noise components outside the signal bandwidth of interest. Our channel model does not include frequency offset and phase jitter. It is understood that the demodulating carrier phase φ_c accounts for these effects.
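A minimal numerical sketch of the equivalent baseband model (1), (3), (5) may help fix ideas. The pulse shape, symbol alphabet, oversampling factor, and noise level below are illustrative assumptions, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    os_factor = 8                      # samples per symbol interval T (assumed)
    num_symbols = 64
    t = np.arange(-4 * os_factor, 4 * os_factor + 1) / os_factor

    # assumed signal element h(t): windowed sinc with a small constant phase rotation, cf. (4)
    h = np.sinc(t) * np.hanning(t.size) * np.exp(-1j * 0.2)

    # data symbols a_n, here 4-level real amplitudes (DSB-AM row of Table I)
    a = rng.choice([-3.0, -1.0, 1.0, 3.0], size=num_symbols).astype(complex)

    # y(t) = sum_n a_n h(t - nT) + w(t), eq. (3), built on a dense time grid
    y = np.zeros(num_symbols * os_factor + t.size, dtype=complex)
    for n, a_n in enumerate(a):
        y[n * os_factor : n * os_factor + t.size] += a_n * h

    # complex baseband noise w(t), cf. (5); white Gaussian with an arbitrary level (assumption)
    noise_level = 0.05
    y += noise_level * (rng.standard_normal(y.size) + 1j * rng.standard_normal(y.size))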
`
III. STRUCTURE OF THE MAXIMUM-LIKELIHOOD RECEIVER

The objective of the receiver is to estimate {a_n} from a given signal y(t). Let the receiver observe y(t) within a time interval I which is supposed to be long enough that the precise conditions at the boundaries of I are insignificant for the total observation. Let {α_n} be a hypothetical sequence of pulse amplitudes transmitted during I. The MLR by its definition [2], [3] determines as the best estimate of {a_n} the sequence {α_n} = {â_n} that maximizes the likelihood function p[y(t), t ∈ I | {α_n}].

In the following paragraphs the shape of the signal element h(t) and the exact timing of the received signal elements are assumed to be known. The noise of the transmission medium is supposed to be stationary Gaussian noise with zero mean and autocorrelation function W_c(τ). From (5) the autocorrelation function of w(t) is obtained as

    W(\tau) = E[ \bar{w}(t) w(t + \tau) ] = 2 W_c(\tau) \exp(-j \omega_c \tau).    (6)

For example, if w_c(t) is white Gaussian noise (WGN) with double-sided spectral density N_0, then W_c(τ) = N_0 δ(τ) and W(τ) = 2 N_0 δ(τ). In view of this important case it is appropriate to introduce

    K(\tau) = W(\tau) / 2 N_0 = \bar{K}(-\tau).    (7)
`
If {α_n} were the actual sequence of pulse amplitudes transmitted during I, then

    w(t \mid \{\alpha_n\}) = y(t) - \sum_{nT \in I} \alpha_n h(t - nT),    t \in I    (8)

must be the realization of the noise signal w(t). Hence, owing to the Gaussian-noise assumption, the likelihood function becomes (apart from a constant of proportionality) [32]

    p[y(t), t \in I \mid \{\alpha_n\}] = p[w(t \mid \{\alpha_n\})]
        \propto \exp\left\{ -\frac{1}{4 N_0} \iint_I \bar{w}(t_1 \mid \{\alpha_n\}) K^{-1}(t_1 - t_2) \, w(t_2 \mid \{\alpha_n\}) \, dt_1 \, dt_2 \right\}    (9)

where K^{-1}(τ) is the inverse of K(τ):

    K(\tau) * K^{-1}(\tau) = \delta(\tau).    (10)

The correctness of (9) for the complex-signal case is proven in Appendix I. Substituting (8) into (9) and considering only terms that depend on {α_n} yields the likelihood metric

    J_I(\{\alpha_n\}) = \sum_{nT \in I} 2 \, \mathrm{Re}(\bar{\alpha}_n z_n) - \sum_{iT \in I} \sum_{kT \in I} \bar{\alpha}_i s_{i-k} \alpha_k    (11)

with

    z_n = \iint_I \bar{h}(t_1 - nT) K^{-1}(t_1 - t_2) \, y(t_2) \, dt_1 \, dt_2    (12)

    s_l = \iint_I \bar{h}(t_1 - iT) K^{-1}(t_1 - t_2) \, h(t_2 - kT) \, dt_1 \, dt_2 = \bar{s}_{-l},    l = k - i.    (13)
The quantities z_n and s_l can be interpreted as sample values taken at the output of a complex MF with impulse-response function

    g_{MF}(t) = \bar{h}(-t) * K^{-1}(t).    (14)

The derivation presented is mathematically weak in that it assumes K^{-1}(t) exists. This is not the case if the spectral power density of the noise becomes zero somewhere along the frequency axis. The difficulty can be avoided by defining z_n and s_l in terms of the reproducing-kernel Hilbert-space (RKHS) approach [33], [34]. Here it is sufficient to consider the frequency-domain equivalent of (14) given by

    G_{MF}(f) = \bar{H}(f) / K(f)    (15)

where G_{MF}(f), H(f), and K(f) are the Fourier transforms of g_{MF}(t), h(t), and K(t), respectively. It follows from (15) that g_{MF}(t) exists if the spectral power density
`

`

`
of the noise does not vanish within the frequency band where signal energy is received. Only in this case do we have a truly probabilistic (nonsingular) receiver problem. It can easily be shown that

    z_n = g_{MF}(t) * y(t) \big|_{t=nT} = \sum_i a_i s_{n-i} + r_n    (16)

    s_l = g_{MF}(t) * h(t) \big|_{t=lT} = \bar{s}_{-l}    (17)

and that the covariance of the noise samples r_n reads

    R_l = E(\bar{r}_n r_{n+l}) = 2 N_0 s_l.    (18)

Since the noise of the transmission medium does not exhibit distinct properties relative to the demodulating carrier phase, the following relations must hold:

    E[\mathrm{Re}(r_n)\,\mathrm{Re}(r_{n+l})] = E[\mathrm{Im}(r_n)\,\mathrm{Im}(r_{n+l})] = N_0 \,\mathrm{Re}(s_l)    (19)

    E[\mathrm{Re}(r_n)\,\mathrm{Im}(r_{n+l})] = -E[\mathrm{Im}(r_n)\,\mathrm{Re}(r_{n+l})] = N_0 \,\mathrm{Im}(s_l).    (20)

The similarity of s_l and R_l expressed by (18) implies that s_0 ≥ |s_l| and that the Fourier transform of the sampled signal element {s_l} is a real nonnegative function

    S^*(f) = \sum_l s_l \exp(-j 2 \pi f l T) \ge 0    (21)

with period 1/T. Clearly, the MF performs a complete phase equalization, but does not necessarily eliminate ISI (ISI: s_l ≠ 0 for l ≠ 0). The main effect of the MF is that it maximizes the SNR, which we define as

    (S/N)_{MF} = \frac{\text{instantaneous peak power of a single signal element}}{\text{average power of the real part of the noise}}
               = s_0^2 / E\{\mathrm{Re}(r_0)^2\} = s_0 / N_0.    (22)
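The quantities in (16)-(22) are straightforward to compute numerically. The following sketch assumes white noise, so that g_MF(t) reduces to h̄(−t); the pulse shape, the ISI span L, the binary symbols, and all parameter values are assumptions chosen only for illustration (the noise samples are also generated white here, i.e., the correlation (18) is ignored for brevity).

    import numpy as np

    rng = np.random.default_rng(1)
    os_factor, L = 8, 2                                  # samples per symbol, assumed ISI span
    n = np.arange(-4 * os_factor, 4 * os_factor + 1)
    h = np.exp(-(n / (1.5 * os_factor)) ** 2) * np.exp(1j * 0.4)   # assumed signal element h(t)
    g_mf = np.conj(h[::-1])                              # white noise: g_MF(t) = conj(h(-t))

    # sampled signal element s_l = (g_MF * h)(lT), eq. (17); satisfies s_l = conj(s_{-l})
    rh = np.convolve(g_mf, h)
    center = rh.size // 2
    s = rh[center - L * os_factor : center + L * os_factor + 1 : os_factor]   # s_{-L} ... s_L
    s0 = s[L].real

    # MF output samples z_n = sum_i a_i s_{n-i} + r_n, eq. (16); binary symbols assumed
    a = rng.choice([-1.0, 1.0], size=200).astype(complex)
    N0 = 0.1
    r = np.sqrt(N0 * s0) * (rng.standard_normal(a.size) + 1j * rng.standard_normal(a.size))
    z = np.convolve(a, s, mode="same") + r

    print("s_l:", np.round(s, 3), " (S/N)_MF = s0/N0 =", s0 / N0)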
`
The part of the receiver which in Fig. 1 was left open can now be specified, as indicated in Fig. 2. From (16) it comprises an MF and a symbol-rate sampling device sampling at times nT. There follows a processor, called the maximum-likelihood sequence estimator (MLSE), that determines as the most likely sequence transmitted the sequence {α_n} = {â_n} that maximizes the likelihood function given by (11), or equivalently, that assigns the maximum value to the metric

    J_I(\{\alpha_n\}) = \sum_{nT \in I} 2 \, \mathrm{Re}(\bar{\alpha}_n z_n) - \sum_{iT \in I} \sum_{kT \in I} \bar{\alpha}_i s_{i-k} \alpha_k.    (23)

[Fig. 2. MLR structure.]

The values of s_l are assumed to be known. The sequence {z_n} contains all relevant information available about {a_n} and hence forms a so-called set of sufficient statistics [2], [3]. The main difficulty in finding {â_n} lies in the fact that {â_n} must only be sought among discrete sequences {α_n} which comply with the coding rule. The exact solution to this discrete maximization problem is presented in Section IV.

Solving the problem approximately by first determining the nondiscrete sequence {α_n} = {ă_n} that maximizes (23) and then quantizing the elements of {ă_n} in independent symbol-by-symbol fashion leads to the optimum conventional receiver [8]. Applying the familiar calculus of variations to (23), one finds that {ă_n} is obtained from {z_n} by a linear transversal filter which, having the transfer function 1/S*(f), eliminates ISI. The series arrangement of the MF and the transversal filter is known as the optimum linear equalizer, which for zero ISI yields maximum SNR [6], [8]. Calculating the SNR at the transversal-filter output in a manner equivalent to (22) gives

    S/N = \left[ N_0 T \int_{-1/2T}^{1/2T} \frac{df}{S^*(f)} \right]^{-1}.    (24)

This reflects the obvious fact that, since the MF provides the absolutely largest SNR, elimination of ISI by a subsequent filter must diminish the SNR. Equation (24) indicates, however, that a significant loss will occur only if somewhere along the frequency axis S*(f) dips considerably below the average value.

For systems that transmit only real pulse amplitudes, i.e., double-sideband amplitude modulation (DSB-AM), vestigial-sideband amplitude modulation (VSB-AM), and single-sideband amplitude modulation (SSB-AM), it follows from (23) that only the real output of the MF is relevant. In those cases S*(f) should be replaced by ½[S*(−f) + S*(f)] without further mention throughout the paper.
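For later comparison with the MLSE of Section IV, the conventional receiver just described can be sketched as a transversal filter with transfer function 1/S*(f) acting on the MF samples {z_n}. The sampled signal element below is an assumed example, and the FFT size and tap-windowing choice are arbitrary.

    import numpy as np

    def zero_forcing_taps(s, L, num_taps=31):
        """Taps of the 1/S*(f) transversal filter; s = [s_{-L}, ..., s_L], eq. (21) assumed > 0."""
        nfft = 512
        Sf = np.fft.fft(np.roll(np.pad(s, (0, nfft - s.size)), -L))   # S*(f) with s_0 at lag 0
        c = np.fft.ifft(1.0 / Sf)                                     # inverse-filter impulse response
        # keep a symmetric window of taps around lag 0
        return np.concatenate([c[-(num_taps // 2):], c[:num_taps // 2 + 1]])

    # assumed sampled signal element with mild ISI (s_l = conj(s_{-l}))
    s = np.array([0.1 + 0.05j, 0.3 + 0.1j, 1.0, 0.3 - 0.1j, 0.1 - 0.05j])
    taps = zero_forcing_taps(s, L=2)
    # the nondiscrete estimates are obtained by convolving {z_n} with these taps;
    # independent symbol-by-symbol quantizing then gives the conventional decisions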
`
IV. MAXIMUM-LIKELIHOOD SEQUENCE ESTIMATION

In this section the exact solution to the discrete maximization problem of (23) is presented. The MLSE algorithm that will be derived determines the most likely sequence {â_n} among sequences {α_n} that satisfy the coding rule. Clearly, the straightforward approach of computing J_I({α_n}) for all sequences allowed, and selecting the sequence that yields the maximum value, is impracticable in view of the length and number of possible messages. Instead, by applying the principles of dynamic programming [22], we shall be able to conceive a nonlinear recursive algorithm that performs the same selection with greatly reduced computational effort. The MLSE algorithm thus obtained represents a modified version of the well-known Viterbi algorithm [18]-[21], [23]-[30].

The algorithm is obtained, observing s_l = \bar{s}_{-l}, by first realizing from (23) that J_I({α_n}) can iteratively be computed by the recursive relation

    J_n(\ldots, \alpha_{n-1}, \alpha_n) = J_{n-1}(\ldots, \alpha_{n-1}) + \mathrm{Re}\left[ \bar{\alpha}_n \left( 2 z_n - s_0 \alpha_n - 2 \sum_{i \le n-1} s_{n-i}\, \alpha_i \right) \right].    (25)

Conditions concerning the boundaries of I are not needed since once we have a recursive relationship, the length of I becomes unimportant. We now assume that at the MF output ISI from a particular signal element is limited to L preceding and L following sampling instants:

    s_l = 0,    |l| > L.    (26)

Changing indices we obtain from (25) and (26)

    J_n(\ldots, \alpha_{n-1}, \alpha_n) = J_{n-1}(\ldots, \alpha_{n-1}) + \mathrm{Re}\left[ \bar{\alpha}_n \left( 2 z_n - s_0 \alpha_n - 2 \sum_{l=1}^{L} s_l\, \alpha_{n-l} \right) \right].    (27)
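The increment added in (27) is the branch metric of the modified algorithm; note that the received sample z_n enters only linearly. A minimal sketch of this single step (the numerical values of s_l, z_n, and the hypothesized amplitudes are placeholders):

    import numpy as np

    def branch_increment(z_n, alpha_n, past_alphas, s, L):
        """Re[ conj(alpha_n) * (2 z_n - s_0 alpha_n - 2 sum_{l=1..L} s_l alpha_{n-l}) ], eq. (27).

        past_alphas[0] is alpha_{n-1}, past_alphas[1] is alpha_{n-2}, ...; s[l] is s_l, l = 0..L.
        """
        isi = sum(s[l] * past_alphas[l - 1] for l in range(1, L + 1))
        return np.real(np.conj(alpha_n) * (2 * z_n - s[0] * alpha_n - 2 * isi))

    # illustrative numbers (assumptions)
    s = [1.0, 0.3 - 0.1j, 0.1 + 0.05j]        # s_0, s_1, s_2
    J = 0.0
    J += branch_increment(z_n=0.8 + 0.2j, alpha_n=1.0, past_alphas=[-1.0, 1.0], s=s, L=2)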
We recall that sequences may be coded. For most transmission codes a state representation is appropriate. Let ρ_j be the state of the coder after α_j has been transmitted. The coding state ρ_j determines which sequences can be further transmitted. Given ρ_j and an allowable sequence α_{j+1}, α_{j+2}, ..., α_{j+k}, the state ρ_{j+k} is uniquely determined as

    \rho_j : \alpha_{j+1}, \alpha_{j+2}, \ldots, \alpha_{j+k} \rightarrow \rho_{j+k}.    (28)

The sequence of states {ρ_j} is Markovian, in the sense that

    \Pr(\rho_j \mid \rho_{j-1}, \rho_{j-2}, \ldots) = \Pr(\rho_j \mid \rho_{j-1}).
Let us now consider the metric

    J_n(\sigma_n) = J_n(\rho_{n-L} : \alpha_{n-L+1}, \ldots, \alpha_n) = \max \{ J_n(\ldots, \alpha_{n-L}, \alpha_{n-L+1}, \ldots, \alpha_n) \}    (29)

where the maximum is taken over all allowable sequences {..., α_{n−L−1}, α_{n−L}} that put the coder into the state ρ_{n−L}. In accordance with the VA literature, J_n is called the survivor metric. There exist as many survivor metrics as there are survivor states

    \sigma_n \triangleq \rho_{n-L} : \alpha_{n-L+1}, \ldots, \alpha_n.    (30)

Clearly, the succession of states {σ_n} is again Markovian. Associated with each σ_n is a unique path history, namely, the sequence {..., α_{n−L−1}, α_{n−L}} which in (29) yields the maximum. With respect to σ_n this sequence is credited ML among all other sequences. It is not difficult to see that the further one looks back from time n − L, the less will a path history depend on the specific σ_n to which it belongs. One can therefore expect that all σ_n will have a common path history up to some time n − L − m; m being a nonnegative random variable. Obviously, the common portion of the path histories concurs with the most likely sequence {â_n} for which we are looking.
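For an uncoded system the survivor states (30) reduce to the L most recent hypothesized pulse amplitudes, so there are |A|^L of them for a symbol alphabet A. A short sketch of this enumeration (the alphabet and L are assumed):

    from itertools import product

    alphabet = [-3, -1, 1, 3]        # assumed 4-level amplitudes
    L = 2                            # assumed ISI constraint length at the MF output
    survivor_states = list(product(alphabet, repeat=L))   # tuples (alpha_{n-1}, alpha_n)
    print(len(survivor_states), "survivor states")        # 4**2 = 16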
The final step in deriving the MLSE algorithm is to apply the maximum operation defined by (29) to (27). Introducing the notation of a survivor metric also on the right-hand side, we obtain

    J_n(\sigma_n) = 2 \, \mathrm{Re}(\bar{\alpha}_n z_n) + \max_{\{\sigma_{n-1}\} \rightarrow \sigma_n} \{ J_{n-1}(\sigma_{n-1}) - F(\sigma_{n-1}, \sigma_n) \}    (31)

where the maximum is taken over all states {σ_{n−1}} that have σ_n as a possible successor state, and

    F(\sigma_{n-1}, \sigma_n) = \bar{\alpha}_n s_0 \alpha_n + 2 \, \mathrm{Re}\left( \bar{\alpha}_n \sum_{l=1}^{L} s_l \, \alpha_{n-l} \right).    (32)

Verifying (31), the reader will observe that L is just the minimum number of pulse amplitudes that must be associated with σ_n. Thus, L takes on the role of a constraint length inherent to ISI. Equation (31) enables us to calculate survivor metrics and path histories in recursive fashion. The path history of a particular σ_n is obtained by extending the path history of the σ_{n−1} which in (31) yields the maximum by the α_{n−L} associated with the selected σ_{n−1}. At each sampling instant n, survivor metrics and path histories must be calculated for all possible states σ_n. Instead of expressing path histories in terms of pulse amplitudes α_{n−L}, they could also be represented in any other one-to-one related terms.
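Combining (31) and (32), the complete recursion can be sketched for the uncoded case as follows. The symbol alphabet, the s_l values in the usage line, the crude handling of the initial transient, and the final pick-the-best-metric traceback are all assumptions made for the sake of a compact example; path-history truncation is discussed later in this section.

    import numpy as np
    from itertools import product

    def mlse_ungerboeck(z, s, alphabet, L):
        """Modified Viterbi algorithm, eqs. (31)-(32); uncoded symbols assumed.

        z        : MF output samples z_n
        s        : [s_0, s_1, ..., s_L]  (sampled signal element, s_l = conj(s_{-l}))
        alphabet : allowed pulse amplitudes
        """
        states = list(product(alphabet, repeat=L))        # sigma_n = (alpha_{n-L+1}, ..., alpha_n)
        J = {st: 0.0 for st in states}                    # survivor metrics (all-zero start assumed)
        paths = {st: [] for st in states}                 # path histories

        for z_n in z:
            J_new, paths_new = {}, {}
            for st in states:                             # st = (alpha_{n-L+1}, ..., alpha_n)
                a_n = st[-1]
                best, best_prev = -np.inf, None
                for a_old in alphabet:                    # predecessor sigma_{n-1}
                    prev = (a_old,) + st[:-1]
                    isi = sum(s[l] * prev[L - l] for l in range(1, L + 1))
                    F = np.real(np.conj(a_n) * s[0] * a_n + 2 * np.conj(a_n) * isi)   # eq. (32)
                    cand = J[prev] - F
                    if cand > best:
                        best, best_prev = cand, prev
                J_new[st] = 2 * np.real(np.conj(a_n) * z_n) + best                    # eq. (31)
                paths_new[st] = paths[best_prev] + [best_prev[0]]   # extend by alpha_{n-L}
            J, paths = J_new, paths_new

        best_state = max(J, key=J.get)                    # final decision rule: assumption
        return paths[best_state] + list(best_state)

    # usage sketch: a_hat = mlse_ungerboeck(z, s=[1.0, 0.3 - 0.1j], alphabet=[-1.0, 1.0], L=1)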
This concludes the essential part of the derivation of the MLSE algorithm. The algorithm can be extended to provide maximum a posteriori probability (MAP) decisions, as is shown in Appendix II. However, for reasons given there, the performance improvement which thereby can be attained will usually be insignificant.

[Fig. 3. (a) State-transition diagram of the binary (α_n ∈ {0, 1}) run-length-limited code with runs ≤ 2. (b) State-transition diagram of the corresponding survivor states for L = 1.]

³ G. D. Forney, private communication.

The algorithm is identical to the original Viterbi algorithm if there is no ISI at the MF output, i.e., F(σ_{n−1}, σ_n) = ᾱ_n s_0 α_n. In the presence of ISI the algorithm differs from the original Viterbi algorithm in that it operates directly on the MF output, where noise samples are correlated according to (18). For the original Viterbi algorithm independence of the noise samples is essential; there the MF output noise is first whitened by a transversal filter, which thereby reduces the number of nonzero values of the signal element from 2L + 1 to L + 1. The constraint length inherent to ISI is therefore the same for both the Viterbi algorithm as used by Forney and its modified version presented in this section.³ Clearly, this indicates a fundamental lower
`
limit to the complexity of MLSE. Yet the modified Viterbi algorithm offers computational advantages in that the large number of squaring operations needed for the original Viterbi algorithm [29] are no longer required. Only the simple multiplications by discrete pulse-amplitude values occurring in Re(ᾱ_n z_n) must be executed in real time. It was observed by Price [35] that Forney's whitened MF is identical to the optimum linear input section of a decision-feedback receiver. In Section VI we shall see that basically the same principle can be applied in order to realize the MF and the whitened MF in adaptive form. Hence in this respect the two algorithms are about equal.
We shall now illustrate the algorithm by a specific example. Let us consider a simple binary run-length-limited code with α_n ∈ {0, 1} and runs no longer than two of the same symbol. The state-transition diagram of this code is shown in Fig. 3(a). According to (30), with L = 1 the following survivor states and allowed transitions between them are obtained:

    σ_1 ≜ (ρ_1 : 1) → (ρ_3 : {0,1}) ≜ {σ_4, σ_5}
    σ_2 ≜ (ρ_2 : 0) → (ρ_1 : 1)     ≜ σ_1
    σ_3 ≜ (ρ_2 : 1) → (ρ_3 : {0,1}) ≜ {σ_4, σ_5}
    σ_4 ≜ (ρ_3 : 0) → (ρ_2 : {0,1}) ≜ {σ_2, σ_3}
    σ_5 ≜ (ρ_3 : 1) → (ρ_4 : 0)     ≜ σ_6
    σ_6 ≜ (ρ_4 : 0) → (ρ_2 : {0,1}) ≜ {σ_2, σ_3}.
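In code, the six survivor states of this example and their allowed successors can be written down directly; the pair encoding (coder state, newest symbol) used below is merely an assumed representation of (30):

    # survivor state = (coder state rho_{n-1}, alpha_n); successors per the list above
    transitions = {
        ("rho1", 1): [("rho3", 0), ("rho3", 1)],   # sigma_1 -> {sigma_4, sigma_5}
        ("rho2", 0): [("rho1", 1)],                # sigma_2 -> sigma_1
        ("rho2", 1): [("rho3", 0), ("rho3", 1)],   # sigma_3 -> {sigma_4, sigma_5}
        ("rho3", 0): [("rho2", 0), ("rho2", 1)],   # sigma_4 -> {sigma_2, sigma_3}
        ("rho3", 1): [("rho4", 0)],                # sigma_5 -> sigma_6
        ("rho4", 0): [("rho2", 0), ("rho2", 1)],   # sigma_6 -> {sigma_2, sigma_3}
    }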
The corresponding state-transition diagram is depicted in Fig. 3(b). By introducing the time parameter explicitly, we obtain Fig. 4, in which allowed transitions are indicated by dashed lines (the so-called trellis picture [20]). The solid lines, as an example, represent the path histories of the six possible states σ_n and demonstrate their tendency to merge at some time n − L − m into a common path. As the algorithm is used to compute the path histories of the states σ_{n+i}, i = 1, 2, ..., new path-history branches appear on the right, whereas certain existing branches are not continued further and disappear. In this way, with some random time lag L + m, m ≥ 0, a common path history develops from left to right.

In order to obtain the output sequence {â_n} only the last, say, M pulse amplitudes of each path history have to be stored. M should be chosen such that the probability for m > M is negligible compared with the projected error probability of the ideal system (infinite M). Then, at time n, the α_{n−L−M} of all path histories will with high probability be identical; hence any one of them can be taken as â_{n−L−M}. The path histories can now be shortened by the α_{n−L−M}. Thus the path histories are kept at a length of L + M, and a constant delay through the MLSE of L + M symbol intervals results. For decoding of convolutional codes the value of M has been discussed in the literature.
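A sketch of this truncation rule (the data structure and the release criterion below are assumptions, meant to be read together with the recursion sketch given after (32)):

    def release_decision(paths, M):
        """Release a_hat once every stored path history holds more than M symbols.

        paths maps each survivor state to its stored path history (oldest symbol first).
        With high probability all histories agree on the oldest entry, so any one of
        them may be taken as the decision; all histories are then shortened by one symbol.
        """
        some_state = next(iter(paths))
        if len(paths[some_state]) <= M:
            return None, paths
        decision = paths[some_state][0]
        shortened = {state: history[1:] for state, history in paths.items()}
        return decision, shortened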
`
[Fig. 4. Time-explicit representation of Fig. 3(b) (dashed lines) and illustration of path histories at time n (solid lines). Stored path histories shown for L = 1.]
A reduction of M can be achieved by taking â_{n−L−M} from the path history corresponding to the largest survivor metric. From (31) it is clear that without countermeasures the survivor metrics would steadily increase in value. A suitable method of confining the survivor metrics to a finite range is to subtract the largest J_{n−1} from all J_n after each iteration.

V. ERROR PERFORMANCE

Since the receiver of this paper realizes the same decision rule as Forney's receiver [18], it is not surprising that identical error performance will be found. In this section, following closely Forney's approach, we present a short derivation of the error-event probability for the modified Viterbi-algorithm case. The influence of ISI present at the MF output on the error performance of the MLR is discussed, and bounds for essentially no influence are given in explicit form. The results are compared with the error
`
`

`

performance of the conventional receiver.

Let {â_n} be the sequence determined by the receiver. Then

    \{e_n\} = \{\hat{a}_n\} - \{a_n\}    (33)

is the error sequence. Since consecutive symbol errors are generally not independent of each other, the concept of sequence error events must be used. Hence, as error events we consider short sequences of symbol errors that intuitively are short compared with the mean time between them and that occur independently of each other. Presuming stationarity, the beginning of a specific error event ℰ can arbitrarily be aligned with time 0:

    \mathcal{E} : \{e_k\} = \ldots, 0, 0, e_0, e_1, \ldots, e_H, 0, 0, \ldots;    |e_0|, |e_H| \ge \delta_0,    H \ge 0.    (34)

Here δ_0 denotes the minimum symbol error distance

    \delta_0 = \min_{i \ne k} \{ |a^{(i)} - a^{(k)}| \}.    (35)

We are not concerned with the meaning of error events in terms of erroneous bits of information.
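For a concrete feel for (34) and (35), here is a short sketch that computes δ_0 for an assumed symbol alphabet and packages one admissible error event:

    import numpy as np
    from itertools import combinations

    alphabet = np.array([-3, -1, 1, 3])                               # assumed symbol alphabet
    delta_0 = min(abs(x - y) for x, y in combinations(alphabet, 2))   # eq. (35); here delta_0 = 2

    # one admissible error event aligned with time 0, cf. (34): H = 2, |e_0|, |e_H| >= delta_0
    error_event = np.array([2, 0, -2])
    assert abs(error_event[0]) >= delta_0 and abs(error_event[-1]) >= delta_0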
Let E be the set of events ℰ permitted by the transmission code. For a distinct event ℰ to happen, two subevents must occur:

    ℰ_1 : {a_n} is such that {a_n} + {e_n} is an allowable data sequence;
    ℰ_2 : the noise terms are such that {a_n} + {e_n} has ML (within the observation interval).

It is useful to define beyond that the subevent

    ℰ_2' : the noise terms are such that {a_n} + {e_n} has greater likelihood than {a_n}, but not necessarily ML.

Then we have

    \Pr(\mathcal{E}) = \Pr(\mathcal{E}_1) \Pr(\mathcal{E}_2 \mid \mathcal{E}_1) \le \Pr(\mathcal{E}_1) \Pr(\mathcal{E}_2' \mid \mathcal{E}_1)    (36)

where Pr(ℰ_1) depends only on the coding scheme. Events ℰ_1 are generally not mutually exclusive. Note that in (36) conditioning of ℰ_2 and ℰ_2' on ℰ_1 tightens the given bound, since prescribing ℰ_1 reduces the number of other events that could, when ℰ_2' occurs, still have greater likelihood, so that ℰ_2 would not be satisfied. From (23) we conclude that Pr(ℰ_2' | ℰ_1) is the probability that

    J_I(\{a_n\}) < J_I(\{a_n\} + \{e_n\}).    (37)

By substituting (16) into (23), and observing (33) and s_l = \bar{s}_{-l}, (37) becomes

    \delta(\mathcal{E}) \triangleq \frac{1}{s_0} \sum_{i=0}^{H} \sum_{k=0}^{H} \bar{e}_i s_{i-k} e_k < -\frac{2}{s_0} \, \mathrm{Re}\left[ \sum_{i=0}^{H} \bar{e}_i r_i \right].    (38)
We call δ(ℰ) the distance of ℰ. The right-hand side of (38) is a normally distributed random variable with zero mean and, from (18), (19), and (20), variance

    \sigma^2 = \frac{4 N_0}{s_0^2} \sum_{i=0}^{H} \sum_{k=0}^{H} \bar{e}_i s_{i-k} e_k = \frac{4 N_0}{s_0} \, \delta(\mathcal{E}).    (39)

Hence, observing (22), the probability of (37) being satisfied is given by

    \Pr(\mathcal{E}_2' \mid \mathcal{E}_1) = Q\left[ \left( \tfrac{1}{4} \, \delta(\mathcal{E}) \, (S/N)_{MF} \right)^{1/2} \right]    (40)

where

    Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} \exp(-y^2/2) \, dy.    (41)

We continue as indicated by Forney [18]. Let E(δ) be the subset of E containing all events ℰ with distance δ(ℰ) = δ. Let Δ be the set of the possible values of δ. From (36) and (40) the probability that any error event ℰ occurs becomes
