`
`
`
`
`
`
`
`Exhibit 1023
`U.S. Patent No. 6,108,388
`
`
`
`
`
`This book was set in Times Roman by Science Typographers, Inc.
`The editors were Alar E. Elken and John M. Morriss;
`
`the production supervisor was Janelle S. Travers.
`The cover was designed by Sharon Gresh.
`The drawings were done by Science Typographers, Inc.
`Project supervision was done by Science Typographers, Inc.
`R. R. Donnelley & Sons Company was printer and binder.
`
`DIGITAL COMMUNICATIONS
`
`Copyright © 1989, 1983 by McGraw-Hill, Inc. All rights reserved.
`Printed in the United States of America. Except as permitted under the
`United States Copyright Act of 1976, no part of this publication may be
`reproduced or distributed in any form or by any means, or stored in a data
`base or retrieval system, without prior written permission of the
`publisher.
`
`1234567890 DOC DOC 89432109
`
ISBN 0-07-050937-9
`
Library of Congress Cataloging-in-Publication Data
`
Proakis, John G.
Digital communications / John G. Proakis. 2nd ed.
p. cm. (McGraw-Hill series in electrical engineering.
Communications and signal processing)
Includes bibliographies and index.
ISBN 0-07-050937-9
1. Digital communications.  I. Title.  II. Series.
TK5103.7.P76  1989
621.38'0413 dc19    88-31455
`
`
`
`
`
CONTENTS

Preface    xv

1  Probability and Stochastic Processes    1
1.1  Probability    1
1.1.1  Random Variables, Probability Distributions, and Probability Densities    6
1.1.2  Functions of Random Variables    12
1.1.3  Statistical Averages of Random Variables    17
1.1.4  Some Useful Probability Distributions    21
1.1.5  Upper Bounds on the Tail Probability    34
1.1.6  Sums of Random Variables and the Central Limit Theorem    39
1.2  Stochastic Processes    44
1.2.1  Statistical Averages    45
1.2.2  Power Density Spectrum    49
1.2.3  Response of a Linear Time-Invariant System to a Random Input Signal    50
1.2.4  Sampling Theorem for Band-Limited Stochastic Processes    53
1.2.5  Discrete-Time Stochastic Signals    55
1.2.6  Cyclostationary Processes    56
1.3  Bibliographical Notes and References    58
Problems    58

2  Elements of a Digital Communications System and Information Theory    63
2.1  Model of a Digital Communications System    64
2.2  A Logarithmic Measure for Information    67
2.2.1  Average Mutual Information and Entropy    70
2.2.2  Information Measures for Continuous Random Variables    74
2.3  Sources, Source Models, and Source Encoding    76
2.3.1  Coding for a Discrete Memoryless Source    77
`
`
`
`
`440
`
`DIGITAL COMMUNICATIONS
`
decoding (Q = 2) of the M-ary symbols. In this case, we have

R_Q = log_2 { M / [ √(1 − P_M) + √((M − 1)P_M) ]² }    (5.2.140)

where P_M is the probability of a symbol error. For a relatively broad range of
rates, the difference between soft- and hard-decision decoding is approximately
2 dB.
`
`The most striking characteristic of the performance curves in Fig. 5.2.23 is
`that there is an optimum code rate for any given M. Unlike the case of coherent
`detection where the SNR per bit decreases monotonically with a decrease in code
`rate,
`the SNR per bit for noncoherent detection reaches a minimum in the
`vicinity of a normalized rate of 0.5, and increases for both high and low rates.
The minimum is rather broad, so there is really a range of rates from 0.2 to 0.9
`where the SNR per bit
`is within 1 dB of the minimum. This characteristic
`behavior in the performance with noncoherent detection is attributed to the
`nonlinear characteristic of the demodulator.
`
5.2.10 Interleaving of Coded Data for Channels
with Burst Errors
`
`Most of the well known codes that have been devised for increasing the reliability
`in the transmission of information are effective when the errors caused by the
`channel are statistically independent. This is the case for the AWGN channel.
`However, there are channels that exhibit bursty error characteristics. One exam-
`ple is the class of channels characterized by multipath and fading, which is
`described in detail
`in Chap. 7. Signal fading due to time—variant multipath
`propagation often causes the signal to fall below the noise level, thus resulting in
`a large number of errors. A second example is the class of magnetic recording
`channels (tape or disk) in which defects in the recording media result in clusters
`of errors. Such error clusters are not usually corrected by codes that are optimally
`designed for statistically independent errors.
`Considerable work has been done on the construction of codes that are
`capable of correcting burst errors. Probably the best known burst error correcting
`codes are the subclass of cyclic codes called Fire codes, named after P. Fire
`(1959) who discovered them. Another class of cyclic codes for burst error
`correction were subsequently discovered by Burton (1969).
A burst of errors of length b is defined as a sequence of b-bit errors, the
first and last of which are 1's. The burst error correction capability of a code is
defined as one less than the length of the shortest uncorrectable burst. It is
relatively easy to show that a systematic (n, k) code, which has n − k parity
check bits, can correct bursts of length b ≤ [(n − k)/2].
An effective method for dealing with burst error channels is to interleave
the coded data in such a way that the bursty channel is transformed into a
channel having independent errors. Thus, a code designed for independent
channel errors (short bursts) is used.
`
`——
`
`
`
`EFFICIENT SIGNALING WITH CODED WAVEFORMS
`
`441
`
`
`
`
`
FIGURE 5.2.24
Block diagram of system employing interleaving for burst-error channel. [Blocks:
channel encoder, interleaver, modulator, channel, demodulator, deinterleaver,
channel decoder.]
`
`A block diagram of a system that employs interleaving is shown in Fig.
`5.2.24. The encoded data are reordered by the interleaver and transmitted over
the channel. At the receiver, after demodulation (with either hard or soft
decisions), the deinterleaver puts the data in proper sequence and passes it to the decoder.
`As a result of the interleaving/deinterleaving, error bursts are spread out in time
`so that errors within a code word appear to be independent.
The interleaver can take one of two forms: a block structure or a convolutional
structure. A block interleaver formats the encoded data in a rectangular
array of m rows and n columns. Usually, each row of the array constitutes a
code word of length n. An interleaver of degree m consists of m rows (m code
`words) as illustrated in Fig. 5.2.25. The bits are read out column-wise and
`transmitted over the channel. At the receiver, the deinterleaver stores the data in
`the same rectangular array format, but it is read out row-wise, one code word at a
`time. As a result of this reordering of the data during transmission, a burst of
errors of length l = mb is broken up into m bursts of length b. Thus, an (n, k)
code that can handle burst errors of length b ≤ [(n − k)/2] can be combined
`with an interleaver of degree m to create an interleaved (mn, mk) block code
`that can handle bursts of length mb.
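The row/column reordering described above is easy to sketch. The following illustrative Python fragment (the function names and the m = 4, n = 7, b = 2 values are our own choices, not from the text) writes m code words row-wise, reads them out column-wise, and shows that a channel burst of length mb is spread so that each code word sees at most b corrupted symbols:

```python
# Sketch of an m-row by n-column block interleaver, per the description above.
def interleave(bits, m, n):
    """Write m code words of length n row-wise, read out column-wise."""
    assert len(bits) == m * n
    rows = [bits[i * n:(i + 1) * n] for i in range(m)]
    return [rows[i][j] for j in range(n) for i in range(m)]

def deinterleave(bits, m, n):
    """Inverse operation: write column-wise, read row-wise."""
    assert len(bits) == m * n
    out = [None] * (m * n)
    k = 0
    for j in range(n):
        for i in range(m):
            out[i * n + j] = bits[k]
            k += 1
    return out

m, n = 4, 7
data = list(range(m * n))      # stand-in for four 7-symbol code words
tx = interleave(data, m, n)
burst = set(tx[5:13])          # channel burst of length m*b = 8 (b = 2)
rx = deinterleave(tx, m, n)
assert rx == data              # interleaving is invertible
# corrupted symbols landing in each code word (row): at most b = 2 each
hits = [sum(1 for s in data[i * n:(i + 1) * n] if s in burst) for i in range(m)]
```

After deinterleaving, the eight consecutive channel errors are split into bursts of length at most b = 2 per code word, which is exactly the transformation the text describes.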
A convolutional interleaver can be used in place of a block interleaver in
`much the same way. Convolutional interleavers are better matched for use with
`the class of convolutional codes that
`is described in the following section.
`Convolutional interleaver structures have been described in the literature by
`Ramsey (1970) and Forney (1971).
`
`
`5.3 CONVOLUTIONAL CODES
`
`A convolutional code is generated by passing the information sequence to be
`transmitted through a linear finite-state shift register. In general, the shift register
consists of L (k-bit) stages and n linear algebraic function generators, as shown
`
`
`
`
`
`
`
`
From (6.2.66) we observe that the discrete-time filter serves to shape the
spectrum of the transmitted signal. The analog filter, however, does no spectral
shaping at all. For example, in the duobinary pulse the coefficients of the
discrete-time filter are x_0 = x_1 = 1, and x_n = 0 otherwise. The resulting power
density spectrum is

Φ(f) = |1 + e^{−j2πfT}|² = 4cos²(πfT) = 4cos²(πf/2W),   |f| ≤ W    (6.2.67)

But this power density spectrum is identical to that obtained by exciting an
analog filter which has the frequency response

X(f) = 2cos(πf/2W),   |f| ≤ W    (6.2.68)

by a sequence of uncorrelated symbols {A_n}.
The major difference in these two formulations is that in one the spectral
shaping is obtained by designing the pulse shape x(t), while in the second the
spectral shaping is achieved by the correlation properties of the {B_n}. Consequently
the design of a partial-response signal pulse is equivalent to the problem
of selecting the coefficients in the discrete-time filter shown in Fig. 6.2.10 to
achieve a desirable correlation function or power density spectrum.
`
6.3 OPTIMUM DEMODULATOR FOR
INTERSYMBOL INTERFERENCE AND
ADDITIVE GAUSSIAN NOISE
`
`In this section we derive the structure of the optimum demodulator for digital
`transmission through a nonideal band-limited channel with additive gaussian
`noise. We begin with the transmitted signal given by (6.2.1). The received signal,
`in turn, is
`
r(t) = Σ_n I_n h(t − nT) + z(t)    (6.3.1)

where h(t) represents the response of the channel to the input signal pulse u(t)
and z(t) represents the additive white gaussian noise.
`First we demonstrate that the optimum demodulator can be realized as a
`filter matched to h(t), followed by a sampler operating at the symbol rate 1/T
`and a subsequent processing algorithm for estimating the information sequence
`{In} from the sample values. Consequently the samples at the output of the
`matched filter are sufficient for
`the estimation of
`the sequence {In}. The
`Karhunen—Loeve expansion provides the means for deriving the optimum (maxi-
`mum-likelihood) demodulator.
`
Maximum-likelihood demodulator. Following the procedure described in Appendix 4A,
we expand the received signal r(t) in the series

r(t) = lim_{N→∞} Σ_{k=1}^{N} r_k f_k(t)    (6.3.2)
`
`
`
`
`
`
`
`
`
`
`DIGITAL SIGNALING OVER A BANDWIDTH-CONSTRAINED LINEAR FILTER CHANNEL
`
`where the { fk(t)} are the set of orthogonal eigenfunctions of the integral
equation in the Karhunen-Loève expansion method and the {r_k} are the
`observable random variables obtained by projecting r(t) onto the set of or-
`thonormal functions. It is easily shown that
`
r_k = Σ_n I_n h_{kn} + z_k,   k = 1, 2, …    (6.3.3)

where h_{kn} is the value obtained from projecting h(t − nT) onto f_k(t) and z_k is
the value obtained from projecting z(t) onto f_k(t). The sequence {z_k} is
gaussian with zero mean and covariance ½E(z_k* z_m) = λ_k δ_km, where the {λ_k} are
the eigenvalues of the integral equation.
The joint probability density function of the random variables r_1, r_2, …,
r_N ≡ r_N conditioned on the transmitted sequence I_1, I_2, …, I_p ≡ I_p is

p(r_N | I_p) = [ ∏_{k=1}^{N} (2πλ_k) ]^{−1} exp( −½ Σ_{k=1}^{N} | r_k − Σ_n I_n h_{kn} |² / λ_k )    (6.3.4)
`
In the limit as the number N of observable random variables approaches infinity,
the logarithm of p(r_N | I_p) is proportional to the quantity J_0(I_p), defined as

J_0(I_p) = −∫_{−∞}^{∞} | r(t) − Σ_n I_n h(t − nT) |² dt
         = −∫_{−∞}^{∞} |r(t)|² dt + 2Re[ Σ_n I_n* ∫_{−∞}^{∞} r(t) h*(t − nT) dt ]
           − Σ_n Σ_m I_n* I_m ∫_{−∞}^{∞} h*(t − nT) h(t − mT) dt    (6.3.5)
`
The maximum-likelihood estimates of the symbols I_1, I_2, …, I_p are those
that maximize this quantity. We note, however, that the integral of |r(t)|² is
independent of the information sequence {I_n} and, hence, it may be discarded.
The other integral involving r(t) gives rise to the variables

y_n ≡ y(nT) = ∫_{−∞}^{∞} r(t) h*(t − nT) dt    (6.3.6)
`
These variables can be generated by passing r(t) through a filter matched to h(t)
and sampling the output at the symbol rate 1/T. The samples {y_n} form a set of
sufficient statistics for the computation of J_0(I_p) or, equivalently, of the quantity

J(I_p) = 2Re( Σ_n I_n* y_n ) − Σ_n Σ_m I_n* I_m x_{n−m}    (6.3.7)

where, by definition, x(t) is the response of the matched filter to h(t) and

x_n ≡ x(nT) = ∫_{−∞}^{∞} h*(t) h(t + nT) dt    (6.3.8)
`
Hence x(t) represents the output of a filter having an impulse response h*(−t)
`
`
`
`
`
`550
`
`DIGITAL COMMUNICATIONS
`
`
`
`.‘IC...'
`
`and an excitation h(t). In other words, x(t) represents the autocorrelation
`function of h(t). Consequently {xn} represents the samples of the autocorrela-
`tion function of h(t), taken periodically at 1/T. We are not particularly con-
`cerned with the noncausal characteristic of the filter matched to h(t) since in
`practice we can introduce a sufficiently large delay to ensure causality of the
`
matched filter.
The brute force method for determining the particular sequence
I_1, I_2, I_3, …, I_p that maximizes (6.3.7) is the exhaustive method. That is, we
`evaluate J(Ip) for all possible sequences and select the sequence that yields the
`largest J(Ip). This signal processing method has been termed maximum-likeli-
`hood sequence estimation (MLSE). A more efficient computational method for
`implementing MLSE is the Viterbi algorithm, which has already been described
`for decoding of convolutional codes and which will be described again in this
`chapter in the context of dealing with intersymbol interference.
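The exhaustive method can be sketched directly from (6.3.7). In the illustrative Python fragment below (the channel autocorrelation values and the binary alphabet are our own assumptions, chosen real-valued for simplicity), every candidate sequence is scored by J and the maximizer is kept; the cost grows exponentially with the sequence length, which is precisely what motivates the Viterbi algorithm:

```python
# Brute-force MLSE: maximize J(I) = 2*Re(sum I_n y_n) - sum sum I_n I_m x_{n-m}
# over all candidate sequences (real symbols and real x_k here).
from itertools import product

def J(I, y, x):
    p = len(I)
    corr = 2 * sum(I[n] * y[n] for n in range(p))
    isi = sum(I[n] * I[m] * x[abs(n - m)] for n in range(p) for m in range(p))
    return corr - isi

# illustrative sampled autocorrelation x_k = x_{-k}
x = {0: 1.25, 1: 0.5, 2: 0.0, 3: 0.0}
true_I = (1, -1, -1, 1)
# noiseless matched-filter outputs y_n = sum_m I_m x_{n-m}
y = [sum(true_I[m] * x[abs(n - m)] for m in range(4)) for n in range(4)]
best = max(product((-1, 1), repeat=4), key=lambda I: J(I, y, x))
assert best == true_I   # with no noise, MLSE recovers the sent sequence
```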
`In addition to the optimum MLSE method specified by (6.3.7), there are
`also a number of other algorithms for processing the sampled matched filter
`outputs. By design, these other algorithms employ symbol-by-symbol estimation
`of the information sequence {In }, whereas in MLSE the estimation is performed
`on the entire sequence. In order to describe these algorithms, we deal with the
`signal given by (6.3.6). If we substitute for r(t) in (6.3.6) using the expression
`given in (6.3.1), we obtain
`
y_k = Σ_n I_n x_{k−n} + ν_k    (6.3.9)

where ν_k denotes the additive noise sequence at the output of the matched filter,
i.e.,

ν_k = ∫_{−∞}^{∞} z(t) h*(t − kT) dt    (6.3.10)
`
By expressing y_k in the form

y_k = x_0 ( I_k + (1/x_0) Σ_{n≠k} I_n x_{k−n} ) + ν_k    (6.3.11)

we observe that I_k is the desired symbol at the kth sampling instant, the term

(1/x_0) Σ_{n≠k} I_n x_{k−n}

represents the intersymbol interference, and ν_k is the additive noise component.
Since the intersymbol interference term depends on the particular transmitted
sequence, it is a random variable at any sampling instant. In high-speed transmission
on telephone channels and on some radio channels this term is often
sufficiently large to mask the desired signal component I_k. Under these circumstances,
a decision about the symbol I_k cannot be made with a small error
probability if that decision is based entirely on the single received variable y_k.
Consequently even the symbol-by-symbol estimation (equalization) methods that
`
FIGURE 6.3.1
Equivalent discrete-time model of channel with intersymbol interference. [Tapped
delay line: delays z^{−1} = delay of T, tap gains {x_k}, input {I_n}, additive noise
{ν_n}, output {y_n}.]
`are described below are based on processing several samples of the received
`
signal {y_k} at a time in order to estimate a single symbol. Before we begin our
discussion of these techniques, it is desirable, for mathematical convenience, to
introduce an equivalent discrete-time model for the intersymbol interference.
`A discrete-time model for intersymbol interference. Since the transmitter sends
`discrete-time symbols at a rate 1/T symbols/s and the sampled output of the
`matched filter at the receiver is also a discrete-time signal with samples occurring
`at a rate 1/T per second, it follows that the cascade of the analog filter at the
`transmitter with impulse response u(t), the channel with impulse response C(t),
`the matched filter at the receiver with impulse response h*(—t), and the sampler
can be represented by an equivalent discrete-time transversal filter having tap gain
coefficients {x_k}. We assume that x_k = 0 for |k| > L, where L is some arbitrary
positive integer. In spite of the restriction that the channel is band-limited, this
assumption holds in all practical communications systems. Consequently we have
an equivalent discrete-time transversal filter that spans a time interval of 2LT
seconds. Its input is the sequence of information symbols {I_k} and its output is the
discrete-time sequence {y_k} given by (6.3.9). The equivalent discrete-time model
is shown in Fig. 6.3.1.
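The equivalent discrete-time model lends itself directly to simulation. The sketch below (tap values and lengths are illustrative, not from the text) generates the matched-filter samples y_k = Σ_n I_n x_{k−n} + ν_k; feeding in a single isolated symbol simply traces out the taps x_k:

```python
# Simulate the equivalent discrete-time transversal-filter channel of (6.3.9),
# with real taps x_k = x_{-k} spanning |k| <= L (illustrative values).
import random

random.seed(0)
L = 2
x = {0: 1.0, 1: 0.4, 2: 0.1}   # x_k for k >= 0; symmetric for real taps

def channel(symbols, noise_std=0.0):
    n_sym = len(symbols)
    out = []
    for k in range(n_sym):
        s = sum(symbols[n] * x[abs(k - n)]
                for n in range(max(0, k - L), min(n_sym, k + L + 1)))
        out.append(s + random.gauss(0.0, noise_std))
    return out

I = [random.choice((-1, 1)) for _ in range(10)]
y = channel(I)                 # noiseless output: desired symbols plus ISI
iso = channel([1, 0, 0, 0, 0]) # isolated symbol traces out the taps
```

With `noise_std > 0` the same function models the correlated-looking output sequence that the following paragraphs set out to equalize.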
The major difficulty with this discrete-time model occurs in the evaluation
`of performance of the various equalization or estimation techniques that are
`discussed in the following sections. The difficulty is caused by the correlations in
`the noise sequence {vk} at the output of the matched filter. That is, the set of
`noise variables {vk} is a gaussian-distributed sequence with zero mean and
`
`
`
`552
`
`DIGITAL COMMUNTCATIONS
`
autocorrelation function

½E(ν_k* ν_j) = N_0 x_{j−k},   |j − k| ≤ L
             = 0,             otherwise    (6.3.12)
`
`Since it is more convenient to deal with the white-noise sequence when calculat-
`ing the error rate performance, it is desirable to whiten the noise sequence by
`further filtering the sequence { yk}. A discrete—time noise-whitening filter is
`determined as follows.
Let X(z) denote the (two-sided) z transform of the sampled autocorrelation
function {x_k}, i.e.,

X(z) = Σ_{k=−L}^{L} x_k z^{−k}    (6.3.13)
`
Since x_k = x_{−k}*, it follows that X(z) = X*(z^{−1}) and the 2L roots of X(z) have
the symmetry that if p is a root, 1/p* is also a root. Hence X(z) can be factored
and expressed as

X(z) = F(z) F*(z^{−1})    (6.3.14)
`
where F(z) is a polynomial of degree L having the roots p_1, p_2, …, p_L and
F*(z^{−1}) is a polynomial of degree L having the roots 1/p_1*, 1/p_2*, …, 1/p_L*.
Then an appropriate noise-whitening filter has a z transform 1/F*(z^{−1}). Since
there are 2^L possible choices for the roots of F*(z^{−1}), each choice resulting in a
filter characteristic that is identical in magnitude but different in phase from
other choices of the roots, we propose to choose the unique F*(z^{−1}) having
minimum phase, i.e., the polynomial having all its roots inside the unit circle.
Thus, when all the roots of F*(z^{−1}) are inside the unit circle, 1/F*(z^{−1}) is a
physically realizable, stable, recursive discrete-time filter.† Consequently passage
of the sequence {y_k} through the digital filter 1/F*(z^{−1}) results in an output
sequence {v_k} which can be expressed as

v_k = Σ_{n=0}^{L} f_n I_{k−n} + η_k    (6.3.15)

where {η_k} is a white gaussian noise sequence and {f_k} is a set of tap
coefficients of an equivalent discrete-time transversal filter having a transfer
function F(z).
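The factorization X(z) = F(z)F*(z^{−1}) can be sketched numerically. The following fragment (an illustration using NumPy, with a = 0.5 chosen by us rather than taken from the text, and assuming no roots lie exactly on the unit circle) builds one valid factor by keeping the roots inside the unit circle and checks that the resulting taps regenerate the autocorrelation {x_k}:

```python
# Spectral factorization sketch: from x = [x_{-L}, ..., x_0, ..., x_L],
# form the polynomial z^L * X(z), split its 2L roots into inside/outside
# pairs (p, 1/p*), and build taps f from the roots inside |z| = 1.
import numpy as np

def factor(x):
    roots = np.roots(x)                     # 2L roots of z^L * X(z)
    inside = roots[np.abs(roots) < 1]       # pick one root from each pair
    f = np.poly(inside)                     # monic polynomial from roots
    L = (len(x) - 1) // 2
    x0 = x[L]
    return f * np.sqrt(x0 / np.sum(np.abs(f) ** 2))  # scale: sum|f_k|^2 = x0

a = 0.5                                     # illustrative value
x = [np.conj(a), 1 + abs(a) ** 2, a]        # two-tap autocorrelation
f = factor(x)
# reconstruct x_k = sum_n f_n^* f_{n+k} via correlation of f with itself
x_hat = np.convolve(f, np.conj(f)[::-1])
assert np.allclose(x_hat, x)
```

For this value of a the routine returns f = (1, a), in agreement with the two-tap example worked out below.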
In summary, the cascade of the transmitting filter u(t), the channel C(t),
the matched filter h*(−t), the sampler, and the discrete-time noise-whitening
filter 1/F*(z^{−1}) can be represented as an equivalent discrete-time transversal
filter having the set {f_k} as its tap coefficients. The additive noise sequence {η_k}
corrupting the output of the discrete-time transversal filter is a white gaussian
noise sequence having zero mean and variance N_0. Figure 6.3.2 illustrates the
`
`
`
`
`
†By removing the stability condition, we can also allow F*(z^{−1}) to have roots on the unit circle.
`
`
`
`
`
FIGURE 6.3.2
Equivalent discrete-time model of intersymbol interference channel with WGN.
[Transversal filter with tap coefficients {f_k}, delays z^{−1} = delay of T, input
{I_k}, additive white noise {η_k}, output {v_k}.]
`
`model of the equivalent discrete-time system with white noise. We refer to this
`model as the equivalent discrete—time white-noise filter model.
`For example, suppose that the transmitter signal pulse u(t) has duration T
`and unit energy and the received signal pulse is h(t) = u(t) + au(t — T). Then
the sampled autocorrelation function is given as

x_k = a*,          k = −1
    = 1 + |a|²,    k = 0
    = a,           k = 1    (6.3.16)

The z transform of x_k is

X(z) = Σ_{k=−1}^{1} x_k z^{−k}
     = a*z + (1 + |a|²) + a z^{−1}
     = (a z^{−1} + 1)(a* z + 1)

Under the assumption that |a| > 1, one chooses F(z) = a z^{−1} + 1 so that the
equivalent transversal filter consists of two taps having tap gain coefficients
f_0 = 1, f_1 = a. We note that the correlation sequence {x_k} may be expressed in
terms of the {f_n} as

x_k = Σ_{n=0}^{L−k} f_n* f_{n+k},   k = 0, 1, 2, …, L    (6.3.17)
`
`
`
`
`
`
`
`
`
`
When the channel impulse response is changing slowly with time, the
matched filter at the receiver becomes a time-variable filter. In this case, the time
variations of the channel/matched-filter pair result in a discrete-time filter with
time-variable coefficients. As a consequence we have time-variable intersymbol
interference effects which can be modeled by the filter illustrated in Fig. 6.3.2,
where the tap coefficients are slowly varying with time.
`The discrete-time white-noise linear filter model for the intersymbol inter-
`ference effects that arise in high-speed digital transmission over nonideal band-
`limited channels will be used throughout the remainder of this chapter in our
`discussion of compensation techniques for the interference.
In general, the compensation methods are called equalization techniques or
equalization algorithms.
`
`Until about 1965 the word “equalizer” was used to describe a linear filter
`that compensates (equalizes) for the nonideal frequency response characteristic of
`the channel. Today the word “equalizer” has a broader connotation. It is used to
`describe any device or signal processing algorithm that is designed to deal with
intersymbol interference. Several different types of equalization techniques are
`described in Sec. 6.4. We begin our discussion with the simplest, namely, a linear
`equalizer.
`
`6.4 LINEAR EQUALIZATION
`
The linear filter most often used for equalization is the transversal filter shown in
Fig. 6.4.1. Its input is the sequence {v_k} given in (6.3.15) and its output is the
estimate of the information sequence {I_k}. The estimate of the kth symbol may
be expressed as

Î_k = Σ_{j=−K}^{K} c_j v_{k−j}    (6.4.1)

where {c_j} are the (2K + 1) tap weight coefficients of the filter. The estimate Î_k
is quantized to the nearest (in distance) information symbol to form the decision
Ĩ_k. If Ĩ_k is not identical to the transmitted information symbol I_k, an error has
been made.
`
Considerable research has been performed on the criterion for optimizing
the filter coefficients {c_j}. Since the most meaningful measure of performance for
a digital communications system is the average probability of error, it is desirable
to choose the coefficients to minimize this performance index. However, the
probability of error is a highly nonlinear function of {c_j}. Consequently, the
probability of error as a performance index for optimizing the tap weight
coefficients of the equalizer is impractical.
Two criteria have found widespread use in optimizing the equalizer
coefficients {c_j}. One is the peak distortion criterion and the other is the
mean-square-error criterion.
`
`
`
FIGURE 6.4.1
Linear transversal filter. [Unequalized input {v_k}, delay elements z^{−1} = delay
of T, adjustable tap gains {c_j}, equalized output, and an algorithm for tap gain
adjustment.]
`
6.4.1 Peak Distortion Criterion
and the Zero-Forcing Algorithm
`
`The peak distortion is simply defined as the worst-case intersymbol interference
`at the output of the equalizer. The minimization of this performance index is
`called the peak distortion criterion. First we consider the minimization of the peak
`distortion assuming that the equalizer has an infinite number of taps. Then we
`will discuss the case in which the transversal equalizer spans a finite time
`duration.
We observe that the cascade of the discrete-time linear filter model having
an impulse response {f_n} and an equalizer having an impulse response {c_n} can
be represented by a single equivalent filter having the impulse response

q_n = Σ_{j=−∞}^{∞} c_j f_{n−j}    (6.4.2)

That is, {q_n} is simply the convolution of {c_n} and {f_n}. The equalizer is
assumed to have an infinite number of taps. Its output at the kth sampling
instant can be expressed in the form
`
Î_k = q_0 I_k + Σ_{n≠k} I_n q_{k−n} + Σ_j c_j η_{k−j}    (6.4.3)
`
The first term in (6.4.3) represents a scaled version of the desired symbol.
For convenience, we normalize q_0 to unity. The second term is the intersymbol
`
`
`
`
`
`556
`
`DIGITAL COMMUNICATIONS
`
interference. The peak value of this interference, which is called the peak
distortion, is

D = Σ_{n≠0} |q_n| = Σ_{n≠0} | Σ_{j=−∞}^{∞} c_j f_{n−j} |    (6.4.4)

Thus D is a function of the equalizer tap weights.
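The peak distortion of (6.4.4) is straightforward to compute for finite filters. In this sketch (tap values are our own, and the equalizer taps are one-sided for simplicity; the sum is normalized by |q_0| to match the q_0 = 1 convention of the text), a do-nothing equalizer leaves D = |f_1|/|f_0|, while a short truncated inverse filter reduces it:

```python
# Peak distortion D = sum_{n != 0} |q_n| / |q_0|, with q = c * f.
def convolve(u, v):
    out = [0.0] * (len(u) + len(v) - 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            out[i + j] += ui * vj
    return out

def peak_distortion(c, f, center):
    """center = index of q_0 in the convolution output."""
    q = convolve(c, f)
    return sum(abs(qn) for n, qn in enumerate(q) if n != center) / abs(q[center])

f = [1.0, 0.5]                       # illustrative channel taps
assert peak_distortion([1.0], f, 0) == 0.5            # no equalization
c_eq = [1.0, -0.5, 0.25]             # truncated expansion of 1/F(z)
assert peak_distortion(c_eq, f, 0) == 0.125           # reduced distortion
```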
With an equalizer having an infinite number of taps, it is possible to select
the tap weights so that D = 0, i.e., q_n = 0 for all n except n = 0. That is, the
intersymbol interference can be completely eliminated. The values of the tap
weights for accomplishing this goal are determined from the condition

q_n = Σ_{j=−∞}^{∞} c_j f_{n−j} = 1,   n = 0
                              = 0,   n ≠ 0    (6.4.5)

By taking the z transform of (6.4.5), we obtain

Q(z) = C(z)F(z) = 1    (6.4.6)

or, simply,

C(z) = 1/F(z)    (6.4.7)

where C(z) denotes the z transform of the {c_j}. We note that the equalizer, with
transfer function C(z), is simply the inverse filter to the linear filter model F(z).
In other words, complete elimination of the intersymbol interference requires the
use of an inverse filter to F(z). We call such a filter a zero-forcing filter. Figure
6.4.2 illustrates in block diagram the equivalent discrete-time channel and equalizer.
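The inverse-filter idea can be sketched for a simple two-tap channel (values are illustrative): expanding C(z) = 1/F(z) = 1/(1 + a z^{−1}) in powers of z^{−1} gives taps c_j = (−a)^j, and truncating the expansion at J taps leaves a single residual ISI term of magnitude |a|^J (so the truncation only converges for |a| < 1):

```python
# Truncated zero-forcing equalizer C(z) ~ 1/F(z) for F(z) = 1 + a z^{-1}.
a = 0.5
f = [1.0, a]

def zf_taps(J):
    return [(-a) ** j for j in range(J)]

def convolve(u, v):
    out = [0.0] * (len(u) + len(v) - 1)
    for i, ui in enumerate(u):
        for j, vj in enumerate(v):
            out[i + j] += ui * vj
    return out

q = convolve(zf_taps(8), f)                    # overall response c * f
assert abs(q[0] - 1.0) < 1e-12                 # desired tap q_0 = 1
assert all(abs(qn) < 1e-12 for qn in q[1:-1])  # interior ISI cancelled
assert abs(q[-1]) == a ** 8                    # single residual term
```

Doubling the number of taps squares the residual term, illustrating why an infinite-tap equalizer drives D to zero.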
`
`
`
`
`
The cascade of the noise-whitening filter having the transfer function
1/F*(z^{−1}) and the zero-forcing equalizer having the transfer function 1/F(z)
results in an equivalent zero-forcing equalizer having the transfer function

C′(z) = 1 / [F(z) F*(z^{−1})] = 1/X(z)    (6.4.8)

FIGURE 6.4.2
Block diagram of channel with zero-forcing equalizer. [Channel F(z), additive
white gaussian noise {η_k}, equalizer C(z) = 1/F(z).]
`
`
`
FIGURE 6.4.3
[Block diagram: channel X(z) = F(z)F*(z^{−1}) with gaussian noise {ν_k}, followed
by the equivalent zero-forcing equalizer C′(z) = 1/X(z).]

This equivalent equalizer operates on the sequence {y_k} of
samples from the matched filter, given by (6.3.9). Its output consists of the
`desired symbols corrupted only by additive zero mean gaussian noise. The
`impulse response of the combined filter is
`
`
`
c′_k = (1/2πj) ∮ z^{k−1} / X(z) dz    (6.4.9)
`
The performance of the zero-forcing equalizer that completely eliminates the
intersymbol interference can be expressed in terms of the signal-to-noise ratio
(SNR) at its output. For mathematical convenience we normalize the received
signal power to unity. This implies that q_0 = 1 and that the expected value of
|I_k|² is also unity. Then the SNR is simply the reciprocal of the noise variance σ_n²
at the output of the equalizer.
The value of σ_n² can be simply determined by observing that the noise
sequence {ν_k} at the input to the equivalent zero-forcing equalizer C′(z) has a
zero mean and a power spectral density

Φ_νν(ω) = N_0 X(e^{jωT}),   |ω| ≤ π/T    (6.4.10)

where X(e^{jωT}) is obtained from X(z) by the substitution z = e^{jωT}. Since
C′(z) = 1/X(z), it follows that the noise sequence at the output of the equalizer
has a power spectral density

Φ_nn(ω) = N_0 / X(e^{jωT}),   |ω| ≤ π/T    (6.4.11)
`
`
`
`
`
`
`
Consequently the variance of the noise variable at the output of the equalizer is

σ_n² = (T/2π) ∫_{−π/T}^{π/T} Φ_nn(ω) dω
     = (T N_0 / 2π) ∫_{−π/T}^{π/T} dω / X(e^{jωT})    (6.4.12)

and the SNR for the zero-forcing equalizer is

γ_∞ = 1/σ_n² = [ (T N_0 / 2π) ∫_{−π/T}^{π/T} dω / X(e^{jωT}) ]^{−1}    (6.4.13)

where the subscript on γ indicates that the equalizer has an infinite number of
taps.
`
The spectral characteristic X(e^{jωT}) corresponding to the Fourier transform
of the sampled sequence {x_n} has an interesting relationship to the analog
filter H(ω) used at the receiver. Since

x_k = ∫_{−∞}^{∞} h*(t) h(t + kT) dt

use of Parseval's theorem yields

x_k = (1/2π) ∫_{−∞}^{∞} |H(ω)|² e^{jωkT} dω    (6.4.14)

where H(ω) is the Fourier transform of h(t). But the integral in (6.4.14) can be
expressed in the form

x_k = (1/2π) ∫_{−π/T}^{π/T} [ Σ_{n=−∞}^{∞} |H(ω + 2πn/T)|² ] e^{jωkT} dω    (6.4.15)

Now the Fourier transform of {x_k} is

X(e^{jωT}) = Σ_{k=−∞}^{∞} x_k e^{−jωkT}    (6.4.16)

and the inverse transform yields

x_k = (T/2π) ∫_{−π/T}^{π/T} X(e^{jωT}) e^{jωkT} dω    (6.4.17)

From a comparison of (6.4.15) and (6.4.17), we obtain the desired relationship
between X(e^{jωT}) and H(ω). That is,

X(e^{jωT}) = (1/T) Σ_{n=−∞}^{∞} |H(ω + 2πn/T)|²,   |ω| ≤ π/T    (6.4.18)

where the right-hand side of (6.4.18) is called the folded spectrum of |H(ω)|². We
`
`
`
`
`
`
also observe that |H(ω)|² = X(ω), where X(ω) is the Fourier transform of the
waveform x(t) and x(t) is the response of the matched filter to the input h(t).
Therefore the right-hand side of (6.4.18) can also be expressed in terms of X(ω).
Substitution for X(e^{jωT}) in (6.4.13) using the result in (6.4.18) yields the
desired expression for the SNR in the form

γ_∞ = [ (T² N_0 / 2π) ∫_{−π/T}^{π/T} dω / Σ_{n=−∞}^{∞} |H(ω + 2πn/T)|² ]^{−1}    (6.4.19)
`
We observe that if the folded spectral characteristic of H(ω) possesses any zeros,
the integrand becomes infinite and the SNR goes to zero. In other words, the
performance of the equalizer is poor whenever the folded spectral characteristic
possesses nulls or takes on small values. This behavior occurs primarily because
the equalizer, in eliminating the intersymbol interference, enhances the additive
noise. For example, if the channel contains a spectral null in its frequency
response, the linear zero-forcing equalizer attempts to compensate for this by
introducing an infinite gain at that frequency. But this compensates for the
channel distortion at the expense of enhancing the additive noise. On the other
hand, an ideal channel coupled with an appropriate signal design that results in
no intersymbol interference will have a folded spectrum that satisfies the condition

Σ_{n=−∞}^{∞} |H(ω + 2πn/T)|² = T,   |ω| ≤ π/T    (6.4.20)

In this case the SNR achieves its maximum value, namely,

γ_∞ = 1/N_0    (6.4.21)
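A numerical illustration of (6.4.19)-(6.4.21), built entirely on our own sample values: for an ideal channel whose folded spectrum equals T, evaluating the integral in (6.4.19) by a simple midpoint rule reproduces γ_∞ = 1/N_0:

```python
# Evaluate gamma_inf = [ (T^2 N0 / 2 pi) * integral dw / folded(w) ]^{-1}
# for an ideal channel with folded spectrum identically equal to T.
import math

T = 1.0
N0 = 0.25    # illustrative noise level

def folded(w):
    return T     # ideal channel: sum_n |H(w + 2*pi*n/T)|^2 = T

M = 1000
wlo, whi = -math.pi / T, math.pi / T
dw = (whi - wlo) / M
integral = sum(dw / folded(wlo + (i + 0.5) * dw) for i in range(M))
gamma = 1.0 / (T * T * N0 / (2 * math.pi) * integral)
assert abs(gamma - 1.0 / N0) < 1e-9    # maximum SNR 1/N0
```

Replacing `folded` with a characteristic that dips toward zero makes the integral blow up and the SNR collapse, which is the noise-enhancement behavior described above.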
`
Let us now turn our attention to an equalizer having (2K + 1) taps. Since
c_j = 0 for |j| > K, the convolution of {f_n} with {c_n} is zero outside the range
−K ≤ n ≤ K + L − 1. That is, q_n = 0 for n < −K and n > K + L − 1. With
q_0 normalized to unity, the peak distortion is

D = Σ_{n=−K, n≠0}^{K+L−1} |q_n| = Σ_{n=−K, n≠0}^{K+L−1} | Σ_{j=−K}^{K} c_j f_{n−j} |    (6.4.22)

Although the equalizer has (2K + 1) adjustable parameters, there are 2K + L
nonzero values in the response {q_n}. Therefore it is generally impossible to
eliminate completely the intersymbol interference at the output of the equalizer.
`
`
`
`
`
`
There is always some residual interference when the optimum coefficients are
used. The problem is to minimize D with respect to the coefficients {c_j}.
The peak distortion given by (6.4.22) has been shown by Lucky (1965) to be
a convex function of the coefficients {c_j}. That is, it possesses a global minimum
and no relative minima. Its minimization can be carried out numerically using,
for example, the method of steepest descent. Little more can be said for the
general solution to this minimization problem. However, for one special but
important case, the solution for the minimization of D is known. This is the case
in which the distortion at the input to the equalizer, defined as

D_0 = (1/|f_0|) Σ_{n=1}^{L} |f_n|    (6.4.23)

is less than unity. This condition is equivalent to having the eye open prior to
equalization. That is, the intersymbol interference is not severe enough to close
the eye. Under this condition, the peak distortion D is minimized by selecting the
equalizer coefficients to force q_n = 0 for 1 ≤ |n| ≤ K and q_0 = 1. In other
words, the general solution to the minimization of D, when D_0 < 1, is the
zero-forcing solution for {q_n} in the range 1 ≤ |n| ≤ K. However, the values of
{q_n} for K + 1 ≤ n ≤ K + L − 1 are nonzero, in general. These nonzero values
constitute the residual intersymbol interference at the output of the equalizer.
The zero-forcing solution when D_0 < 1 can be obtained in practice with the
following steepest-descent recursive algorithm for adjusting the equalizer
coefficients {c_j}:
`
c_j^{(k+1)} = c_j^{(k)} + Δ ε_k I*_{k−j},   j = −K, …, −1, 0, 1, …, K    (6.4.24)

where c_j^{(k)} is the value of the jth coefficient at time t = kT, ε_k = I_k − Î_k is the
error signal at time t = kT, and Δ is a scale factor that controls the rate of
adjustment, as will be explained later in this section. This is the zero-forcing
algorithm. In effect, by cross-correlating the error sequence {ε_k} with the desired
sequence {I_k} we attempt to force the cross correlations ε_k I*_{k−j}, j = −K, …, K,
to zero. In other words, the coefficients are optimally adjusted when the error is
orthogonal to the sequence {I_k}. The demonstration that this leads to the desired
solution is quite simple. We have

E(ε_k I*_{k−j}) = E[(I_k − Î_k) I*_{k−j}]
              = E(I_k I*_{k−j}) − E(Î_k I*_{k−j}),   j = −K, …, K    (6.4.25)

We assume that the information symbols are uncorrelated, i.e., E(I_k I_j*) = δ_kj,
and that the information sequence {I_k} is uncorrelated with the additive noise
sequence {η_k}. For Î_k we use the expression