A Viterbi Algorithm With Soft-Decision Outputs and its Applications

Joachim Hagenauer
Peter Hoeher

German Aerospace Research Establishment (DLR)
Institute for Communications Technology
D-8031 Oberpfaffenhofen, West Germany
Tel. ++49/8153/28-893, Fax ++49/8153/28-243

schemes. An additional outer coding system could use convolutional codes with Viterbi decoding to perform forward error-correction (FEC) decoding.

There are normally two drawbacks with such a solution: first, the inner VA for demodulation produces bursts of errors, against which the outer VA is very sensitive; second, the inner VA produces hard decisions, preventing the outer VA from using its capability to accept soft decisions. The first drawback can be eliminated by interleaving between the inner and outer VA. To eliminate the second drawback, the inner VA needs to output soft decisions, i.e., reliability information. This should improve the performance of the outer VA considerably.

Another important situation where a similar problem arises is when convolutional codes for FEC are used on channels requiring equalization. This is the case in the future pan-European mobile radio system (GSM) [4]. The normal Viterbi equalizer produces only hard decisions, leading to a reduced performance of the outer VA performing the FEC.

The performance of the above-mentioned systems, and of other systems such as multistage Viterbi schemes, FEC/ARQ schemes, etc., will improve by using a Soft-Output Viterbi Algorithm (SOVA). This is a VA which uses soft (or hard) decisions to calculate its metrics, but also decides in a soft way by providing reliability information together with the output bits. The reliability information can be the log-likelihood function. We wish to modify the VA as little as possible; the goal is to add a soft-deciding unit to the Viterbi receiver.

In earlier work, Forney considered "augmented outputs" from the VA itself [1], such as the depth at which all paths are merged, the difference in length between the best two paths, and a list of the p best paths. The last topic was generalized in [5]. Ya-

Abstract

The Viterbi algorithm (VA) is modified to deliver not only the most likely path sequence in a finite-state Markov chain, but also either the a posteriori probability for each bit or a reliability value. With this reliability indicator the modified VA produces soft decisions to be used in the decoding of outer codes. The inner Soft-Output Viterbi Algorithm (SOVA) accepts and delivers soft sample values and can be regarded as a device for improving the SNR, similar to an FM demodulator. Several applications are investigated to show the gain over the conventional hard-deciding VA, including concatenated convolutional codes, concatenation of convolutional and simple block codes, concatenation of Trellis-Coded Modulation with convolutional FEC codes, and coded Viterbi equalization. For these applications we found additional gains of 1-4 dB as compared to the classical hard-deciding algorithms. For comparison, we also investigated the more complex symbol-by-symbol MAP algorithm, whose optimal a posteriori probabilities can be transformed into soft outputs.

1 Introduction

The Viterbi algorithm [1] has become a standard tool in communication receivers, performing such functions as demodulation, decoding, equalization, etc. An increasing number of applications use two VAs in a concatenated way. Examples are coded modulation systems without bandwidth expansion, such as coded quadrature amplitude modulation (QAM) [2] and continuous phase modulation (CPM) [3]. In these systems Viterbi receivers replace classical modulation

47.1.1.

CH2682-3/89/0000-1680 $1.00 © 1989 IEEE

where u_k^{(m)} = ±1 is the k-th symbol of the m-th information sequence and û_k is the hard (±1) decision of the first Viterbi detector. We will call the first-stage VA a Soft-Output Viterbi Algorithm (SOVA) because it delivers soft decisions

$$\hat{\Lambda}_k = \hat{u}_k\,\hat{L}_k, \qquad (3)$$

$$\hat{L}_k = \log\frac{1-\hat{p}_k}{\hat{p}_k} \ge 0 \qquad (4)$$

to be processed by the next ML-detector stage.
`
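The mapping (3)-(4) from an estimated error probability to a signed soft value can be sketched in a few lines (the helper names are ours, not from the paper):

```python
import math

def reliability(p_hat: float) -> float:
    """Eqn. (4): L_k = log((1 - p_k)/p_k), nonnegative for p_k <= 0.5."""
    return math.log((1.0 - p_hat) / p_hat)

def soft_output(u_hat: int, p_hat: float) -> float:
    """Eqn. (3): signed soft decision Lambda_k = u_k * L_k, u_k in {-1, +1}."""
    return u_hat * reliability(p_hat)

# A confident +1 decision carries a large positive value,
# an unreliable -1 decision a value near zero.
confident = soft_output(+1, 0.01)   # ~ +4.6
shaky = soft_output(-1, 0.45)       # ~ -0.2
```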
2.2 Algorithms

Given a finite-state discrete-time Markov process observed in white noise, the VA is the optimal recursive algorithm: it provides the state sequence with the lowest cost. The VA is optimal in the ML-sequence sense. It determines the symbol sequence û = {û_k} that maximizes the log-likelihood function log p(y | û).

2.2.1 The Soft-Output Symbol-by-Symbol MAP Algorithm

There exists an optimum general algorithm providing the a posteriori probabilities (APPs) for each bit to be decided [10], [1, Appendix]. This symbol-by-symbol MAP algorithm was originally developed to minimize the bit-error probability instead of the sequence-error probability. The algorithm seems less attractive than the VA due to its increased complexity. However, we easily obtain the optimal APPs P(u_k | y) for each bit to be decided. A soft decision in the form of the log-likelihood ratio is obtained as

$$\hat{\Lambda}_k = \log\frac{P(\hat{u}_k \mid \mathbf{y})}{1 - P(\hat{u}_k \mid \mathbf{y})}, \qquad (5)$$

where û_k is the MAP estimate for which P(u_k | y) is maximum. The probability P(u_k | y) can be calculated by a forward and a backward recursion following [10] and [1].

2.2.2 The Soft-Output Viterbi Algorithm (SOVA)

We shall first give a motivation for the algorithm before we state it formally. For the sake of simplicity we restrict ourselves to trellises with two branches ending in each node. This includes a K/N code punctured from a 1/N code, because it still uses the trellis of the 1/N code [11], [12]. The number of states is S = 2^ν, where ν is the code memory.

mamoto et al. derived a simple indicator for block errors by introducing labels [6]. This scheme is restricted to requesting a retransmission of the whole block. Schaub et al. [7] took up the ideas of Forney by declaring an erasure on those bits to be decided for which the metric difference at the point of merging never exceeds a threshold.

2 The Soft-Output Symbol-by-Symbol MAP and Viterbi Algorithm (SOVA)

2.1 Detection with Reliability Information

We assume that in the receiver chain a Viterbi detector is used. This could be a Viterbi equalizer, a Viterbi demodulator (i.e., for CPM or Trellis-Coded QAM), or the Viterbi decoder for an inner convolutional code. This device is followed by a second-stage detector, which could be a demodulator or decoder after the equalizer, a decoder after the demodulator, an outer decoder after the inner decoder, or a source decoder. We assume that this second device can improve its performance if some reliability information is available in addition to the hard decisions from the first stage.

A straightforward way is shown in Fig. 1. The first-stage detector delivers estimates û' of the symbol sequence u' by processing the received sequence y in the Viterbi detector. We further want the detector to deliver, for each symbol, an estimate of the probability that this symbol has been incorrectly detected:

$$\hat{p}'_k = \text{Prob}\{\hat{u}'_k \ne u'_k \mid \mathbf{y}\}. \qquad (1)$$

$$M^{(m)} = \sum_k u_k^{(m)}\,\hat{u}_k \log\frac{1-\hat{p}_k}{\hat{p}_k}. \qquad (2)$$
`
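As a sketch, the metric (2) is simply a correlation of the candidate sequence with the hard decisions, each weighted by its log-likelihood ratio (variable names are illustrative, not from the paper):

```python
import math

def ml_metric(candidate, hard, p_err):
    """Eqn. (2): sum of u_k^(m) * u_hat_k * log((1 - p_k)/p_k)
    over the block; all bits are in {-1, +1}."""
    return sum(u * uh * math.log((1.0 - p) / p)
               for u, uh, p in zip(candidate, hard, p_err))

hard  = [+1, -1, +1]        # first-stage hard decisions
p_err = [0.01, 0.45, 0.10]  # their estimated error probabilities
# Disagreeing with a reliable position costs much more than
# disagreeing with an unreliable one.
m_a = ml_metric([+1, +1, +1], hard, p_err)  # flips only the shaky bit
m_b = ml_metric([-1, -1, +1], hard, p_err)  # flips the most reliable bit
```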
Since the VA of the first stage produces error events, and therefore correlated errors in û'_k and correlated values L̂'_k that might degrade the performance of the next stage, we apply sufficient deinterleaving to achieve statistical independence. A matching interleaving device is applied at the transmitter side. In the notation we drop the primes. At the dashed line A-A' in Fig. 1, the first-stage detector delivers symbols û_k with statistically independent error probabilities p̂_k. For the second-stage detector the channel is now a discrete (binary) memoryless compound channel [8] with output pairs (û_k, p̂_k). If the second-stage detector performs maximum-likelihood (ML) detection, an optimum ML metric is given by (2) [9].

Assume that the classical VA makes a final decision with delay δ, δ being large enough so that all 2^ν survivor paths have merged with sufficiently high probability. As shown in Fig. 2, the VA has to select a survivor for state s_k, 0 ≤ s_k ≤ S − 1 = 2^ν − 1, at time k. It does so by selecting the path with the smallest ML metric, which for the AWGN channel is

$$M_m = \sum_{j=k-\delta}^{k}\sum_{n=1}^{N}\frac{E_s}{N_0}\left(y_{jn}-x_{jn}^{(m)}\right)^2, \quad m = 1, 2, \qquad (6)$$

where x_{jn}^{(m)} is the n-th bit of the N bits on the branch of the m-th path at time j, y_{jn} is the received value at the same position, and E_s/N_0 is the signal-to-noise ratio (SNR). With this form we have

$$\text{Prob}\{\text{path } m\} \sim e^{-M_m}, \quad m = 1, 2. \qquad (7)$$
`
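Numerically, (7) and the normalization over the two merging paths look like this (a toy sketch with made-up metric values):

```python
import math

# Metrics of the two paths merging in a node (smaller = better), cf. Eqn. (6).
M1, M2 = 2.0, 5.0

# Eqn. (7): Prob{path m} ~ exp(-M_m); normalize over both candidates.
w1, w2 = math.exp(-M1), math.exp(-M2)
p_path1 = w1 / (w1 + w2)
p_path2 = w2 / (w1 + w2)  # equals 1/(1 + exp(M2 - M1))
```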
We label the path with the smaller metric by m = 1. This means M_1 ≤ M_2, which implies that the VA selects path 1 (neglecting ties). Then, the probability of selecting the wrong survivor path is

$$p_{s_k} = \frac{e^{-M_2}}{e^{-M_1}+e^{-M_2}} = \frac{1}{1+e^{M_2-M_1}} = \frac{1}{1+e^{\Delta}} \qquad (8)$$

with Δ = M_2 − M_1 ≥ 0. p_{s_k} approaches 0.5 if M_1 ≈ M_2, and 0 if M_2 − M_1 ≫ 1. With probability p_{s_k} the VA has made errors in all the e positions where the information bits of path 2 differ from path 1; in other words, if

Using (8), (10), and (11), we obtain after some calculation the update rule (12). A good approximation of (12) is

$$f(\hat{L}_j, \Delta) = \min(\hat{L}_j, \Delta), \qquad (14)$$

which requires no table and no knowledge of the SNR. The Soft-Output Viterbi Algorithm (SOVA) can now be formulated; only the steps marked by (*) augment the classical Viterbi algorithm.
`
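Before the formal statement, the exact update (12) can be checked against the min approximation (14) numerically; with α = 1 the gap is bounded by log 2 (a quick sketch):

```python
import math

def f_exact(L, delta):
    """Eqn. (12) with alpha = 1."""
    return math.log((1.0 + math.exp(L + delta)) /
                    (math.exp(delta) + math.exp(L)))

def f_min(L, delta):
    """Eqn. (14): table-free approximation."""
    return min(L, delta)

pairs = [(0.5, 4.0), (3.0, 3.2), (6.0, 1.0)]
gaps = [f_min(L, d) - f_exact(L, d) for L, d in pairs]
# The approximation always overestimates slightly, most when L and
# delta are close, and is tight when they are well separated.
```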
Storage:
For each state s_k, 0 ≤ s_k ≤ S − 1, store (time index modulo δ + 1):
û(s_k) = {û_{k−δ}(s_k), …, û_k(s_k)} (hard-decision values, û ∈ {±1})
(*) L̂(s_k) = {L̂_{k−δ}(s_k), …, L̂_k(s_k)} (soft reliability values, 0 ≤ L̂ ≤ ∞)
Γ(s_k) (accumulated metric values)

$$u_j^{(1)} \ne u_j^{(2)}, \qquad j = j_1, \ldots, j_e. \qquad (9)$$

$$\hat{L}_j \leftarrow f(\hat{L}_j, \Delta) = \frac{1}{\alpha}\,\log\frac{1 + e^{\alpha\hat{L}_j + \Delta}}{e^{\Delta} + e^{\alpha\hat{L}_j}} \qquad (12)$$

with Δ = M_2 − M_1 ≥ 0 and j = j_1, …, j_e. The function f(L̂_j, Δ) should be tabulated with L̂_j and Δ as input variables and need not be calculated at each step. The factor α prevents overflow with increasing SNR. A proper choice to achieve asymptotically E[L̂_j] = 1 is

$$\alpha \propto d_{\text{free}}\,\frac{E_s}{N_0}, \qquad (13)$$

where d_free is the free distance of the code.

Recursion:

a) Classical Viterbi step:
For each state s_k, compute
Γ(s_{k−1}, s_k) = Γ(s_{k−1}) + Σ_{n=1}^{N} (E_s/N_0) (y_{kn} − x_{kn})²
for both transitions (s_{k−1}, s_k).
Find Γ(s_k) = min Γ(s_{k−1}, s_k).
Store Γ(s_k) and the corresponding survivor û(s_k).

b) Soft-deciding update (*):
For each state s_k:
Store Δ = max Γ(s_{k−1}, s_k) − min Γ(s_{k−1}, s_k).
Initialize L̂_k(s_k) = +∞.
For j = k − ν down to j = k − δ_m:
compare the two paths merging in s_k;
if û_j^{(1)} ≠ û_j^{(2)}, then update L̂_j := f(L̂_j, Δ).
`
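Step b) in isolation can be sketched as follows; the survivor bookkeeping around it is assumed, and the min approximation (14) stands in for the tabulated f:

```python
def soft_update(bits_survivor, bits_competitor, rel, delta):
    """Soft-deciding update of step b): wherever the two paths merging in a
    state disagree on an information bit, cap that bit's stored reliability
    at the metric difference delta (min approximation, Eqn. (14))."""
    for j, (b1, b2) in enumerate(zip(bits_survivor, bits_competitor)):
        if b1 != b2:
            rel[j] = min(rel[j], delta)
    return rel

INF = float("inf")
survivor   = [+1, -1, +1, +1]
competitor = [+1, +1, -1, +1]      # differs in positions 1 and 2
rel = [5.0, INF, 2.0, INF]         # reliabilities along the survivor
rel = soft_update(survivor, competitor, rel, delta=3.0)
# -> [5.0, 3.0, 2.0, inf]: only the differing positions are touched,
#    and none is ever raised above delta.
```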
Positions where u_j^{(1)} = u_j^{(2)} are not affected by the survivor decision. Let δ_m be the length of those two paths until they merge. Then we have e differing information values and δ_m − e nondiffering values. Assume we have stored the probabilities p̂_j of previous erroneous decisions on path 1. Under the assumption that path 1 has been selected, we can update these probabilities for the e differing decisions on this path according to

$$\hat{p}_j \leftarrow \hat{p}_j\,(1 - p_{s_k}) + (1 - \hat{p}_j)\,p_{s_k}, \quad j = j_1, \ldots, j_e, \qquad (10)$$

with 0 ≤ p̂_j ≤ 0.5. This formula requires statistical independence between the random variables p̂_j and p_{s_k}, which is approximately true for most practical codes. The recursion can be performed directly on the log-likelihood ratio

$$\hat{L}_j = \log\frac{1-\hat{p}_j}{\hat{p}_j}, \quad 0 \le \hat{L}_j < \infty. \qquad (11)$$

Technical Realization:

With an n_s-bit soft decision and fixed-point arithmetic, each survivor path of length δ consists of n_s · δ bits. The first of the n_s bits is the sign bit, i.e., the hard-decision bit. The set of likelihood values is then L̂_k ∈ {0, 1, …, 2^{n_s−1} − 1}; L̂_k = 0 indicates the most unreliable value and L̂_k = 2^{n_s−1} − 1 the most reliable value. Given the metric difference Δ, quantized with n_Δ bits, the likelihood update is done with a table look-up. The table is calculated only once using Eqn. (12) and is stored in a ROM. Thus the extra effort of the SOVA relative to the VA is:

• Storage:
  − 2^ν · δ · n_s bits instead of 2^ν · δ bits.
  − Look-up table with 2^{n_Δ + n_s − 1} vectors, each with n_s − 1 bits (unnecessary when using (14)).

• Computational complexity:
  − Maximally 2^ν · (δ − ν) bit comparisons.
  − 2^ν · e updates of L̂_j.
`
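The ROM described above can be generated once at design time; a sketch with assumed word lengths (n_s = 4, n_Δ = 4), α = 1, and unit quantization steps:

```python
import math

N_S, N_DELTA = 4, 4            # assumed soft-value / metric-difference widths
L_MAX = 2 ** (N_S - 1) - 1     # largest storable reliability level (7)

def f_exact(L, delta):
    """Eqn. (12) with alpha = 1 and unit step size."""
    return math.log((1.0 + math.exp(L + delta)) /
                    (math.exp(delta) + math.exp(L)))

# rom[delta][L] holds the updated, re-quantized reliability.
rom = [[min(L_MAX, round(f_exact(L, delta)))
        for L in range(L_MAX + 1)]
       for delta in range(2 ** N_DELTA)]
```

Since f(L, Δ) never exceeds min(L, Δ), the table entries never exceed the min approximation (14); a hardware PAU would address this ROM with the quantized (L̂_j, Δ) pair.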
The SOVA can be implemented in a pipelined structure [13] (clocked with symbol rate 1/T), see Fig. 3. Therefore, high-speed implementation is possible. The Transition Metric Unit (TMU) and the Add-Compare-Select (ACS) unit remain unchanged. The novelty is a so-called Path Update Unit (PAU). The PAU "slides" over the stored (hard) information bits in the path RAM for the two paths diverging at time k and remerging at time k − δ_m, as shown in Fig. 2. The range where the information bits u^{(1)} and u^{(2)} can differ is j = k − δ_m to j = k − ν. If the bits differ, a table look-up is enabled to perform the update according to Eqn. (12) or (14). If the PAU is implemented in parallel, the ACS unit remains the bottleneck. Therefore soft-deciding decoding does not necessarily limit the speed.

2.2.3 Input-Output SNR Conversion

The time-discrete analog output of the SOVA at a given point in time is a random variable, for which we can define an output SNR as the ratio m²/(2σ²). This random variable is not always exactly Gaussian. However, for the next-stage ML algorithm that decodes a code with distance d, the sum Σ_{k=1}^{d} Λ̂_k is the relevant decision variable. Because of the central

3 Applications

This novel receiver performs better than the conventional Viterbi decoder, demodulator, or equalizer whenever some form of concatenation is applied. This includes

• modulations with memory, e.g. Trellis-Coded Modulation (TCM), Continuous Phase Modulation (CPM) such as Tamed FM (TFM), Gaussian MSK, etc.;

• channels with memory, e.g. filter channels with intersymbol interference (ISI), frequency-selective channels, or storage media with memory like magnetic recording;

• coding with memory, e.g. convolutional codes;

and all possible combinations thereof.

3.1 Multistage Coding

3.1.1 Convolutional Inner and Outer Codes

Therefore the output SNR = m²/(2σ²) is a useful figure of merit of the SOVA. This result can be interpreted in such a way that the input SNR (for the innermost section: the SNR of the channel) is converted into an output SNR. For closer examination we have plotted in Fig. 4 the simulated input/output conversion of the SNR for the best known R = 1/3 and R = 1/2 convolutional codes with memory ν = 3. Simulation results with the symbol-by-symbol MAP algorithm show only minor differences of less than 0.3 dB.

The SNR input/output conversion is similar to that of analog FM. Similar to the FM threshold we can define a SOVA threshold. Enlarging the code distance by decreasing the code rate is analogous to an increased modulation index in FM modulation: in both cases we exchange bandwidth for output SNR. The common way to interpret channel coding is as a means for minimizing the BER; concatenated coding is then viewed as a way to clean up the residual errors [14]. The observation of the preceding paragraph leads to a new interpretation of channel coding as a means for improving the SNR, since we have analog discrete-time signals at the input and output of the inner sections, and perform hard decisions possibly only in the outermost section.

limit theorem this sum is almost Gaussian and generates sequence errors with probability

$$P_d = \text{Prob}\!\left(\sum_{k=1}^{d} \hat{\Lambda}_k < 0\right) = \frac{1}{2}\,\text{erfc}\,\sqrt{d\,\frac{m^2}{2\sigma^2}}. \qquad (15)$$
`
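Equation (15) can be spot-checked with a Monte Carlo run over sums of d Gaussian soft values (the parameter values below are arbitrary):

```python
import math
import random

def p_d_formula(d, m, sigma):
    """Eqn. (15): probability that a sum of d soft values ~ N(m, sigma^2)
    falls below zero."""
    return 0.5 * math.erfc(math.sqrt(d * m * m / (2.0 * sigma * sigma)))

def p_d_simulated(d, m, sigma, trials=200_000, seed=1):
    rng = random.Random(seed)
    errors = sum(1 for _ in range(trials)
                 if sum(rng.gauss(m, sigma) for _ in range(d)) < 0.0)
    return errors / trials

d, m, sigma = 4, 1.0, 1.0
analytic = p_d_formula(d, m, sigma)    # = Q(2), about 2.3e-2
simulated = p_d_simulated(d, m, sigma)
```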
Multistage or concatenated coding [15] with inner and outer convolutional codes seems very promising.

conventional Viterbi demodulator of Coded-8PSK delivers only hard decisions. With the SOVA we should expect a gain in the next decoding stage.

We have investigated the performance of the 4-state rate-2/3 feedforward Ungerboeck code [2] with an 8PSK signal constellation and natural binary mapping. This code has a coding gain of 3 dB. The SOVA and the MAP work similarly to Sections 2.2.1 and 2.2.2, except that we have to modify the transition metric unit and treat the coded information bit and the uncoded bit (parallel transition) separately. In Fig. 7 we have plotted the Monte Carlo simulation results for the input/output SNR conversion for the SOVA. The plots show results similar to those for the convolutional codes. Note that the parallel bit reaches the asymptotic gain of 3 dB. The asymptotic SNR improvement of the coded bit is about 4.7 dB. An outer FEC with ML decoding would now work with the improved SNR.

3.2 Viterbi Equalizer and Coding

Equalization is a challenging problem in high-speed digital communications over time-dispersive channels, e.g. mobile channels. It is well established that the Viterbi equalizer performs the desired ML sequence estimation [18]. However, in coded systems the problem is obvious, since the Viterbi equalizer delivers hard decisions to the outer decoder.

In this section we investigate the SOVA on frequency-selective fading channels. For the channel we assume a tapped-delay-line representation with L + 1 independent Rayleigh tap gains. This channel, an idealization of the mobile multipath channel, can be viewed as the inner code. Now, the modified Gaussian likelihood metric reads (compare to (6))

$$M_m = \sum_k \frac{E_s(k)}{N_0}\,\Bigl|\,y_k - \sum_{l=0}^{L} f_k^{(l)}\,x_{k-l}^{(m)}\Bigr|^2, \qquad (16)$$

where x_{k−l}^{(m)} is the symbol of the m-th path corresponding to the trellis, f_k^{(l)} is the l-th tap gain, 0 ≤ l ≤ L, and y_k is the received value at time k; x^{(m)}, y, and f^{(l)} are denoted in complex notation. E_s(k)/N_0 is the instantaneous SNR. As outer code we chose the rate-1/2 convolutional code with memory ν = 4 as proposed for GSM [4]. The results in Fig. 8, simulated for L + 1 independent, Rayleigh-distributed tap gains, show the diversity gain for several values of L. However, more important in this context is the strong gain on the order of 4 dB at P_b = 10^{−3} due to the soft-output VA. Similar results have been derived for

The inner code uses the soft-quantized received samples, possibly supported by channel state information in the case of channels with memory. With the algorithms above, the outer decoder is also able to perform soft-decision ML decoding. First, we use the R = 1/2, ν = 3 code with the input/output SNR conversion presented in the last section as the inner code, and the punctured R = 2/3, ν = 3 code [11] as the outer code. Both codes are separated by proper interleaving. The overall code rate is R = R_i · R_o = 1/3.

For the simulations we used the SOVA and the MAP without significant differences. It is advantageous to use the same VA for the 1/2 and 2/3 punctured codes, with the PAU turned on only for the inner decoder. In [16] we presented a structure which also shares the interleaver in an efficient way and allows unequal error protection. The results are shown in Fig. 5. We have plotted the performance of the concatenated system derived from Monte Carlo simulations. For comparison we have also plotted the performance curves for the best known R = 1/3 codes with memory ν = 3, 4, and 6.

The question of optimal rate sharing between inner and outer codes has been addressed elsewhere [17].

`
`
3.1.2 Convolutional Codes and Outer Parity-Check Codes

ML decoding of block codes is not yet available in an elegant way. However, we give an example showing that, with simple algorithms, even parity-check codes gain from the soft decisions of the inner VA.

Suppose that a message of length N bits is tailed by one bit, resulting in a rate R_o = N/(N + 1) parity-check code. This code acts as the outer code. Sufficient interleaving and convolutional (inner) encoding is the second step of the encoding procedure. At the receiver we perform soft-decision decoding. A parity check is computed over the sign bits of the deinterleaved samples. If the parity check fails, we toggle the sign of the bit with the maximum probability of error p̂_k, which is the bit with the lowest magnitude L̂_k. Finally, we output the signs as the hard decisions. We have plotted the curves for R_i = 1/2 and R_o = 8/9 in Fig. 6. An additional gain of 1.5 dB at a BER of 10^{−6} was obtained.
`
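The decoding rule just described (parity over the sign bits; on failure, toggle the least reliable sign) fits in a few lines (a sketch; even parity assumed):

```python
def decode_parity_soft(soft):
    """Soft-decision decoding of an (N+1, N) single parity-check code.
    `soft` holds deinterleaved soft values: sign = hard decision,
    magnitude = reliability. Even parity: the +/-1 signs multiply to +1."""
    hard = [1 if x >= 0 else -1 for x in soft]
    parity = 1
    for b in hard:
        parity *= b
    if parity != 1:
        # Toggle the sign of the bit most likely to be in error,
        # i.e. the one with the smallest magnitude.
        worst = min(range(len(soft)), key=lambda i: abs(soft[i]))
        hard[worst] = -hard[worst]
    return hard

# The weak, wrong position (index 2) is corrected by the parity bit.
received = [4.2, -3.7, 0.3, 5.1, 2.8]   # signs: + - + + +  -> parity fails
decoded = decode_parity_soft(received)  # -> [1, -1, -1, 1, 1]
```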
3.1.3 Coded Modulation and Outer FEC

Trellis-Coded Modulation [2] has gained strong interest in recent years because it is power- and bandwidth-efficient. For example, 4PSK modulation can be replaced by Coded-8PSK to gain 3 to 6 dB without bandwidth expansion. The drawback is that the

trellis codes as outer codes [19]. As expected, the worse the channel, the greater the gain due to soft decisions.

References

[1] G.D. Forney, "The Viterbi Algorithm," Proc. of the IEEE, Vol. 61, No. 3, pp. 268-278, Mar. 1973
[2] G. Ungerboeck, "Channel Coding with Multilevel/Phase Signals," IEEE Trans. Inf. Theory, Vol. IT-28, No. 1, pp. 55-67, Jan. 1982
[3] J.B. Anderson, T. Aulin, C.-E. Sundberg, Digital Phase Modulation, New York: Plenum Publishing Co., 1986
[4] Conference record of the "Digital Cellular Radio Conference," Hagen, FernUniversität, West Germany, Oct. 1988
[5] T. Hashimoto, "A List-Type Reduced-Constraint Generalization of the Viterbi Algorithm," IEEE Trans. Inf. Theory, Vol. IT-33, No. 6, pp. 866-876, Nov. 1987
[6] H. Yamamoto, K. Itoh, "Viterbi Decoding Algorithm for Convolutional Codes with Repeat Request," IEEE Trans. Inf. Theory, Vol. IT-26, No. 5, pp. 540-547, Sep. 1980
[7] T. Schaub, J.W. Modestino, "An Erasure Declaring Viterbi Decoder and its Application to Concatenated Coding Systems," ICC '86, IEEE Cat. No. CH2314-3/86, pp. 1612-1615, 1986
[8] R.G. Gallager, Information Theory and Reliable Communication, New York: John Wiley & Sons, 1968
[9] J. Hagenauer, "Viterbi Decoding of Convolutional Codes for Fading and Burst Channels," Proc. of the 1980 Zurich Seminar on Digital Communications, IEEE Cat. No. 80CH1521-4 COM, pp. G2.1-G2.7, 1980
[10] L.R. Bahl, J. Cocke, F. Jelinek, J. Raviv, "Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate," IEEE Trans. Inf. Theory, Vol. IT-20, pp. 284-287, Mar. 1974
[11] Y. Yasuda, K. Kashiki, Y. Hirata, "High Rate Punctured Convolutional Codes for Soft Decision Viterbi Decoding," IEEE Trans. Commun., Vol. COM-32, pp. 315-319, Mar. 1984
[12] J. Hagenauer, "Rate-Compatible Punctured Convolutional Codes (RCPC Codes) and their Applications," IEEE Trans. Commun., Vol. 36, No. 4, pp. 389-400, Apr. 1988
[13] J. Stahl, H. Meyr, M. Oerder, "Implementation of a High Speed Viterbi Decoder," EUSIPCO '86, conf. rec., pp. 1117-1120, 1986
[14] J.L. Massey, "The How and Why of Channel Coding," Proc. of the 1984 Zurich Seminar on Digital Communications, IEEE Cat. No. 84 CH 1998-4, pp. 67-73, 1984
[15] G.D. Forney, Concatenated Codes, Cambridge: M.I.T. Press, 1966
[16] J. Hagenauer, "Unequal Error Protection (UEP) for Statistically Time-Varying Channels," Proc. ITG-Conference 'Stochastic Models and Methods in Information Technology', Nürnberg, West Germany, ITG-Fachbericht No. 107, VDE Verlag Berlin, pp. 253-262, Apr. 1989
[17] J. Hagenauer, P. Hoeher, "Concatenated Viterbi-Decoding," Proc. of the 4th Joint Swedish-Soviet Int. Workshop on Information Theory, Gotland, Sweden, Studentlitteratur Lund, 27 Aug.-1 Sep. 1989
[18] G.D. Forney, "Maximum-Likelihood Sequence Estimation of Digital Sequences in the Presence of Intersymbol Interference," IEEE Trans. Inf. Theory, Vol. IT-18, No. 3, pp. 363-378, May 1972
[19] P. Hoeher, "Detection of Uncoded and Trellis-Coded PSK-Signals on Frequency-Selective Fading Channels," Proc. ITG-Conference 'Stochastic Models and Methods in Information Technology', Nürnberg, West Germany, ITG-Fachbericht No. 107, VDE Verlag Berlin, pp. 225-232, Apr. 1989

Figure 1: Detector with Reliability Information
Figure 2: Example of the SOVA

Figure 3: Block Diagram of the SOVA with Pipelining and the Path Update Unit (PAU)
Figure 4: SNR Input/Output Conversion for Various Convolutional Codes (axes: E_s/N_0 in dB vs. output SNR in dB)
Figure 5: Simulated BER for Concatenated Convolutional Codes (axis: E_b/N_0 in dB)
Figure 6: BER for the R_i = 1/2 Inner Convolutional Code with SOVA and the R_o = 8/9 Outer Parity-Check Code (axis: E_b/N_0 in dB)
Figure 7: Input/Output SNR Conversion for 4-State Trellis-Coded Modulation (ν = 2, Coded-8PSK, feedforward realization, natural binary mapping)
Figure 8: BER for Coded Equalization on a Frequency-Selective Fading Channel (L + 1 Rayleigh taps of equal average energy, 16-state convolutional code, soft-output VA)