`
`FOR CHANNELS WITH MEMORY
`
`by
`
`CUONG HON LAI
`
`B. A. Sc., University of British Columbia, 1989
`
`A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF
`
`THE REQUIREMENTS FOR THE DEGREE OF
`
`MASTER OF APPLIED SCIENCE
`
`in
`
`THE FACULTY OF GRADUATE STUDIES
`
`DEPARTMENT OF ELECTRICAL ENGINEERING
`
`We accept this thesis as conforming
`
`to the required standard
`
`
`
THE UNIVERSITY OF BRITISH COLUMBIA
`
`November 1992
`
`© Cuong Hon Lai, 1992
`
`
`
`
In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

(Signature)

Department of Electrical Engineering

The University of British Columbia
Vancouver, Canada
`
`
`
`Abstract
`
Many digital communication channels are affected by errors that tend to occur in bursts. A great deal of work has been devoted to finding good burst-error-correcting codes and developing burst-error-correcting schemes. However, burst-error-correcting codes are generally useless for long bursts. Some burst-error-correcting schemes suffer long delays in decoding. Others are very sensitive to random errors in the guard space. Most of these schemes are not adaptive to channel conditions. In this thesis, two new schemes are proposed to overcome these drawbacks. The proposed schemes are analyzed over a two-state Markov chain channel model. Both schemes employ a combination of two codes. In the first scheme, one of the codes is used for random error correction and for burst detection, while the other is used only for burst recovery. In the second scheme, one of the codes is used for burst detection and for channel state estimation, and both codes are used for error correction. Unlike existing burst-error-correcting schemes, the proposed schemes are shown to be adaptive to channel conditions and less sensitive to errors in the guard space. For the same delay, the proposed schemes offer better performance than interleaving schemes. When the channel is heavily corrupted by bursts, the improvement is even more pronounced.
`
`
`
`Table of Contents
`
Abstract ........ ii
List of Figures ........ vi
Acknowledgement ........ viii

1. Introduction ........ 1

2. Preliminaries ........ 4
   2.1. Channel Models ........ 4
        2.1.1. The Binary Symmetric Channel Model ........ 4
        2.1.2. The Gilbert-Elliott Channel Model ........ 5
   2.2. Convolutional Codes ........ 7
        2.2.1. Encoding of Convolutional Codes ........ 7
        2.2.2. Decoding of Convolutional Codes ........ 11
        2.2.3. Punctured Convolutional Codes ........ 17
   2.3. Performance of Viterbi Decoding on a Gilbert-Elliott Channel ........ 18
        2.3.1. The Sliding Window Decoding (SWD) Bound for Decoded BER on a BSC ........ 19
        2.3.2. Bound for Viterbi Decoded BER on a Gilbert-Elliott Channel ........ 22

3. Existing Burst-Error-Correcting Schemes ........ 25
   3.1. Schemes with Interleaved Codes ........ 25
   3.2. Gallager's Burst-Finding Scheme ........ 27
        3.2.1. Encoding Circuit ........ 27
        3.2.2. Decoding Operations ........ 27
   3.3. Schlegel and Herro's Scheme ........ 32
        3.3.1. Burst Detection Procedure (BDP) ........ 32
        3.3.2. Decoding Operations ........ 34
   3.4. Limitations of Existing Burst-Error-Correcting Schemes ........ 36

4. The Proposed Adaptive Scheme 1 ........ 39
   4.1. Encoding Circuit ........ 39
   4.2. Decoding Operations ........ 40
   4.3. Analysis of Decoded BER ........ 42
        4.3.1. Probability of Undetected Errors ........ 43
        4.3.2. Probability of Decoded Errors Due to Detected Bursts ........ 45
   4.4. Simulation and Numerical Results ........ 48
        4.4.1. Approximation and Simulation Results ........ 49
        4.4.2. Comparisons with Existing Burst-Error-Correcting Schemes ........ 51
   4.5. Concluding Remarks ........ 53

5. The Proposed Adaptive Scheme 2 ........ 55
   5.1. Encoding Circuit ........ 55
   5.2. Decoding Operations ........ 56
   5.3. Analysis of Decoded BER ........ 58
        5.3.1. Effects of Channel Parameters ........ 59
        5.3.2. Effects of Channel State Estimation ........ 60
        5.3.3. Probability of Decoded BER of Scheme 2 ........ 61
   5.4. Simulation and Numerical Results ........ 64
        5.4.1. Approximation and Simulation Results ........ 64
        5.4.2. Comparisons with Existing Burst-Error-Correcting Schemes ........ 65
   5.5. Concluding Remarks ........ 80

6. Conclusions ........ 81

References ........ 83
`
`
`
`List of Figures
`
`2.1
`
`2.2
`
`2.3
`
`2.4
`
`2.5
`
`2.6
`
`The binary symmetric channel model ..................... _.
`
`.
`
`. .5
`
`The Gilbert—Elliott channel model ............. _ ............... 5
`
`An encoder for a (2, 1, 2) convolutional code ..................... 8
`
`Codetreeforthe(2,1.2)code.9
`
`Trellis diagram for the (2, 1, 2) code. ........................ 10
`
`Backward code tree for the (2, l, 2) code. ..................... 16
`
`2.7 ‘
`
`A simplified model of a SWD. ............................ 19
`
`2.8
`
`2.9
`
`3.1
`
`3.2
`
`3.3
`
`3.4
`
`3.5
`
`3.6
`
`4.1
`
`4.2
`
`4.3
`
`4.4
`
`4.5
`
`Decoded BER bounds for a (2, l, 4) code on a binary symmetric channel. .
`
`.
`
`. 21
`
`Performance :of a Viterbi decoder on a Gilbert-Elliott channel with m = 4,
`
`CG 2 0.9999, PBE 2 0.99, and average channel error rate = 0.005 ........ 24
`
`Interleaving procedure with the interleaving depth /\. ............... 26
`
`Encoding circuit of a (2, 1, 5) systematic code in Gallager’s burst—finding
`scheme ........................................... 28
`
`Decoding circuit of Gallager’s burst—finding scheme ................ 28
`
`Performance of Gallager’s burst~finding scheme with m = 4, P66 2 0.9999,
`PBB : 0.99, and EB : 0.5 ................................ 31
`
`Decoding circuit of a (2, 1, m) code for Schlegel and Herro’s adaptive scheme. . 34
`
`Decoded BER of Schlegel and Herro’s scheme with PGG _—: 0.9999, PBE 2 0.99,
`and 5,, = 0.5 ........................................ 37
`
`Encoding circuit of Scheme 1. ............................ 40
`
`Decoding circuit for the proposed Scheme 1 ..................... 41
`
`Diagram of a burst recovery procedure ........................ 42
`
`Performance of Scheme 1 with m] = 4, R = 1/3, PGG = 0.9995, P33 : 0.95,
`EG = 0.005, and EB = 0.5. ................... I ............ 50
`
`Comparison of decoded BER with m1 = 4, m2 = 79, R = 1/3, PGG : 0.9995,
`PBB = 0.95, and average channel error rate = 0.005 ................. 52
`
`vi
`
`
`
`5.1
`
`5.2
`
`5.3
`
`5.4
`
`5.5
`
`5.6
`
`5.7
`
`5.8
`
`5.9
`
`5.10
`
`5.11
`
`5.12
`
`5.13
`
`5.14
`
`5.15
`
`Encoding circuit of Scheme 2. ............................ 56
`
`Decoding circuit of Scheme 2. ............................ 57
`
`Channel model and channel state estimator................... -.
`
`.
`
`. 58
`
`Performance of Scheme 2 with m 2 4, R 2 1/3, Rm 2 0.9995, PBE 2 0.95,
`and average channel error rate = 0.005. ....................... 66
`
`Performance of Scheme 2 with m 2 4, R 2 1/3, PGG 2 0.9999, P,3B 2 0.99,
`and average channel error rate = 0.005. ....................... 67
`
`Performance of Scheme 2 with m 2 4, R 2 1/3, P66 2 0.9999, P13].; 2 0.99,
`as = 0.005, and SB 2 0.5. ............................... 68
`
`Comparison of decoded BER for m 2 4, R 2 1/3, PGG 2 0.9995, and
`PBB 7— 0.95. ....................................... 71
`
`Comparison of decoded BER for m 2 4, R 2 1/3, PGG 2 0.9999, and
`PBB : 0.99.
`‘ ....................................... 72
`
`Comparison of decoded BER for m 2 4, R 2 1/3, as 2 0.005, and 53 = 0.25. . 73
`
`Comparison of decoded BER for m 2 4, R 2 1/3, 56 = 0.005, and 59 = 0.5.
`
`. 74
`
`Comparison of decoded BER for m 2 4, R 2 1/3, PGG 2 0.9999, P33 2 0.99,
`and 5,3 = 0.25 ....................................... 75
`
`Comparison of decoded BER for m 2 4, R 2 1/3, P66 2 0.9999, PBE 2 0.99,
`and EB = 0.5 ........................................ 76
`
`Comparison of decoded BER for m 2 4, R 2 1/3, P66 2 0.9995, PER 2 0.95,
`and EB = 0.25 ....................................... 77
`
`Comparison of decoded BER for m 2 4, R 2 l/3, PGG 2 0.9995, PBE 2 0.95,
`and EH 2 0.5 ........................................ 78
`
`Performance of Scheme 2 for different code rates with m 2 6, FCC 2 0.9999,
`PBB 2 0.99, and 5,3 = 0.5 ................................ 79
`
`vii
`
`
`
`Acknowledgement
`
I would like to express my sincere gratitude to Dr. Kallel for his support and guidance. This thesis could not have been completed without his valuable comments and suggestions.

I would also like to thank Jeffrey Chow for his help in programming, and William Cheung and Victor Wong for their useful suggestions in preparing this thesis. Finally, I would like to thank my family for their continued support of my studies.
`
`
`
`Chapter 1
`
`Introduction
`
In many digital communication channels, such as mobile communication channels, telephone channels, and magnetic or optical recording systems, errors tend to occur in clusters or bursts [1-3]. These channels are often referred to as channels with memory. Several schemes for burst error correction on these channels have been reported [4-12], and are called burst-error-correcting schemes. One approach is to use special codes designed exclusively for burst error correction [10]. These so-called burst-error-correcting codes perform relatively well over channels with short bursts, but perform poorly when the channels are corrupted with long bursts. Another conventional approach is to interleave channel symbols prior to transmission. With interleaving, burst errors are spread over many symbols, and can thus be viewed as random errors. However, for channels with long bursts, interleaving schemes need extremely long delays to be effective, which might not be tolerated in some applications, such as voice transmission. Another approach is Gallager's burst-finding scheme [5]. In this scheme, a rate 1/2 systematic convolutional code is used with a modified majority logic decoding. Gallager's scheme sacrifices random-error-correcting capability in exchange for better burst correction. A modified version of this scheme was recently suggested by Schlegel and Herro [12]. This scheme is essentially the same as Gallager's scheme except that majority logic decoding is replaced by a modified Viterbi decoding algorithm. Both Gallager's burst-finding scheme and Schlegel and Herro's scheme are extremely sensitive to errors in the guard space.
`
Two efficient coding and decoding strategies are proposed in this thesis. Both schemes employ a combination of two punctured convolutional codes and a burst detection procedure. The burst detection is accomplished by observing the increment in the cumulative path metrics from Viterbi decoding. Scheme 1 uses two punctured convolutional codes with different memories. In this scheme, a code with a relatively short memory is used with Viterbi decoding for random error correction and for burst detection. The other code, which has a much longer memory, is used with backward sequential decoding to recover burst errors. Normally, the decoder operates in the random mode and uses the received sequence corresponding to the code with the shorter memory. An abrupt increase in the cumulative path metrics indicates that the channel is most likely in a burst. The decoder then switches from the random mode to the burst mode, and starts burst error recovery. In the burst mode, starting from a chosen state, the decoder employs a backward sequential decoding algorithm to recover the corrupted data. When the channel becomes quiet, the path metrics are relatively constant, and the decoder returns to the random mode.
`
Scheme 2 uses two punctured codes that are derived from the same original convolutional code with complementary perforation patterns. The second code sequence is transmitted after a delay from the transmission of the first code sequence. The first code sequence is used with a Viterbi decoder to detect bursts using the same burst detection procedure as in Scheme 1. The burst detection procedure also serves to estimate the channel state. Both received sequences are then used by a second Viterbi decoder, which uses the channel state information provided by the first decoder.
`
The proposed schemes have several advantages. Both schemes are adaptive to channel conditions. The parameters of the decoders can be chosen to optimize the performance of the schemes. For the same delay, these schemes outperform the conventional interleaving schemes when the channel is heavily corrupted by bursts. While Gallager's burst-finding scheme and Schlegel and Herro's scheme are sensitive to errors in the guard space, the proposed schemes can tolerate high error rates in the guard space. Different code rates can be easily implemented in these schemes. Further improvement can be obtained by introducing soft decision into the decoders.
`
The thesis is organized as follows. Channel models are discussed in Chapter 2, and the Gilbert-Elliott channel model is described in detail. Chapter 2 also gives an introduction to convolutional coding and decoding techniques. A number of existing burst-error-correcting schemes are presented in Chapter 3. These schemes include interleaving schemes, Gallager's burst-finding scheme, and Schlegel and Herro's adaptive scheme. Detailed descriptions of the two proposed schemes, Scheme 1 and Scheme 2, are given in Chapters 4 and 5, respectively. Analysis and simulation results for these schemes over a Gilbert-Elliott channel model, and comparisons with other burst-error-correcting schemes, are also presented in these chapters. Finally, Chapter 6 summarizes the main results and important features of the proposed schemes, and suggests some topics for future work.
`
`
`
Chapter 2

Preliminaries
`
This chapter provides the background on channel models and convolutional codes that is necessary for understanding the thesis.
`
`2.1 Channel Models
`
Communication channels can be classified into two categories: memoryless channels and channels with memory. Space and satellite channels are examples of memoryless channels. On these channels, a received signal in a given time interval depends only on the transmitted signal in that time interval, and not on previous signal transmissions. As a result, errors appear randomly on memoryless channels, and these channels are therefore often referred to as random-error channels. Many real communication channels, however, have memory. Errors on these channels tend to occur in clusters or bursts. This bursty behavior is common in mobile communication channels, telephone channels, and magnetic or optical recording systems [1-3]. This section introduces two simple channel models: the binary symmetric channel (BSC) model, representing a memoryless channel, and the Gilbert-Elliott channel model, representing a channel with memory.
`
`2.1.1 The Binary Symmetric Channel Model
`
The binary symmetric channel (BSC) model is a simple model for discrete memoryless channels (DMC). A discrete memoryless channel with M-ary inputs and Q-ary outputs is defined by the transition probabilities P(j|i), where 0 ≤ i ≤ (M − 1) and 0 ≤ j ≤ (Q − 1). In the BSC model, M = 2 and Q = 2. The model is completely defined by a channel error rate or crossover probability ε, as shown in Figure 2.1: each transmitted bit is received correctly with probability 1 − ε and inverted with probability ε.

Figure 2.1 The binary symmetric channel model.
`
2.1.2 The Gilbert-Elliott Channel Model

The binary symmetric channel model is inadequate for representing channels with memory, where errors tend to occur in bursts. Many models have been suggested to simulate the behavior of bursts on channels with memory, and an accurate representation of a real channel requires increasingly complicated models. These channel models range from the simple two-state Gilbert channel model to much more complex channel models with an infinite number of states [1-3]. The model used to simulate bursts in this thesis is the Gilbert-Elliott channel model [2] depicted in Figure 2.2, with states G and B, self-transition probabilities P_GG and P_BB, cross-transition probabilities 1 − P_GG and 1 − P_BB, and error rates ε_G in state G and ε_B in state B.

Figure 2.2 The Gilbert-Elliott channel model.
`
`
`
The Gilbert-Elliott channel model is constructed with two states and four parameters. Let the two states G and B denote the good state and the bad state, respectively. The channel stays in state G most of the time, and only makes occasional transitions to state B. Since the channel resides in state G most of the time, the probability of remaining in state G, denoted P_GG, is often much larger than the probability of remaining in state B, denoted P_BB. The steady-state probability of the good state, P_G, and the steady-state probability of the bad state, P_B, are given as follows:

$$ P_G = \frac{P_{BG}}{P_{GB} + P_{BG}} \qquad (2.1) $$

and

$$ P_B = \frac{P_{GB}}{P_{GB} + P_{BG}} \qquad (2.2) $$

where P_GB is the transition probability from state G to state B and P_BG is the transition probability from state B to state G [3]. Errors occur in state G with probability ε_G, and in state B with probability ε_B, where ε_G << ε_B. The average channel error rate is then given by

$$ \varepsilon = P_G \varepsilon_G + P_B \varepsilon_B. \qquad (2.3) $$
`
It is necessary to introduce a number of important parameters that are useful in the analysis of the channel model. Throughout the thesis, a "burst" is defined as the number of consecutive intervals in state B before a transition to state G occurs. A "guard space" is similarly defined as the number of consecutive intervals in state G. Let us denote the average burst length by A_B, and the average guard space by A_G. It is shown in [3] that

$$ A_G = \frac{1}{1 - P_{GG}} \qquad (2.4) $$

and

$$ A_B = \frac{1}{1 - P_{BB}}. \qquad (2.5) $$
`
Another important parameter is the high-to-low error density ratio Λ = ε_B/ε_G. This value Λ, together with P_B, reflects the bursty nature of the channel. When the channel has dense bursts, Λ is high and P_B is low. On the other hand, when the channel has diffuse bursts, Λ is low and P_B is high. The channel becomes a random-error channel when Λ approaches 1.
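To make the model concrete, the following minimal simulation sketch may help (Python; the function name and default parameter values are illustrative choices, with ε_G picked so that the average error rate works out to roughly 0.005, as in several figure captions of this thesis):

```python
import random

def gilbert_elliott_errors(n, p_gg=0.9999, p_bb=0.99, eps_g=0.00005, eps_b=0.5):
    """Generate n error indicators (1 = bit error) from a Gilbert-Elliott channel.

    The channel remains in the good state G with probability p_gg and in the
    bad state B with probability p_bb; bit errors occur with rate eps_g in G
    and eps_b in B.  Setting eps_g == eps_b reduces the model to a BSC.
    """
    state = "G"  # the channel spends most of its time in the good state
    errors = []
    for _ in range(n):
        if state == "G":
            state = "G" if random.random() < p_gg else "B"
        else:
            state = "B" if random.random() < p_bb else "G"
        eps = eps_g if state == "G" else eps_b
        errors.append(1 if random.random() < eps else 0)
    return errors

# The measured error rate approaches eps = P_G*eps_g + P_B*eps_b from Eq. (2.3):
e = gilbert_elliott_errors(1_000_000)
print(sum(e) / len(e))  # roughly 0.005 with the defaults above
```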
`
`2.2 Convolutional Codes
`
Since the introduction of convolutional codes by Elias in 1955, this class of linear codes has been used in many important applications, such as space and satellite communications [10]. This success is attributable to the good structure of convolutional codes and to efficient decoding algorithms. This section introduces the code structure and basic decoding algorithms for convolutional codes. Punctured convolutional codes are also discussed.
`
`2.2.1 Encoding of Convolutional Codes
`
Without loss of generality, let us limit our discussion to rate 1/n convolutional codes. A rate R = 1/n, memory m convolutional code, denoted (n, 1, m), can be constructed using a shift register of (m + 1) stages, n modulo-2 adders, and a multiplexer. The connections of the n adders to the shift register are specified by a generator matrix

$$ G(D) = \left[ g^{(1)}(D),\; g^{(2)}(D),\; \ldots,\; g^{(n)}(D) \right] \qquad (2.6) $$

where the ith generator polynomial

$$ g^{(i)}(D) = g_0^{(i)} + D g_1^{(i)} + D^2 g_2^{(i)} + \cdots + D^m g_m^{(i)}, \qquad i = 1, 2, \ldots, n \qquad (2.7) $$

indicates the connections of the ith adder to the shift register, and D represents a delay unit. An encoding circuit for a (2, 1, 2) convolutional code with G(D) = [1, 1 + D + D^2] is shown in Figure 2.3.

Figure 2.3 An encoder for a (2, 1, 2) convolutional code.
`
The encoding of a convolutional code is carried out one bit at a time. The content of the shift register is flushed out with all zeros before the encoding takes place. An input bit to be encoded is fed into the shift register from the left, and the shift register shifts to the right once. Output bits from the n adders are multiplexed before being transmitted over the channel. Let us denote an input sequence of length K by

$$ u(D) = u_0 + D u_1 + D^2 u_2 + \cdots + D^{K-1} u_{K-1} \qquad (2.8) $$

and a composite generator polynomial for an (n, 1, m) convolutional code by

$$ g(D) = g^{(1)}(D^n) + D g^{(2)}(D^n) + D^2 g^{(3)}(D^n) + \cdots + D^{n-1} g^{(n)}(D^n). \qquad (2.9) $$

The encoded sequence or code word v(D) corresponding to u(D) can then be obtained as

$$ v(D) = u(D^n)\, g(D). \qquad (2.10) $$
`
The state of a convolutional encoder is defined by the first m stages of the shift register. An (n, 1, m) code has a total of 2^m possible states. The convolutional encoding and the states of the encoder are usually depicted by a tree diagram. The tree diagram for the given (2, 1, 2) code is shown in Figure 2.4.

Figure 2.4 Code tree for the (2, 1, 2) code.

A simplified form of the tree diagram is the trellis diagram shown in Figure 2.5. The simplification is done by combining all the nodes having the same state at each level of the tree into a single node in the trellis. The encoding starts from the origin node, which corresponds to state 0. The code bits on every branch of the trellis or the tree are the output bits corresponding to each possible input bit. In the trellis and tree diagrams, upper branches correspond to "1" input bits, and lower branches correspond to "0" input bits. A tail of m zero bits is often added at the end to flush out the content of the shift register and return the encoder to state 0.
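As an example, writing each state as (most recent input bit, previous input bit), the input sequence u = 1011 drives the (2, 1, 2) encoder through the states 00 → 10 → 01 → 10 → 11, and the two tail zeros return it to state 00 via 01.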
`
`
`
Figure 2.5 Trellis diagram for the (2, 1, 2) code.

Convolutional codes can be classified as systematic or nonsystematic. In an (n, 1, m) systematic convolutional code, the first output sequence is exactly the same as the input sequence. The first generator polynomial is therefore g^(1)(D) = 1. Any code that does not satisfy this condition is said to be nonsystematic.

The Hamming distance between two code words is defined to be the number of positions in which the two code words disagree. The minimum free distance d_f of a convolutional code is defined to be the minimum Hamming distance between any two code words v and v' corresponding to input sequences u and u', where u ≠ u'. The minimum distance d_M of a convolutional code is the weight of the minimum-weight code word over the first (m + 1) input bits whose initial input bit is nonzero. The most important distance measure of a convolutional code is the minimum free distance d_f, since the error-correcting capability of the code is given by t_M = ⌊(d_f − 1)/2⌋. The free distance of a systematic code is known to be inferior to that of a nonsystematic code of the same memory [10].
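As a concrete check of these definitions, consider the (2, 1, 2) systematic code of Figure 2.3: the lowest-weight nonzero code word is produced by a single 1 followed by zeros, giving the output branches 11, 01, 01, hence d_f = 4 and the code is guaranteed to correct t_M = ⌊(4 − 1)/2⌋ = 1 error.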
`
`
`
2.2.2 Decoding of Convolutional Codes

There exist various decoding algorithms for convolutional codes [10, 13-15]. The principal ones are the Viterbi, sequential, and majority-logic decoding algorithms. In Viterbi decoding and sequential decoding, a decoding decision is based on the entire received sequence. With majority-logic decoding, on the other hand, a decoding decision is based on one portion of (m + 1) received blocks at a time. Descriptions of these algorithms are given here for ease of understanding of the burst-error-correcting schemes.

2.2.2.1 Majority Logic Decoding (MLD)

Let us consider a (2, 1, m) convolutional code with the code generator polynomials

$$ g^{(i)}(D) = g_0^{(i)} + D g_1^{(i)} + D^2 g_2^{(i)} + \cdots + D^m g_m^{(i)}, \qquad i = 1, 2. \qquad (2.11) $$
`
Given an input sequence u of length K, the parity matrix H is a K × 2K matrix given by

$$
H = \begin{bmatrix}
g_0^{(2)}\,g_0^{(1)} & & & & \\
g_1^{(2)}\,g_1^{(1)} & g_0^{(2)}\,g_0^{(1)} & & & \\
g_2^{(2)}\,g_2^{(1)} & g_1^{(2)}\,g_1^{(1)} & g_0^{(2)}\,g_0^{(1)} & & \\
\vdots & \vdots & \vdots & \ddots & \\
g_m^{(2)}\,g_m^{(1)} & g_{m-1}^{(2)}\,g_{m-1}^{(1)} & g_{m-2}^{(2)}\,g_{m-2}^{(1)} & \cdots & g_0^{(2)}\,g_0^{(1)} \\
 & \ddots & & & & \ddots
\end{bmatrix} \qquad (2.12)
$$

Let r be the received sequence corresponding to a possible transmitted sequence v. The syndrome sequence s is defined as

$$ s = r H^T = (v + e) H^T = e H^T \qquad (2.13) $$
`
`
`
`
`where e is the channel error sequence corresponding to r and v.
`
The decoding decision for the ith bit is based on a truncated syndrome sequence [s]_m = (s_i, s_{i+1}, s_{i+2}, ..., s_{i+m}). The truncated syndrome sequence [s]_m is obtained from a truncated error sequence [e]_m and a truncated parity matrix [H]_m^T, and can be written as

$$ [s]_m = [e]_m [H]_m^T \qquad (2.14) $$

where

$$ [e]_m = \left( e_i^{(1)} e_i^{(2)},\; e_{i+1}^{(1)} e_{i+1}^{(2)},\; e_{i+2}^{(1)} e_{i+2}^{(2)},\; \ldots,\; e_{i+m}^{(1)} e_{i+m}^{(2)} \right) \qquad (2.15) $$

and

$$
[H]_m^T = \begin{bmatrix}
g_0^{(2)} & g_1^{(2)} & g_2^{(2)} & \cdots & g_m^{(2)} \\
g_0^{(1)} & g_1^{(1)} & g_2^{(1)} & \cdots & g_m^{(1)} \\
 & g_0^{(2)} & g_1^{(2)} & \cdots & g_{m-1}^{(2)} \\
 & g_0^{(1)} & g_1^{(1)} & \cdots & g_{m-1}^{(1)} \\
 & & \ddots & & \vdots \\
 & & & & g_0^{(2)} \\
 & & & & g_0^{(1)}
\end{bmatrix} \qquad (2.16)
$$
`
Each syndrome bit in [s]_m is a sum of error estimates over (m + 1) received blocks. A set of J check sums is derived from the truncated syndrome sequence [s]_m, where 1 ≤ J ≤ m. Each of the J check sums can be a single syndrome bit or a combination of the (m + 1) syndrome bits. An error is said to be checked by a check sum if the check sum contains it. The main idea of the majority-logic decoding algorithm is based on the concept of orthogonality of the parity check sums. A set of J check sums is defined to be orthogonal on an error if each of these sums checks the error, and no other error bit is checked by more than one check sum. The check sums are often constructed to be orthogonal on the errors of the code sequence coming from the first adder of the encoder. With a threshold t_ML = ⌊J/2⌋, an error estimate for a received bit is chosen to be 1 if and only if more than t_ML of the J orthogonal check sums are 1. The error estimate is added to the received bit to give the decoded output. After the ith bit is decoded, a new truncated syndrome sequence is computed from the next received sequence for the (i + 1)th bit. The computation can be simplified by observing the relationship among the current truncated syndrome sequence, the previous sequence, and the previous error estimate. Error estimates can be fed back to the decoder as in feedback decoding, or excluded from the next syndrome computation as in definite decoding. Details of the definite and feedback decoding algorithms can be found in [10]. The syndrome computation continues until all bits are decoded.
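The threshold decision itself is simple enough to sketch in a few lines (Python; the construction of the J orthogonal check sums depends on the particular code and is omitted here):

```python
def majority_estimate(check_sums):
    """Return the error estimate for one bit given its J orthogonal check sums.

    The estimate is 1 iff more than t_ML = floor(J/2) of the check sums are 1.
    """
    J = len(check_sums)
    t_ml = J // 2
    return 1 if sum(check_sums) > t_ml else 0

# Example with J = 4 orthogonal check sums:
assert majority_estimate([1, 1, 1, 0]) == 1
assert majority_estimate([1, 0, 1, 0]) == 0  # a tie does not exceed the threshold
```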
`
2.2.2.2 Viterbi Decoding

In the trellis representation of a convolutional code, a path is defined by the connection between the origin node and the end node via consecutive intermediate nodes. Each path in the trellis corresponds to a possible transmitted sequence v. Given a received sequence r, the Viterbi decoding algorithm searches the trellis for the path v with the maximum log-likelihood function log Pr(r|v). On a BSC, this corresponds to computing the Hamming distance between the received sequence and all possible paths in the trellis, and selecting the path closest to the received sequence.

The Viterbi algorithm starts from the origin node and extends forward into the trellis. Given the first m branches of the received sequence, the decoder examines all 2^m distinct paths at level m, and computes the cumulative path metrics, which are either the log-likelihood functions or the Hamming distances if the channel is a BSC. After the reception of the (m + 1)th branch, each path extends its two branches into level (m + 1). At this level, two branches converge into each node in the trellis. The metrics of the two converging paths into each node are compared, and only the path with the better metric is retained while the other is discarded. The retained path is called a survivor. All 2^m survivors are obtained in the same way. In the case of a tie in the metrics, the survivor is chosen at random. From this level on, the decoding involves extending each node into the next level, computing the cumulative path metrics, comparing the path metrics pairwise at each node, and selecting the survivors. This process continues until all received bits have been considered. At this point, the path with the best metric among the survivors is selected as the decoded path.

In practice, since the length of the input sequence is long, the decoding decision on the ith bit is based on the best partial path at time (i + d_T), where d_T is called the truncation path length [10]. The truncation path length is often chosen to be about 4 or 5 times the code memory. The number of computations in Viterbi decoding is constant for all channel noise levels, since the number of state operations is fixed. The complexity of Viterbi decoding increases with the code memory as a result of the increase in the number of possible states.
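A minimal hard-decision sketch of this procedure for the (2, 1, 2) code on a BSC might look as follows (Python; it reuses conv_encode from the encoder sketch above, uses the Hamming distance as the path metric, and decodes a fully terminated sequence instead of applying a truncation path length):

```python
def viterbi_decode(received, generators=(0b001, 0b111), m=2):
    """Decode a hard-decision received sequence (list of 0/1 bits) on a BSC.

    A state holds the m most recent input bits.  The survivor at each node
    keeps the minimum cumulative Hamming distance; the sequence is assumed
    to end with m zero tail bits, so the decoded path terminates in state 0.
    """
    n = len(generators)
    num_states = 1 << m
    steps = len(received) // n
    INF = float("inf")
    dist = [0.0] + [INF] * (num_states - 1)   # encoding starts in state 0
    paths = [[] for _ in range(num_states)]
    for t in range(steps):
        branch = received[t * n:(t + 1) * n]
        new_dist = [INF] * num_states
        new_paths = [None] * num_states
        for s in range(num_states):
            if dist[s] == INF:                # state not yet reachable
                continue
            for u in (0, 1):                  # extend both branches
                reg = (s << 1) | u            # (m + 1)-bit register contents
                out = [bin(reg & g).count("1") % 2 for g in generators]
                nxt = reg & (num_states - 1)  # next state: m most recent bits
                d = dist[s] + sum(o != r for o, r in zip(out, branch))
                if d < new_dist[nxt]:         # keep the closer path: the survivor
                    new_dist[nxt] = d
                    new_paths[nxt] = paths[s] + [u]
        dist, paths = new_dist, new_paths
    return paths[0][:-m]                      # drop the zero tail

# Flip one bit of the encoded 1011 sequence and decode it back:
rx = conv_encode([1, 0, 1, 1]); rx[2] ^= 1
print(viterbi_decode(rx))                     # -> [1, 0, 1, 1]
```

With the one flipped bit, the transmitted path remains closer in Hamming distance than any other terminated path, since this code has free distance 4.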
`
2.2.2.3 Sequential Decoding

Sequential decoding itself has many variations [10]. In this thesis, discussion of sequential decoding techniques is limited to the stack or Zigangirov-Jelinek algorithm [14]. The description of the algorithm requires the help of the code tree previously shown in Figure 2.4.

The decoder maintains a stack of the already searched paths, ordered in decreasing order of their metric values. The top of the stack is the path with the largest cumulative path metric, and it is the one that is extended into the tree. After each extension, the stack is reordered so that the decreasing order of the path metric values is preserved. An extended path whose accumulated metric keeps increasing will continue to be searched. As soon as its metric decreases, the path is stored in the stack according to its metric value, and the new top node is extended. The decoding process thus consists of finding the top node, extending it one step further into the tree, and reordering the stack. After the stack is initially loaded with the origin node as the top node with a zero path metric, the decoding algorithm can be described by the following steps:

1. Compute the metrics of the two successors of the top node.
2. Enter the new successors in their proper places in the stack according to their metric values.
3. Delete from the stack the entry of the node that was just extended.
4. Stop if the top node is the end node. Otherwise, return to step 1.

When the loop ends, the decoded path can be recovered from the top node and the information stored in the stack.
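The four steps above map almost directly onto a priority queue. The sketch below (Python) decodes the same terminated (2, 1, 2) code, generating the code tree on the fly from the encoder generators; the per-bit Fano metric for a BSC with crossover probability p and code rate R is a standard choice of path metric, not something specified in the thesis:

```python
import heapq
from math import log2

def stack_decode(received, p=0.05, generators=(0b001, 0b111), m=2, R=0.5):
    """Stack (Zigangirov-Jelinek) decoding of a terminated (2, 1, 2) sequence.

    Each stack entry is a partial path through the code tree; the path with
    the largest cumulative Fano metric is extended first.  heapq is a
    min-heap, so metrics are stored negated.
    """
    n = len(generators)
    steps = len(received) // n
    good, bad = log2(2 * (1 - p)) - R, log2(2 * p) - R  # per-bit Fano metric
    stack = [(-0.0, 0, ())]           # (negated metric, depth, input path)
    while stack:
        neg_metric, depth, path = heapq.heappop(stack)   # step 4 check + pop
        if depth == steps:            # top node is the end node: stop
            return list(path[:-m])    # drop the m zero tail bits
        branch = received[depth * n:(depth + 1) * n]
        # The tail forces input 0 in the last m levels of the tree.
        inputs = (0,) if depth >= steps - m else (0, 1)
        for u in inputs:              # steps 1-3: extend, push successors
            reg = u
            for i, prev in enumerate(reversed(path[-m:])):
                reg |= prev << (i + 1)   # rebuild register from path history
            out = [bin(reg & g).count("1") % 2 for g in generators]
            metric = -neg_metric + sum(good if o == r else bad
                                       for o, r in zip(out, branch))
            heapq.heappush(stack, (-metric, depth + 1, path + (u,)))

rx = conv_encode([1, 0, 1, 1]); rx[5] ^= 1
print(stack_decode(rx))               # -> [1, 0, 1, 1]
```

Unlike the Viterbi decoder, the number of tree nodes explored here grows with the channel noise, which is the adaptability property noted below.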
`
Other variations of sequential decoding include the Fano algorithm, the hybrid sequential and algebraic decoding schemes, the multiple stack algorithm, and the generalized stack algorithm [10]. Contrary to Viterbi decoding, sequential decoding is adaptable to the noise level. The fact that the decoding effort of sequential decoding is essentially independent of the code memory makes it a better choice for decoding convolutional codes with long memory.
`
`2.2.2.4 Backward Sequential Decoding
`
Substantial research work on bidirectional decoding of convolutional codes has been done in recent years [16, 17]. In these bidirectional decoding algorithms, a code tree is searched simultaneously from the root and end nodes in opposite directions. In normal (forward) sequential decoding, starting from the origin node, the decoder explores the tree to find the best path until the end of the tree is reached. The decoding can also be done in the opposite direction, starting from the end node of the tree. The code is now reversed, as suggested in the code tree of Figure 2.6. For example, the input sequence u = 1011 producing the highlighted path in Figure 2.4 now corresponds to the highlighted path in the backward code tree of Figure 2.6, generated by the reversed input sequence ur = 1101. Operations of a backward sequential decoder are the same as those of a forward sequential decoder except that the decoding is applied on the backward tree.

Figure 2.6 Backward code tree for the (2, 1, 2) code.

It should be pointed out that a good convolutional code f