`Coding Schemes
`for Space
`Communications
`
`Signal Processing
`Research Institute
`
`University of South Australia
`
`Jean Yves COULEAUD
`
`March–September 1995 – ENSICA Final Year Project
`
`
`
`
`
`Acknowledgments
`
I would like to thank my supervisor, Dr. Steven S. Pietrobon, for his help during this project and for his patience in setting everything up. I would also like to thank the Ecole Nationale Supérieure d'Ingénieurs des Constructions Aéronautiques and its Avionics Department, and the Signal Processing Research Institute, for allowing me to undertake this research project so far from home.

During my stay in Adelaide, I had the pleasure of working with many valuable people, and I would particularly like to express my gratitude to Adrian S. Barbulescu, Gerald Bolding, Andrew Guidi, Jeff Kasparian, Marc Lavenant and Philip McIllree, without whom life at SPRI would not have been the same.

Finally, I would like to thank Jerome Bonhomme and Douglas Engelbart, without whom all of this would have remained a dream.
`
`
`
`
`
`Abstract
`
In 1948, Claude E. Shannon showed that forward error correction can lower the bit error ratio of a digital transmission as long as the channel capacity is not exceeded. This discovery led to remarkable progress in the field of telecommunications, and today's compact discs, cellular phones, submarines and space probes all use error control schemes to make their transmissions reliable.

The scientific community was recently astonished by the discovery of parallel concatenation, now referred to as Turbo Codes, and by its remarkable performance and simplicity. In this report, we present another type of concatenation, serial concatenation, and we study the properties of a new way to build iterative decoders based on the MAP algorithm.

The main aim of this project is to find a replacement for the coding scheme currently planned for the NASA space observation probe Galileo. The spacecraft was launched in 1989 and is currently heading to Jupiter. A slightly more powerful error correcting scheme and its associated iterative MAP decoder are described. We also present in this report other powerful codes and the way they were derived.
`
`
`
`
`
`Table of Contents
`
Chapter 1 Space Numeric Transmissions
  1.1 Coding
    1.1.1 Principle of coded transmissions
    1.1.2 The uniform binary source
    1.1.3 The AWGN channel
      1.1.3.1 Definition
      1.1.3.2 The noise
      1.1.3.3 BPSK modulation
      1.1.3.4 Results
  1.2 Decoding
  1.3 Plan

Chapter 2 MAP Decoding Convolutional Codes
  2.1 Coding convolutional codes
    2.1.1 Convolutional codes – definitions
    2.1.2 Systematic and non-systematic convolutional codes
  2.2 Decoding Convolutional Codes
    2.2.1 The trellis structure
    2.2.2 The Viterbi algorithm
    2.2.3 Performance of the Viterbi algorithm
      2.2.3.1 The 7/5 code
      2.2.3.2 The 37/21 code
      2.2.3.3 The 171/133 code
      2.2.3.4 Overall comparison
    2.2.4 MAP decoding
      2.2.4.1 The hypothesis
      2.2.4.2 The algorithm
      2.2.4.3 Numerical considerations
    2.2.5 Performance of the MAP algorithm
      2.2.5.1 The 4 state code: 5/7
      2.2.5.2 The 16 state code: 37/21
      2.2.5.3 The 64 state code: 171/133
      2.2.5.4 Results
    2.2.6 Influence of the frame size
    2.2.7 Comparison of the MAP and Viterbi algorithms
    2.2.8 Implementation problems
      2.2.8.1 Analysis of the MAP algorithm
      2.2.8.2 Improvement of the MAP algorithm
      2.2.8.3 Analysis of the improved MAP algorithm
  2.3 Conclusions

Chapter 3 Iterative Decoding
  3.1 Concatenated codes
    3.1.1 Serial concatenation
    3.1.2 Parallel concatenation
  3.2 Iterative decoding of serial concatenated codes
    3.2.1 Extrinsic information
    3.2.2 Soft-In/Soft-Out MAP decoder
    3.2.3 Structure of the encoder
    3.2.4 Structure of the decoder
    3.2.5 Structure of the interleaver
      3.2.5.1 Necessity of interleaving
      3.2.5.2 Block interleaving
      3.2.5.3 Helical interleaving
      3.2.5.4 Pseudo random interleaving
        3.2.5.4.1 Cyclic-T interleaver
        3.2.5.4.2 Random-T interleaver
        3.2.5.4.3 Random look-up table interleaver
      3.2.5.5 Influence of the interleaver size
    3.2.6 Numerical considerations
  3.3 Conclusions

Chapter 4 The Galileo Code
  4.1 The (7, 1/2) NASA standard RSC code
    4.1.1 The non systematic (171, 133) code
    4.1.2 Systematic transformation
  4.2 The outer code
    4.2.1 Problem analysis
      4.2.1.1 The 37/21 – 171/133 code
      4.2.1.2 The 17/15 – 171/133 code
      4.2.1.3 The 5/7 – 171/133 code
      4.2.1.4 The 3/1 – 171/133 code
    4.2.2 Modification of the iterative decoder
    4.2.3 Result interpretation
    4.2.4 Conclusions
  4.3 Punctured codes
    4.3.1 Definitions
    4.3.2 Punctured RSC codes performance
    4.3.3 Serial concatenation with punctured outer code
    4.3.4 Conclusions
  4.4 Turbo codes
    4.4.1 Structure of the encoder
    4.4.2 Structure of the interleaver
    4.4.3 Structure of the decoder
    4.4.4 Performance of turbo codes
  4.5 Time varying schemes
    4.5.1 Structure of the code
    4.5.2 Choice of the component codes
    4.5.3 Conclusions
  4.6 Conclusions

Chapter 5 Conclusions and Prospective

References

Appendix 1 Random Numbers Generators
Appendix 2 The MAP Algorithm
Appendix 3 Iterative MAP decoding
Appendix 4 Turbo Codes
Appendix 5 Time Varying Schemes
Appendix 6 MAP decoding of some RSC codes
Appendix 7 Galileo's Data
`
`
`
`List of Figures
`
Figure 1: The Transmission Channel
Figure 2: Uncoded BPSK Modulation
Figure 3: Convolutional encoder
Figure 4: Divider
Figure 5: 1/2 Rate 16 States RSC Code
Figure 6: Rate 1/2 4 State Convolutional Code
Figure 7: Trellis structure for the four state encoder
Figure 8: Viterbi decoding for memory 2, 4, 6 RSC convolutional codes
Figure 9: Trellis representation of state metrics (forward)
Figure 10: Graphical representation of state metrics (backward)
Figure 11: MAP decoding for memory 2, 4, 6 RSC convolutional codes
Figure 12: Influence of the frame size on the BER
Figure 13: Influence of the bit position on the BER
Figure 14: MAP/Viterbi decoding RSC 5/7
Figure 15: MAP/Viterbi decoding RSC 37/21
Figure 16: MAP/Viterbi decoding RSC 171/133
Figure 17: f(z) = ln(1 + exp(–z))
Figure 18: Rate 1/3 'Turbo Code'
Figure 19: Histogram of the Extrinsic Information
Figure 20: Soft-In/Soft-Out decoder
Figure 21: Rate 1/4 Concatenation of two RSC codes
Figure 22: Rate 1/4 Concatenation of two RSC codes with interleaver
Figure 23: Iterative decoder for serial concatenated RSC codes
Figure 24: Performance of non-interleaved schemes
Figure 25: Performance of a 5/7–5/7 iterative decoder with block interleaver
Figure 26: Spreading errors with block interleaving
Figure 27: Performance of a 5/7–5/7 iterative decoder with helical interleaver
Figure 28: Spreading error bursts with helical interleaving
Figure 29: Performance of a 5/7–5/7 iterative decoder with Cyclic-T interleaver
Figure 30: Spreading a burst of errors with Cyclic-T interleaving
Figure 31: Performance of a 5/7–5/7 iterative decoder with Random-T interleaver
Figure 32: Spreading error bursts with Random-T interleaver
Figure 33: Performance of a 5/7–5/7 iterative decoder with random interleaver
Figure 34: Spreading error bursts with random interleaving
Figure 35: Evolution of the BER with the interleaver size
Figure 36: Influence of small limiting factors
Figure 37: Limiting with high limiting factors
Figure 38: NASA standard (7, 1/2) encoder
Figure 39: Performance of the 171/133 code
Figure 40: Performance of the 133/171 code
Figure 41: BER 171/133 – BER 133/171
Figure 42: Performance of the 37/21 – 171/133 code
Figure 43: Performance of the 17/15 – 171/133 code
Figure 44: Performance of the 5/7 – 171/133 code
Figure 45: Performance of the 3/1 – 171/133 code
Figure 46: Modified iterative MAP decoder
Figure 47: Performance of the modified MAP iterative decoder
Figure 48: Interpretation – code 3/1 – 171/133
Figure 49: Performance of the 3/1 code
Figure 50: Performance of the 5/7 code
Figure 51: Performance of the 37/21 code
Figure 52: Performance of the 171/133 code
Figure 53: Performance of the 5/7 punctured code
Figure 54: Performance of the 3/1 punctured code
Figure 55: Performance of the 5/7 and 7/5 punctured codes
Figure 56: Performance of the 5/7 punctured – 171/133 code
Figure 57: Performance of the punctured 3/1 – 171/133 code
Figure 58: Effects of a non corrected frame for the punctured 3/1–171/133 code
Figure 59: Rate 1/2 Turbo Code
Figure 60: Iterative turbo decoder
Figure 61: Performance of a 3/1 – 3/1 turbo code
Figure 62: Performance of a 5/7 – 5/7 turbo code
Figure 63: Performance of a 17/15 – 17/15 turbo code
Figure 64: Performance of a 37/21 – 37/21 turbo code
Figure 65: Comparison classic RSC code/Turbo code
Figure 66: Basic example of time varying scheme
Figure 67: Performance of the first code
Figure 68: Performance of the second code
Figure 69: Performance of the 5/7–5/7 serial concatenated code
Figure 70: Performance of the 3/1–171/133 serial concatenated code
Figure 71: Performance of the 3/1–171 code with punctured outer code
`
`
`
`
`
`Chapter 1
`Space Numeric Transmissions
`
`Introduction
`
Space communication receivers operate at very low signal to noise ratios (SNR), due to spacecraft limitations in antenna size and transmit power, and to the very long transmission range. It is very difficult and expensive to increase the SNR on such channels, and one means of increasing the reliability of the transmitted data is to code the data at the source.

The Galileo probe, which will be orbiting Jupiter in December 1995, failed to open its High Gain Antenna. Because of this unfortunate incident, NASA is forced to use the Low Gain Antenna (LGA), which reduces the available data rate from 134,400 bits/s to at most 160 bits/s. The currently planned coding scheme for the LGA is a memory 13, rate 1/4 convolutional code concatenated with a time varying Reed–Solomon code achieving an overall rate of 223.5/255 [1]. This scheme will achieve a Bit Error Ratio (BER) of $10^{-7}$ at an $E_b/N_0$ of 0.6 dB.
`Part of the memory 13 convolutional code is formed from a NASA standard memory 6,
`rate 1/2 convolutional code. This memory 6 non–systematic encoder is directly hardwired to the
`LGA and so all communications must pass through it. This prevents the use of more powerful
`codes like the now famous ‘Turbo’ codes [2].
`The aim of this research was to find an error correction scheme which could be
`implemented on Galileo. We present a new encoder–decoder (codec) scheme, based on the same
`idea as turbo codes, which performs better than the code on Galileo, but which cannot be
`implemented on the probe due to the limitation of the hardwired LGA code. We also present
`different codes which perform as well as Galileo’s code but which are compatible with the
`hardwired LGA code.
`This chapter is an introduction to error control coding in space communications, in which
`we present our main hypothesis.
`
`1
`
`
`
1.1 Coding
`
1.1.1 Principle of coded transmissions
`
`Coding data consists of adding redundant information which will help the receiver to
`rebuild the correct data. The scheme of a transmission link can be described as follows:
`
[Figure 1: The Transmission Channel. Source → Coding → Modulation → (noise added) → Demodulation → Decoding → Sink]
`
The main hypothesis here is that the noise is additive: each received sample is the sum of the modulated bit (+1/–1 for BPSK modulation) and the noise. If the noise is a Gaussian variable with zero mean, the channel is said to be an additive white Gaussian noise (AWGN) channel. This is the hypothesis we consider in this report; it closely models real deep space communication channels.

If $b_k$ is the information bit, $b_k \in \{0, 1\}$, we have for BPSK modulation:

$$r_k = (2b_k - 1) + q_k \qquad (1)$$

where $q_k$ is the noise value and $r_k$ is the received analog signal.

Another hypothesis is that the source (i.e., the bits $b_k$) is a uniform random source, i.e., $\Pr(b_k = 1) = \Pr(b_k = 0) = 0.5$. We also consider that all the samples we send are totally decorrelated.
The coding adds redundancy to the transmitted frame. At the receiver side, an erroneous bit will lead to a modification of the coded sequence which can be detected. The transmitted data frame contains two different types of bits: the information bits, which are those issued by the source, and the parity bits, which are added by the coding scheme. Note that a transmitted frame may scramble the information bits, as in the case of a non-systematic convolutional code for example. A coding scheme which can only detect errors is of interest only if the receiver can ask for the data to be retransmitted (ARQ: automatic repeat request). In the case of space communications, this system is not practical due to the long transmission delays. This is why coding schemes must detect and correct errors (FEC: forward error correction). Note that Galileo is supplied with a 109 Megabyte magnetic tape which allows it to retransmit stored data within a period of 8 days (assuming a 160 bits/s average rate). Thus an ARQ scheme could be implemented on the probe, although we chose not to use it.
`
1.1.2 The uniform binary source
`
`To generate our random binary source, we used a very simple algorithm based on the
`combination of two Multiplicative Linear Congruential Generators (MLCGs) as shown in [3].
`The uniform random source generator is given in Appendix 1.
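For illustration, the following is a minimal C sketch of such a combined generator, using the constants published by L'Ecuyer (moduli 2147483563 and 2147483399 with multipliers 40014 and 40692, combined by subtraction); the generator actually used in our simulations is the one listed in Appendix 1, whose exact constants and structure may differ.

/* Uniform binary source from two combined MLCGs, after [3].
   Sketch only; the project's generator is listed in Appendix 1. */
static long s1 = 12345, s2 = 67890;   /* any in-range seeds work */

static long mlcg(long *s, long a, long m, long q, long r)
{
    /* Schrage's method: (a * s) mod m without overflowing 32-bit
       signed arithmetic, with q = m / a and r = m % a */
    *s = a * (*s % q) - r * (*s / q);
    if (*s < 0) *s += m;
    return *s;
}

/* combined generator: uniform over 1 .. 2147483562 */
static long combined(void)
{
    long z = mlcg(&s1, 40014L, 2147483563L, 53668L, 12211L)
           - mlcg(&s2, 40692L, 2147483399L, 52774L, 3791L);
    if (z < 1) z += 2147483562L;
    return z;
}

/* one uniformly distributed information bit */
int random_bit(void)
{
    return combined() > 1073741781L;  /* threshold at half the range */
}

Combining two generators of nearly equal period extends the overall period and breaks up the lattice structure of a single MLCG, which matters when millions of decorrelated bits are drawn per simulation point.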
`
1.1.3 The AWGN channel
`
1.1.3.1 Definition
`
The AWGN channel is defined by its single sided noise power spectral density $N_0$. The noise distribution is then:

$$p(y) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{y^2}{2\sigma^2}\right)$$

with $\sigma^2$ defined by:

$$\sigma^2 = \left(2R\,\frac{E_b}{N_0}\right)^{-1} \qquad (2)$$

where $R$ is the coding rate (i.e., $R = \frac{\text{number of information bits}}{\text{number of transmitted bits}}$) and $E_b/N_0$ is the energy per bit to single sided noise density ratio. $E_b$ is determined by the available power at the transmitting side, the chosen data rate and the sensitivity of the receive antenna.
`
1.1.3.2 The noise
`
To simulate the AWGN channel, we need to generate a Gaussian source. For that, we used the uniform random number generator described previously and applied a Box–Muller transformation to obtain normal deviates. This transformation is described and proven in Appendix 2.
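As a sketch of how the channel samples can be produced (the uniform() routine below stands in for the combined generator above; the implementation actually used is in the appendices):

#include <math.h>
#include <stdlib.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* uniform deviate in (0, 1); stands in for the combined MLCG above */
static double uniform(void)
{
    return (rand() + 1.0) / ((double)RAND_MAX + 2.0);
}

/* one zero mean, unit variance normal deviate (Box-Muller) */
double gaussian(void)
{
    double u1 = uniform(), u2 = uniform();
    return sqrt(-2.0 * log(u1)) * cos(2.0 * M_PI * u2);
}

/* one noise sample q_k for a rate R code at the given Eb/N0 in dB,
   with sigma^2 = (2 R Eb/N0)^-1 as in equation (2) */
double noise_sample(double R, double ebno_db)
{
    double ebno = pow(10.0, ebno_db / 10.0);
    return sqrt(1.0 / (2.0 * R * ebno)) * gaussian();
}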
`
`3
`
`
`
1.1.3.3 BPSK modulation

Binary Phase Shift Keying (BPSK) modulation is a two state modulation: the number of bits per symbol is 1. A '0' is coded by a phase of $\pi$ and a '1' by a phase of 0. For an optimal decoder, the error probability $P_e$ for uncoded transmission ($R = 1$) is given by:

$$P_e = Q\left(\sqrt{\frac{2E_b}{N_0}}\right)$$

where $Q(x)$ is the tail probability of the standard normal distribution:

$$Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{+\infty} \exp\left(-\frac{u^2}{2}\right) du$$
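Since $Q(x) = \tfrac{1}{2}\,\mathrm{erfc}(x/\sqrt{2})$, the theoretical curve reduces to $P_e = \tfrac{1}{2}\,\mathrm{erfc}\left(\sqrt{E_b/N_0}\right)$ and can be sketched with the C library's erfc:

#include <math.h>

/* theoretical uncoded BPSK error probability:
   Q(x) = erfc(x / sqrt(2)) / 2, hence Pe = erfc(sqrt(Eb/N0)) / 2 */
double bpsk_pe(double ebno_db)
{
    double ebno = pow(10.0, ebno_db / 10.0);
    return 0.5 * erfc(sqrt(ebno));
}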
`
1.1.3.4 Results
`
`In order to verify that both our generators worked, we have simulated the uncoded AWGN
`channel. The results of this simulation are shown in Figure 2. Note that the simulation and theory
`curves fit very closely.
`
[Figure 2: Uncoded BPSK Modulation. B.E.R. from 1 down to 1e-05 versus Eb/N0 from -2 to 10 dB; simulation and theory curves]
`
`
`
1.2 Decoding
`
`To decode incoming data, the decoder has to make a decision according to the bit or the
`sequence of bits it is processing. Here, we distinguish two different types of decision: hard
`decisions and soft decisions.
A decision is said to be a hard decision when the decoder compares the incoming value to a predetermined threshold. For example, in the case of BPSK modulation, the optimal threshold is zero.

[Diagram: analog encoded data → threshold → digital decoded data]
`
A decision is said to be a soft decision when the decoder weights the information it obtains from the encoded sequence. These weights (quantised analog values) can be based on the probability of a received bit in an encoded sequence.

Block-code decoders usually use hard decisions, although soft decisions are more efficient. Soft decisions are widely used in decoding schemes like the Viterbi algorithm or the MAP (maximum a posteriori) algorithm, which can both decode convolutional codes.
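For BPSK on the AWGN channel, the two decision types can be sketched as follows; the log-likelihood ratio $2r/\sigma^2$ is the standard expression for this channel model, given here for illustration only.

/* hard decision: compare the received sample to the threshold 0 */
int hard_decision(double r)
{
    return r > 0.0;
}

/* soft decision: log-likelihood ratio ln[Pr(b=1|r) / Pr(b=0|r)].
   For BPSK (+1/-1) on the AWGN channel this is 2r / sigma^2; its
   sign gives the hard decision, its magnitude the reliability. */
double soft_llr(double r, double sigma2)
{
    return 2.0 * r / sigma2;
}

The sign of the LLR reproduces the hard decision, while its magnitude measures the reliability that soft decision decoders exploit.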
`
1.3 Plan
`
`This document is the final report of an undergraduate research project performed at the
`Satellite Communications Research Centre laboratory of the University of South Australia under
`the supervision of Dr. Steven S. Pietrobon. It is divided into five chapters. The first chapter is
`the introduction.
`Chapter 2 is a presentation of convolutional codes and their decoding using the MAP
`algorithm. The derivations leading to the final MAP algorithm are presented. Its properties and
`performance are described. MAP and Viterbi decoding are also compared and MAP is shown to
`perform better than Viterbi.
`Chapter 3 is the study of the serial concatenation of two recursive systematic convolutional
`codes and its iterative MAP decoding. A large part of this chapter is dedicated to the interleaver
`effects and design. A very powerful error correcting code is presented.
`Chapter 4 describes the specific problem of the Galileo code and presents the influence of
`punctured, turbo and time varying schemes.
`
`5
`
`
`
Chapter 5 concludes the research, highlighting its main results and proposing ideas for future work.
`
In Appendices 1–5, the reader will find the C source code produced by this research project. A few more results on MAP decoding are presented in Appendix 6, and Appendix 7 briefly describes the Galileo spacecraft and its mission to Jupiter.
`
`6
`
`
`
`Chapter 2
`MAP Decoding Convolutional Codes
`
2.1 Coding convolutional codes
`
2.1.1 Convolutional codes – definitions

Convolutional codes (CC) are a family of error correcting codes which add redundant information based on the block of data they are processing and the n previous blocks of data. Convolutional codes therefore contain memory. We define n as the memory length of the code (n+1 is its constraint length) and $2^n$ as its number of states. Figure 3 shows a simple rate 1/2 convolutional code whose symbol block size is one bit.
`
[Figure 3: Convolutional encoder, with one input bit and two output bits]
`
Each input bit is stored in a linear shift register of n bits and two output bits are computed. A parallel to serial transformation is then performed to transmit the data. The code is said to be convolutional because the output of the coder is the convolution of the input bits with the coder impulse response. The coder response is defined by its binary generating polynomials $G^1$ and $G^2$.
Let $b_k$ be the input at time $k$ and $d_k^i$, $i \in \{0, 1\}$, the outputs. Thus, we have

$$d_k^i = \sum_{j=0}^{n} G_j^i \, b_{k-j} \qquad (3)$$

where $G^1 = [1, 0, 1]$ and $G^2 = [1, 1, 1]$ are the binary sequences which generate the code depicted in Figure 3, and the sum represents modulo-2 addition ($x + y \equiv x \oplus y$).
`
We can also express these relations in an algebraic way:

$$d_k^i = G^i \cdot S_k \qquad (4)$$

where $\cdot$ is the binary dot product operator and $S_k$ is the state of the encoder at time $k$, i.e., the value of the shift register. We usually express the generating polynomials in their octal form, i.e., $G^1 = 5_8$ and $G^2 = 7_8$ for the code above. We will use this form in the following sections: when we refer to a 17/15 code, this is the code with $G^1 = 17_8 = 1111_2$ and $G^2 = 15_8 = 1101_2$.

The polynomial representation is often used in coding schemes and is helpful in explaining some properties of the code.
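A minimal C sketch of such an encoder follows, reading the taps of $G^1 = 5_8$ and $G^2 = 7_8$ directly from their octal form; the tap ordering (newest bit in the least significant position) is a convention of this sketch, and the encoders used in our simulations are those of the appendices.

/* modulo-2 sum of the bits of x */
static int parity(unsigned x)
{
    int p = 0;
    while (x) { p ^= x & 1; x >>= 1; }
    return p;
}

/* Rate 1/2 NSC encoding step. The 3-bit window holds
   (b[k], b[k-1], b[k-2]) with the newest bit least significant;
   the generators, e.g. g1 = 05 and g2 = 07, are given in octal. */
void nsc_encode(int bk, unsigned *win, unsigned g1, unsigned g2,
                int *d0, int *d1)
{
    *win = ((*win << 1) | (unsigned)bk) & 07;  /* memory 2 window */
    *d0 = parity(*win & g1);                   /* d0 = G1 . Sk, eq. (4) */
    *d1 = parity(*win & g2);                   /* d1 = G2 . Sk */
}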
`
2.1.2 Systematic and non-systematic convolutional codes

The coder depicted in Figure 3 produces two coded bits. The information bit is not directly present in the output sequence: the code is said to be non-systematic (NSC: non-systematic convolutional). If the information bit is present in the encoded sequence, the code is said to be systematic (SC: systematic convolutional). Amongst the SC codes, the recursive SC (RSC) codes form an interesting family due to their good performance at low signal to noise ratios [4].

An RSC rate 1/n code can be derived from an NSC code by dividing all of the generator polynomials by one of them. Let us express the output {X, Y} of the rate 1/2 non-systematic encoder as a function of the delay operator $D$ ($D \equiv z^{-1}$) and its generating polynomials $G_1$ and $G_2$:
`
$$X(D) = G_1(D)\,d(D), \qquad Y(D) = G_2(D)\,d(D)$$

We want the output $X'$ of the RSC code to be equal to the input $d$, thus:

$$X'(D) = d(D) = \frac{X(D)}{G_1(D)}, \qquad Y'(D) = \frac{Y(D)}{G_1(D)} = \frac{G_2(D)}{G_1(D)}\,d(D)$$

Multiplying the last relation by $G_1(D)$ and using the fact that $1 + 1 = 0$ in modulo-2 arithmetic, so that $G_1(D)Y'(D) = Y'(D) + [1 + G_1(D)]\,Y'(D)$, we obtain:

$$Y'(D) = G_2(D)\,d(D) + Y'(D)\,[1 + G_1(D)]$$

This shows that we must have $G_0^1 = 1$, since we do not want the output $Y'$ at time $k$ to be a function of itself: the encoder is then causal.
The division described above is depicted in Figure 4.

[Figure 4: Divider. Input d(D), output Y(D)/G1(D)]

The final encoder is depicted in Figure 5.

[Figure 5: 1/2 Rate 16 States RSC Code. Outputs X'(D) = d(D) and Y'(D)]
`
`The main advantage of RSC codes is that they give better performance at lower SNRs than
`do NSC encoders. In the next part of the report, we define G1/G2 as an RSC encoder where G1
`is the divisor.
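A minimal sketch of a G1/G2 encoder for the memory 2 case ($G_1 = 5_8$ as divisor, $G_2 = 7_8$) is given below; the register layout and tap conventions are assumptions of this sketch, carried over from the NSC sketch above.

/* One step of the G1/G2 RSC encoder: systematic output x' = d and
   parity y' produced by the feedback division. Memory 2 example
   with G1 = 05 (divisor, taps 101) and G2 = 07 (taps 111); the
   register holds (a[k-1], a[k-2]). */
void rsc_encode(int dk, unsigned *reg, int *xk, int *yk)
{
    int a1 = *reg & 1;                /* a[k-1] */
    int a2 = (*reg >> 1) & 1;         /* a[k-2] */
    int ak = dk ^ a2;                 /* feedback: G1 = 101 taps a[k-2] */

    *xk = dk;                         /* systematic output X' = d */
    *yk = ak ^ a1 ^ a2;               /* feedforward G2 = 111 */
    *reg = ((*reg << 1) | (unsigned)ak) & 03;  /* shift a[k] in */
}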
`
`9
`
`
`
2.2 Decoding convolutional codes
`
2.2.1 The trellis structure

Convolutional codes are often referred to as 'trellis codes' due to the fact that they can easily be depicted by trellises. The trellis of a code lists the different states of the encoder and the transitions linking them. Let us consider the four state convolutional code depicted in Figure 6:

[Figure 6: Rate 1/2 4 State Convolutional Code, with one input bit and two output bits]
`
`From this representation, we can draw the trellis structure of this code:
`
[Figure 7: Trellis structure for the four state encoder. States S = 00, 01, 10, 11 shown from t = 0 to t = 4; each branch is labelled with its two output bits, with separate branch types for input 1 and input 0]
`
`One way to decode a transmitted sequence of bits is to find the most probable path amongst
`all possible paths of the trellis. This is what the Viterbi algorithm does. Another way is to find
`the most probable bit, knowing the received erroneous sequence. This is performed by the MAP
`algorithm [5].
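As an illustration, the trellis can be tabulated mechanically from the encoder description. The sketch below assumes the tap convention and the $5_8/7_8$ generators of the earlier sketches and prints, for each state and input bit, the output pair and the next state: the table on which both decoders operate.

#include <stdio.h>

/* modulo-2 sum of the bits of x */
static int parity(unsigned x)
{
    int p = 0;
    while (x) { p ^= x & 1; x >>= 1; }
    return p;
}

int main(void)
{
    const unsigned g1 = 05, g2 = 07;   /* assumed octal generators */
    unsigned s, b;
    for (s = 0; s < 4; s++)            /* state s = (b[k-1], b[k-2]) */
        for (b = 0; b <= 1; b++) {
            unsigned w  = (s << 1) | b;     /* 3-bit input window */
            unsigned d0 = parity(w & g1);   /* first output bit   */
            unsigned d1 = parity(w & g2);   /* second output bit  */
            printf("state %u, input %u -> output %u%u, next state %u\n",
                   s, b, d0, d1, w & 3);
        }
    return 0;
}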
`
`10
`
`
`
2.2.2 The Viterbi algorithm
`
`As all our research is based on the MAP algorithm, we present only a brief overview of the
`Viterbi decoding scheme here. This will also underline the differences between the two decoders.
`
`The Viterbi algorithm chooses the most likely sequence to have been transmitted. The
`selected path is the one which maximises the log–likelihood of a possible transmitted sequence,
`given the received sequence. Le