IEEE TRANSACTIONS ON COMMUNICATION TECHNOLOGY, VOL. COM-19, NO. 5, OCTOBER 1971
Copyright © 1971, by the Institute of Electrical and Electronics Engineers, Inc.
PRINTED IN THE U.S.A.
`
`Nonsystematic Convolutional Codes for
`Sequential Decoding in Space Applications
`
JAMES L. MASSEY, MEMBER, IEEE, AND DANIEL J. COSTELLO, JR., MEMBER, IEEE
`
`
`
Abstract—Previous space applications of sequential decoding have all employed convolutional codes of the systematic type, where the information sequence itself is used as one of the encoded sequences. This paper describes a class of rate 1/2 nonsystematic convolutional codes with the following desirable properties: 1) an undetected decoding error probability verified by simulation to be much smaller than for the best systematic codes of the same constraint length; 2) computation behavior with sequential decoding verified by simulation to be virtually identical to that of the best systematic codes; 3) a "quick-look-in" feature that permits recovery of the information sequence from the hard-decisioned received data without decoding, simply by modulo-two addition of the received sequences; and 4) suitability for encoding by simple circuitry requiring less hardware than encoders for the best systematic codes of the same constraint length. Theoretical analyses are given to show 1) that with these codes the information sequence is extracted as reliably as possible without decoding for nonsystematic codes and 2) that the constraints imposed to achieve the quick-look-in feature do not significantly limit the error-correcting ability of the codes, in the sense that the Gilbert bound on minimum distance can still be attained under these constraints. These codes have been adopted for use in several forthcoming space missions.
`
Paper approved by the Communication Theory Committee of the IEEE Communication Technology Group for publication without oral presentation. This work was supported by NASA under Grant NGL 15-004-026. Manuscript received May 5, 1971.
J. L. Massey is with the Department of Electrical Engineering, University of Notre Dame, Notre Dame, Ind. 46556.
D. J. Costello, Jr., is with the Department of Electrical Engineering, Illinois Institute of Technology, Chicago, Ill. 60616.
`
I. INTRODUCTION
`
THE DEEP-SPACE channel can be accurately modeled as the classical additive white Gaussian noise channel. Sequential decoding of convolutional codes is the best presently known technique for making
`
efficient use of the deep-space channel [1]. In this paper, we describe a class of convolutional codes developed especially for use with sequential decoding on the deep-space channel that offers advantages in undetected error probability and in ease of implementation over the convolutional codes previously used on the deep-space channel. In the remainder of this section, we present the background material in convolutional coding needed for the description and analysis of these new codes.
Let i_0, i_1, i_2, ... be a sequence of binary information digits which are to be encoded into a convolutional code. It is convenient to represent such a sequence by its D transform

I(D) = i_0 + i_1 D + i_2 D^2 + ... .
`
A rate R = 1/N convolutional encoder of memory order m is simply a single-input N-output linear sequential circuit whose outputs at any time depend on the present input and the m previous inputs. The N output sequences are the encoded sequences for the input information sequence and are interleaved into a single stream for transmission over the noisy communications channel. Letting T^(j)(D) be the D transform of the jth encoded sequence, that is

T^(j)(D) = t_0^(j) + t_1^(j) D + t_2^(j) D^2 + ...

we can represent the encoder by N transfer functions

G^(j)(D) = g_0^(j) + g_1^(j) D + ... + g_m^(j) D^m

in the manner that

T^(j)(D) = I(D) G^(j)(D)    (1)

where all additions and multiplications in the evaluation of (1) are carried out in modulo-two arithmetic (i.e., in the finite field of two elements). In coding terminology, the transfer functions G^(j)(D) are called the "code generating polynomials" of the convolutional code.
The convolutional code is said to be "systematic" if

T^(1)(D) = I(D)    (2)

or equivalently if

G^(1)(D) = 1.    (3)

(If T^(j)(D) = I(D) for some j ≠ 1, we exchange the labels on the first and jth output terminals and still consider the code to be systematic.) A systematic convolutional code is thus simply one in which the first transmitted sequence is the information sequence itself.
The span of (m + 1)N encoded digits generated by the encoder at times 0, 1, ..., m comprises all of the encoded digits affected by the first information digit i_0; these digits are called the "initial code word" [2], and the number n_A = (m + 1)N is called the "constraint length" of the code. The "minimum distance" d_m of the code is the fewest number of positions in which two initial code words arising from different values of i_0 are found to differ, and it is easily shown that d_m is equal to the fewest number of nonzero digits in any initial code word with i_0 = 1.

II. SYSTEMATIC VERSUS NONSYSTEMATIC CODES

The principal advantage of systematic codes is that the information sequence is simply recoverable from the encoded sequences by virtue of (2). Suppose that, as is the case in many applications, hard decisions are made at the receiver on each transmitted digit. The jth hard-decisioned received sequence

R^(j)(D) = r_0^(j) + r_1^(j) D + r_2^(j) D^2 + ...

may then be expressed as

R^(j)(D) = T^(j)(D) + E^(j)(D)    (4)
where the addition is again modulo-two, and

E^(j)(D) = e_0^(j) + e_1^(j) D + e_2^(j) D^2 + ...

is the sequence of errors in the hard decisions; that is, e_n^(j) = 1 if and only if r_n^(j) ≠ t_n^(j). For many engineering purposes, such as synchronization and monitoring, it is desirable to get reasonably good estimates of the information digits directly from the hard-decisioned received sequences without employing the lengthy decoding process, which is often carried out at a remote site much later in time. In systematic codes, since by (2) and (4)

R^(1)(D) = I(D) + E^(1)(D)    (5)

the digits in R^(1)(D) can be taken directly as the estimates of the information digits with the same reliability as the hard decisions themselves. In nonsystematic codes, where (2) does not hold, this convenience is lost, a fact which has been a major deterrent to the use of nonsystematic codes.
`
Wozencraft and Reiffen [3] have shown that for any nonsystematic code there is a systematic code with precisely the same set of initial code words for i_0 = 1 and thus with the same minimum distance d_m. This result showed that there was no advantage for nonsystematic codes when used with decoders, such as threshold decoders [2], which make their decoding decision for i_0 on the basis of the initial code word only. This fact seems to have caused the possible advantages of nonsystematic codes with other types of decoders to be often overlooked.

A final factor militating against the use of nonsystematic codes has been a psychological reluctance to entrust data to any code where the data do not appear explicitly in the encoded output, based on the twin fears that such encoders would be more difficult to implement and that catastrophic errors might occur in the recovery of the data from the hard-decisioned received sequences.

On the other hand, recent results [4]-[6] have confirmed the inherent superiority in undetected decoding error probability of nonsystematic codes over systematic codes when used with sequential decoders [7], [8] and maximum likelihood decoders [9], which may examine received digits well beyond the initial code word before
`
`
`808
`
making the decoding decision on i_0. The error probability of such decoders is most closely related to the code parameter "free distance," d_free, which is defined as the fewest number of positions in which the entire encoded sequences can differ for i_0 = 0 and i_0 = 1. It is easily shown that d_free is equal to the fewest number of nonzero digits in any entire encoded output for i_0 = 1. Costello [5] has shown that for a given memory order m, more free distance can be obtained with nonsystematic codes than with systematic codes.
The preceding considerations led the authors to look for nonsystematic convolutional codes which would exhibit good performance with sequential decoders but would also retain as much as possible the ease of extracting the information digits from the hard-decisioned received sequences that is characteristic of systematic codes. The remainder of this paper describes the successful outcome of such a search for rate R = 1/2 codes. The rate R = 1/2 was chosen as the rate of greatest practical interest in deep-space applications, since the resultant doubling of bandwidth compared to no coding suffices to attain within 1 dB of the total gain possible by coding [1] but does not lower the energy per transmitted bit unacceptably for the operation of the receiver tracking loops. The extension to lower rates of the form R = 1/N should be evident to the reader.
`
III. QUICK-LOOK-IN NONSYSTEMATIC CODES
`
It has been shown [10] that in order to avoid "catastrophic error propagation," i.e., to prevent a finite number of errors in estimating the encoded sequences from being converted into infinitely many errors in estimating the information digits, the necessary and sufficient condition is that the encoder possess a feedforward (FF) inverse. An FF inverse for an R = 1/N encoder is simply an N-input single-output linear sequential circuit with polynomial transfer functions P_j(D), j = 1, 2, ..., N, such that

Σ_{j=1}^{N} P_j(D) T^(j)(D) = D^L I(D).    (6)

That is, passing the encoded sequences through the FF inverse recovers the information sequence except for a possible delay of L time units. With the use of (1), (6) may be rewritten as

Σ_{j=1}^{N} P_j(D) G^(j)(D) = D^L.    (7)
`
For the special case N = 2, i.e., R = 1/2, (6) and (7) become

P_1(D) T^(1)(D) + P_2(D) T^(2)(D) = D^L I(D)    (8)

and

P_1(D) G^(1)(D) + P_2(D) G^(2)(D) = D^L.    (9)

(Note that for the special case of a systematic code, (3) implies that (9) is satisfied by the simple choice P_1(D) = 1, P_2(D) = 0, i.e., the FF inverse is entirely trivial.)
`
`
Equation (8) suggests forming an estimate of the information sequence I(D) by passing the hard-decisioned estimates R^(1)(D) and R^(2)(D) of T^(1)(D) and T^(2)(D) through the FF inverse. The resulting estimate is given by

P_1(D) R^(1)(D) + P_2(D) R^(2)(D) = D^L [I(D) + Δ(D)]    (10)

where

Δ(D) = δ_0 + δ_1 D + δ_2 D^2 + ...

is the sequence of errors in the estimated information digits. The use of (4) and (8) in (10) then gives

P_1(D) E^(1)(D) + P_2(D) E^(2)(D) = D^L Δ(D)    (11)
`
which is the basic equation relating the errors in the hard-decisioned received sequences to the errors in the estimated information digits. Suppose, as would be the case for the deep-space channel, that each hard-decisioned received digit has probability p of being in error independently of the other digits, i.e., e_n^(j) = 1 with probability p independently of the values of the other error digits. It follows from (11) that if p << 1, then δ_l = 1 with probability p times the total number of nonzero terms in the polynomials P_1(D) and P_2(D). Letting W[P_j(D)] denote the number of nonzero terms in P_j(D), i.e., the "Hamming weight" of P_j(D), we may write the probability of error p_d in the estimated information digits for p << 1 as

p_d = {W[P_1(D)] + W[P_2(D)]} p.    (12)
`
We call the quantity

A = W[P_1(D)] + W[P_2(D)]    (13)

appearing in (12) the "error amplification factor," since it relates the increased error probability at the output of the FF inverse to the input error probability. We note that A takes on its minimum possible value of 1 for systematic codes, where we may choose P_1(D) = 1 and P_2(D) = 0.
`
For nonsystematic codes, the minimum possible error amplification factor is A = 2 and is attained for codes which permit P_1(D) = P_2(D) = 1. For such codes, we note from (10) that

R^(1)(D) + R^(2)(D) = D^L [I(D) + Δ(D)]    (14)

so that the FF inverse which forms the estimates of the information digits from the hard-decisioned received sequences is instrumented simply by a single modulo-two adder which adds these sequences together. We note also from (9) that the choice P_1(D) = P_2(D) = 1 is possible if and only if

G^(1)(D) + G^(2)(D) = D^L    (15)
that is, if and only if the two code generating polynomials differ in only a single term. We call any R = 1/2 nonsystematic convolutional code satisfying (15) a quick-look-in code, and note that such codes permit recovery of the information sequence I(D) from the hard-decisioned received sequences using a single modulo-two adder and with the minimum error amplification factor of 2 for nonsystematic codes.

Quick-look-in codes thus allow the information sequence to be recovered from the hard-decisioned received sequences almost as simply and as reliably as do systematic codes. It remains to show that there are quick-look-in codes which give better performance with sequential decoding than the best systematic codes with the same constraint length.
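As an illustration of (14), the following sketch (our construction, using a small quick-look-in code rather than the m = 35 code of Section IV) recovers the information digits from the two hard-decisioned received sequences with a single modulo-two addition per digit; the measured error rate comes out near 2p, the error amplification predicted by (12):

# Sketch: quick-look recovery per (14) for the quick-look-in code with
# G^(1)(D) = 1 + D^2 and G^(2)(D) = 1 + D + D^2, which satisfy (15) with L = 1.
import random

def gf2_poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out

def bsc(bits, p, rng):
    """Binary symmetric channel: flip each hard-decisioned digit with probability p."""
    return [b ^ (rng.random() < p) for b in bits]

rng = random.Random(1)
p = 0.01
info = [rng.randrange(2) for _ in range(100_000)]
r1 = bsc(gf2_poly_mul(info, [1, 0, 1]), p, rng)
r2 = bsc(gf2_poly_mul(info, [1, 1, 1]), p, rng)

# By (14), R^(1) + R^(2) = D[I(D) + Delta(D)], so the estimate of i_k is
# r1[k+1] XOR r2[k+1]: one modulo-two adder and no decoding.
est = [r1[k + 1] ^ r2[k + 1] for k in range(len(info))]
print(sum(e != i for e, i in zip(est, info)) / len(info))  # about 2p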
`
IV. SEQUENTIAL DECODING CONSIDERATIONS AND CODE CONSTRUCTION
`
There are two important characteristics of a convolutional code when used with a sequential decoder, namely, the undetected error probability in the decoder output and the distribution of computation. The latter arises from the fact that the amount of computation with sequential decoding is a random variable. The code parameters most closely related to computational performance are the "column distances" d_k, k = 0, 1, ..., m, where d_k is defined [11] as the minimum distance of the code of memory order k obtained by dropping all terms of degree greater than k from the original code generating polynomials. For instance, d_1 is the minimum distance of the code of memory order 1 with code generating polynomials g_0^(1) + g_1^(1) D and g_0^(2) + g_1^(2) D. It is readily checked that d_0 = 2 is obtained if and only if g_0^(1) = g_0^(2) = 1, and that d_1 = 3 if and only if d_0 = 2 and the values of g_1^(1) and g_1^(2) are different. Simulations have shown that the amount of computation is unacceptably large if d_1 < 3, essentially because the distance between the upper and lower halves of the encoding tree [8] is not increasing rapidly enough to permit early rejection by the decoder of an incorrect hypothesis of i_0. These considerations suggest restricting the search for codes to be used with sequential decoding to those for which

g_0^(1) + g_1^(1) D = 1    (16a)

and

g_0^(2) + g_1^(2) D = 1 + D.    (16b)

From (15), we see that the only quick-look-in codes satisfying (16) are those with L = 1, namely, those for which

G^(1)(D) + G^(2)(D) = D.    (17)

Simulations of R = 1/2 convolutional codes have shown that while d_1 is the main determiner of good computational performance, the further column distances d_2, d_3, ... should grow as rapidly as possible to minimize the need for long searches into the encoding tree before an incorrect hypothesis of i_0 is rejected. This suggests the desirability of those quick-look-in codes satisfying (16) and (17) and constructed by choosing g_2^(1), g_3^(1), ..., g_m^(1) (which by (17) coincide with g_2^(2), g_3^(2), ..., g_m^(2)) in order so that the choice of g_k^(1) maximizes d_k.
`
This does not yet uniquely specify the code, since either choice of g_k^(1) will occasionally give the same d_k.

The parameter of the code most affecting the undetected error probability of the decoder output is the free distance d_free. It has been empirically observed that, for a given d_m, a large d_free is associated with a high density of "ones" in the code generating polynomials. This suggests that in the preceding procedure the choice g_k^(1) = 1 should be made whenever either choice gives the same d_k. Note that this rule now uniquely specifies a quick-look-in code of memory order m, which can be constructed by the following algorithm.
`
Algorithm 1:

Step 1: Choose g_0^(1) = g_0^(2) = g_1^(2) = 1 and g_1^(1) = 0. Set d_1 = 3 and set k = 2.
Step 2: Set g_k^(1) = g_k^(2) = 0 and compute d_k. If d_k > d_{k-1}, go to step 4.
Step 3: Set g_k^(1) = g_k^(2) = 1.
Step 4: If k = m, stop. Otherwise increase k by 1 and go to step 2.
`
Three comments about this algorithm are in order. First, there is no necessity to recompute d_k in step 3, since it can be shown that if setting g_k^(1) = g_k^(2) = 0 does not cause d_k to exceed d_{k-1}, then neither does setting g_k^(1) = g_k^(2) = 1. Secondly, if d_k > d_{k-1} in step 2, then it must be that d_k = d_{k-1} + 1. This implies that d_m will be given exactly by adding 3 (the value of d_1 the first time step 2 is executed) to the number of zeros among g_2^(1), g_3^(1), ..., g_m^(1). These facts can be proved easily from consideration of the generator matrix [3] of the convolutional code. Thirdly, the only computationally involved part of the algorithm is the calculation of d_k in step 2. The simplest way to compute d_k is by a sequential-decoding-type search as suggested by Forney [12].
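A sketch of Algorithm 1 in code follows (ours, not the authors' program); it computes d_k by brute force over the 2^k choices of i_1, ..., i_k, which is practical only for small m, whereas the search technique of [12] is what makes m = 47 feasible. For m = 8 it is expected to reproduce the leading octal digits 533 and 733 of the generators (18) given below:

from itertools import product

def column_distance(g1, g2, k):
    """d_k: fewest nonzero encoded digits at times 0, ..., k over inputs with i_0 = 1."""
    best = 2 * (k + 1)
    for tail in product((0, 1), repeat=k):
        info = (1,) + tail
        w = 0
        for n in range(k + 1):  # encoded pair at time n
            t1 = t2 = 0
            for j in range(n + 1):
                t1 ^= info[j] & g1[n - j]
                t2 ^= info[j] & g2[n - j]
            w += t1 + t2
        best = min(best, w)
    return best

def algorithm_1(m):
    g1 = [1, 0] + [0] * (m - 1)  # step 1: (16a)
    g2 = [1, 1] + [0] * (m - 1)  # step 1: (16b), so that d_1 = 3
    d = 3
    for k in range(2, m + 1):
        dk = column_distance(g1, g2, k)  # step 2: try g_k^(1) = g_k^(2) = 0
        if dk > d:
            d = dk  # step 4: keep the zeros; d_k = d_{k-1} + 1
        else:
            g1[k] = g2[k] = 1  # step 3: ones give the same d_k, no recomputation
    return g1, g2

def to_octal(g):
    """Write coefficients g_0 g_1 ... in the paper's octal shorthand."""
    bits = "".join(map(str, g)) + "0" * (-len(g) % 3)
    return "".join(str(int(bits[i:i + 3], 2)) for i in range(0, len(bits), 3))

g1, g2 = algorithm_1(8)
print(to_octal(g1), to_octal(g2))  # expected: 533 733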
Algorithm 1 was programmed for the UNIVAC 1107 computer in the University of Notre Dame Computing Center with m = 47 (code constraint length n_A = (m + 1)N = 96 digits). The algorithm yielded the following generators:
`
`90(2) + 91(2)D = 1 + D.
`
`(1610)
`
`G‘“ = [533,533,676,737,3.=35,3]B
`
`(18a)
`
`From (15), we see that the only quick—look-in codes
`satisfying (16) are those for L = 1, namely those for
`which
`
`and
`
`G”) = [733,533,676,737,355,3]8
`
`(18b)
`
`G(I)(D) + G(2)(D)
`
`___
`
`(17)
`
`
where we have adopted the convention of specifying a polynomial by its sequence of binary coefficients written in octal. For example, G^(1) = [53]_8 denotes the polynomial G^(1)(D) = 1 + D^2 + D^4 + D^5, since 53 is octal for the binary sequence 101011.

It should be clear that the codes obtained from Algorithm 1 are "nested" in the sense that the code of memory order m', m' < m, can be obtained by dropping the terms of degree greater than m' from the code generating polynomials of the latter code. Thus (18) serves to specify all the codes with memory order m = 47 or less given by Algorithm 1.
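Both the octal shorthand and the nesting property can be checked mechanically. The helper below (ours; the function name is our own) expands an octal specification into coefficients and verifies that the m = 35 generator of (19a), quoted in the next paragraph, is a truncation of (18a):

def from_octal(spec, m):
    """Expand the paper's octal shorthand into coefficients g_0, ..., g_m."""
    bits = "".join(format(int(c), "03b") for c in spec)
    return [int(b) for b in bits[: m + 1]]

g18a = from_octal("5335336767373553", 47)  # G^(1) of (18a), memory order 47
g19a = from_octal("533533676737", 35)      # G^(1) of (19a), memory order 35
assert g19a == g18a[:36]                   # nesting: drop terms of degree > 35
print(from_octal("53", 5))                 # [1, 0, 1, 0, 1, 1], i.e., 1 + D^2 + D^4 + D^5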
`
`
`
For instance, the code with memory order m = 35 has the code generating polynomials

G^(1) = [533,533,676,737]_8    (19a)

and

G^(2) = [733,533,676,737]_8.    (19b)

The code with m = 31 has been selected by the National Aeronautics and Space Administration (NASA) for use in the Pioneer F/G Jupiter fly-by mission. Since there are 8 zeros among g_2^(1), g_3^(1), ..., g_31^(1), it follows that this code has minimum distance d_m = 3 + 8 = 11. Using the Jet Propulsion Laboratory (JPL) hardware sequential decoder to search for a minimum-weight encoded sequence, Layland [13] verified that this code has free distance d_free = 23. This large value of free distance ensures extremely low undetected error probability when the code is sequentially decoded. This same code has also been selected by the German Institute for Space Research (GfW) for use in its HELIOS probe. The code with m = 23 is being used by NASA in the study phase of the Planetary Explorer program.

V. PERFORMANCE WITH SEQUENTIAL DECODING

To verify the effectiveness with sequential decoding of the quick-look-in codes of Algorithm 1, the code with memory order m = 35 was tested on several simulated channels together with the best known m = 35, R = 1/2 systematic code, namely, the adjoint [14] of Forney's extension [12] of one of Bussgang's optimal codes. This systematic code has the code generating polynomials

G^(1) = [400,000,000,000]_8    (20a)

and

G^(2) = [715,473,701,317]_8.    (20b)

The two codes were tested on both the binary symmetric channel and the additive white Gaussian noise (deep-space) channel as simulated on the UNIVAC 1107 computer, which was also used to do the sequential decoding. Data were encoded into frames of 256 information bits followed by 35 zero bits to truncate the memory of the code. For the Gaussian channel, the output digits were quantized to 8 levels (3-bit quantization) in the manner suggested by Jacobs [1]. The Fano sequential decoding algorithm [7] was used for decoding, and up to 50 000 computations were allowed to decode each frame. Frames requiring more than 50 000 computations were considered "erased" by the decoder. A computation was defined to be any "forward look" in the Fano algorithm. One thousand frames were decoded on each channel that was simulated.

Binary symmetric channels (BSC) with channel error probability p of 0.045 and 0.057 were simulated. For these values of p, the code rate R = 1/2 is equal to R_comp and 1.1 × R_comp, respectively, where R_comp is the computational cutoff rate of the sequential decoder [8]. The results of this simulation are given in Tables I and II.
`
TABLE I
SIMULATION RESULTS FOR DECODING 1000 FRAMES OF 256 BITS EACH FOR THE NONSYSTEMATIC CODE OF (19)

                                         BSC         BSC       Gaussian
                                      p = 0.057   p = 0.045   Eb/N0 = 2
Fraction of frames erased
  (50 000 or more computations)         0.249       0.008       0.005
Fraction of frames decoded in error     0.000       0.000       0.000

[The rows giving the fraction of frames with computation N or more, for N from 400 to 10 000, are not legible in this copy.]

TABLE II
SIMULATION RESULTS FOR DECODING 1000 FRAMES OF 256 BITS EACH FOR THE SYSTEMATIC CODE OF (20)

                                         BSC         BSC       Gaussian
                                      p = 0.057   p = 0.045   Eb/N0 = 2
Fraction of frames erased
  (50 000 or more computations)         0.108       0.004       0.004
Fraction of frames decoded in error     0.087       0.002       0.000

[The rows giving the fraction of frames with computation N or more, for N from 400 to 10 000, are not legible in this copy.]

These results show little difference in the distribution of computation between the nonsystematic and the systematic code but show a dramatic difference in undetected error probability in the decoder output. In fact, no decoding errors whatsoever were committed with the nonsystematic code. For the noisier (p = 0.057) binary symmetric channel, it appears at first glance that the systematic code gives a reduced probability of large computational loads, in that only 10 percent of the frames were erased, compared to 25 percent for the nonsystematic code. It must be noted, however, that the undetected error probability was nearly 10 percent for the systematic code. Thus, 15 percent of the frames are erased with the nonsystematic code but not with the systematic code, but the latter code gives incorrect decoding two-thirds of the time for these frames. In other words, the apparent improvement in computation is the result of decoding erroneously and is a "fools rush in where angels fear to tread" phenomenon.
Tables I and II also give the simulation results for the Gaussian channel with Eb/N0 = 2 (3 dB), where Eb is the transmitted energy per information bit and N0 is the one-sided noise power spectral density. No decoding errors were observed for either code, and the distributions of computation are virtually identical. The code rate R = 1/2 is equal to R_comp for the quantized channel. Comparison to the binary symmetric channel with R = R_comp (p = 0.045) shows essentially identical performance on both channels.

The conclusions of these simulations are that the nonsystematic quick-look-in code gives comparable computational performance but much lower undetected error probability than the best systematic code of the same memory order. In the next section, we show the rather surprising fact that the encoder for the nonsystematic code is also easier to implement.
`
`VI. ENCODER IMPLEMENTATION
`
The obvious realization of an R = 1/2 convolutional encoder is shown in Fig. 1 and is seen to require

W[G^(1)(D)] + W[G^(2)(D)] - 2

two-input modulo-two adders, as well as the necessary delay cells for storing the past m information bits. This realization of the encoder for the systematic code of (20) requires 21 modulo-two adders and is the customary encoding circuit for this code. For the quick-look-in nonsystematic code of (19), this realization of the encoder would require 53 modulo-two adders, this large number resulting from the fact that about three-fourths of the coefficients in the generators are ones.
A simple "trick" can be used to reduce the number of modulo-two adders needed to implement a generator whose density of ones exceeds 50 percent, namely, implement the binary complement of the desired generator together with a circuit that does the necessary complementation of the output. Letting

G(D) = g_0 + g_1 D + ... + g_m D^m

and letting the complementary generator be

Ḡ(D) = ḡ_0 + ḡ_1 D + ... + ḡ_m D^m

where

ḡ_i = g_i + 1

we may write

G(D) = Ḡ(D) + 1 + D + ... + D^m

or

G(D) = Ḡ(D) + (1 + D^(m+1))/(1 + D).    (21)
`
`(21)
`
It is readily verified that the circuit in Fig. 2 realizes the transfer function G(D) in the form (21) and requires

W[Ḡ(D)] + 2 = m + 3 - W[G(D)]    (22)

two-input modulo-two adders. For example, this circuit realizes the transfer function G^(2)(D) of (19b) with only 10 modulo-two adders.

Fig. 1. Obvious realization of encoder for R = 1/2 convolutional code. (The original symbol key, distinguishing unit-delay cells, modulo-two adders, and no-connection taps, is not legible in this copy.)

Fig. 2. Alternate realization of transfer function G(D) of form suggested by (21).
`
By virtue of (1) and (17), we have for the code of (19) that

T^(1)(D) = DI(D) + T^(2)(D).    (23)

Hence, if the circuit of Fig. 2 is used to realize G^(2)(D), so that its input is I(D) and its output is T^(2)(D), then T^(1)(D) can be formed with one further modulo-two adder which adds the output of the first delay cell in the circuit of Fig. 2, namely DI(D), to the circuit output T^(2)(D). Hence the complete encoder for the quick-look-in nonsystematic code of (19) can be realized with only 11 two-input modulo-two adders.

If the circuit of Fig. 2 is used to realize G^(2)(D) for the systematic code of (20), the complete encoder can be realized with only 16 modulo-two adders. This is a substantial reduction from the 21 adders needed for the encoder of Fig. 1, but it is still surprisingly more than the 11 adders required in the encoder for the nonsystematic code. The obvious conclusion is that nonsystematic codes do not necessarily entail more complicated encoders than systematic codes of the same memory order.
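The adder counts above follow directly from the generator weights. The bookkeeping sketch below (ours) recomputes them from the octal specifications of (19) and (20):

def weight(octal_spec, m):
    """Hamming weight W[G(D)] of a generator given in the octal shorthand."""
    bits = "".join(format(int(c), "03b") for c in octal_spec)
    return sum(int(b) for b in bits[: m + 1])

m = 35
w1n = weight("533533676737", m)  # G^(1) of (19a)
w2n = weight("733533676737", m)  # G^(2) of (19b)
w1s, w2s = weight("400000000000", m), weight("715473701317", m)  # code (20)

print(w1n + w2n - 2)    # Fig. 1 realization of (19): 53 adders
print(w1s + w2s - 2)    # Fig. 1 realization of (20): 21 adders
print(m + 3 - w2n)      # Fig. 2 realization of G^(2) of (19b): 10 adders
print(m + 3 - w2n + 1)  # plus one adder to form T^(1) via (23): 11 in all
print(m + 3 - w2s)      # Fig. 2 realization of G^(2) of (20b): 16 adders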
`
VII. THEORETICAL CONSIDERATIONS: THE GILBERT BOUND

Although the simulations reported previously confirm the practical value of quick-look-in nonsystematic codes, it may be reassuring to note that the constraints (16)
`
`
`812
`
and (17), which define the quick-look-in codes suitable for sequential decoding, are compatible with the existence of codes with large minimum distance. We shall demonstrate this fact by showing that the class of codes satisfying (16) and (17) meets the same asymptotic Gilbert bound as the general class of rate R = 1/2 convolutional codes. We follow closely the derivation of the Gilbert bound given in [2].

For any D transform

P(D) = p_0 + p_1 D + p_2 D^2 + ...
`
we shall hereafter write {P(D)} to denote the polynomial of degree m or less obtained by dropping all terms of degree greater than m from P(D). With this convention, {T^(1)(D)} and {T^(2)(D)} are just the initial code word of a rate R = 1/2 convolutional code. From (23) it follows that

{T^(1)(D)} + {T^(2)(D)} = {DI(D)}    (24)

for quick-look-in codes, and hence that specification of an initial code word also specifies all digits in {I(D)} except i_m. Thus only two choices of {I(D)} are possible for a given initial code word. Next, it is easily shown [2, p. 14] that the specification of {T^(1)(D)} and an {I(D)} with i_0 = 1 uniquely determines G^(1)(D). Hence only two quick-look-in codes can have a given initial code word in common that is produced by information sequences with i_0 = 1.
Let d be the greatest minimum distance d_m over all codes of memory order m satisfying (16) and (17). Since an initial code word contains n_A positions, there are at most

N = Σ_{i=0}^{d} C(n_A, i)    (25)

different initial code words of Hamming weight d or less. But no code has distance greater than d, and hence every code must have at least one initial code word of weight d or less resulting from some {I(D)} with i_0 = 1. Noting that there are exactly 2^(m-1) codes which satisfy (16) and (17), since g_2^(1), g_3^(1), ..., g_m^(1) may be selected arbitrarily, it must be true that

2N ≥ 2^(m-1)    (26)

since each initial code word appears in only two codes for {I(D)} with i_0 = 1. Using the well-known [3] inequality

N ≤ 2^(n_A H(d/n_A)),    for d/n_A ≤ 1/2    (27)

where H(x) = -x log_2 x - (1 - x) log_2 (1 - x) is the binary entropy function, the inequality (26) becomes

H(d/n_A) ≥ (m - 2)/n_A.    (28)

Noting that n_A = 2(m + 1) for R = 1/2, it follows from (28) that

lim_{n_A → ∞} H(d/n_A) ≥ 1/2 = 1 - R    (29)

which is the usual asymptotic Gilbert bound on minimum distance for general convolutional codes of rate R = 1/2 [2].
We conclude that the class of quick-look-in codes satisfying (16) and (17) is "good" in the sense that there exist codes of this type with minimum distance at least as great as that guaranteed by the asymptotic Gilbert bound.
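As a numerical illustration (ours) of the counting argument in (25) and (26): for m = 31 the bound guarantees a quick-look-in code with d_m of at least 7, while the Algorithm 1 code of that memory order actually achieves d_m = 11:

from math import comb, log2

m = 31
nA = 2 * (m + 1)  # constraint length n_A for R = 1/2

# Smallest d for which 2N >= 2^(m-1) can hold, with N as in (25);
# the best code of the class must have minimum distance at least this d.
d = 0
while log2(2 * sum(comb(nA, i) for i in range(d + 1))) < m - 1:
    d += 1
print(d)  # 7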
`
VIII. CONCLUSIONS
`
In this paper, it has been shown that the desirability of quick and reliable extraction of the information digits from the hard-decisioned received sequences of a convolutional code led naturally to the formulation of quick-look-in nonsystematic codes. An algorithm was given for the generation of a class of R = 1/2 quick-look-in codes designed especially for use with sequential decoding. It was noted that codes of this class have been chosen for several deep-space missions. The quality of these codes relative to the best systematic codes was verified by a simulation which showed comparable computational performance with sequential decoding but a much lower undetected error probability for the nonsystematic codes. It was shown further that the nonsystematic codes have simpler encoder realizations than the systematic codes. Finally, a proof was given that the quick-look-in constraint is compatible with the existence of codes whose minimum distance satisfies the asymptotic Gilbert bound.

Although not explicitly mentioned before, it should be noted that the small error amplification factor of quick-look-in codes is also advantageous in that it results in a small number of actual decoding errors during the "error events" when the sequential decoder has given incorrect estimates of the received sequence. For a full discussion of such error events, the reader is referred to [15, Appendix I].
`
`ACKNOWLEDGMENT
`
The authors wish to acknowledge that the idea for quick-look-in codes occurred independently to Dr. G. David Forney, Jr., of the Codex Corporation, Newton, Mass., although Dr. Forney formulated no specific codes of this type. The contributions of K. Vairavan, J. Brennan, J. Wruck, and J. Geist to the computer programs used in the simulations reported herein are gratefully acknowledged.
`
`REFERENCES
`
`[1]
`
`I. M. Jacobs, “Sequential decoding for efficient, communica-
`tion from deep space,” IEEE Trans. Commun. Techno]. vol.
`COM-15, Aug. 1967, pp. 492—501.
`I21 J.
`1.. Massey, Threshold Decoding. Cambridge, Mass:
`M. I. T. Press. 1963.
`t3l J. M. Wozencraft and B. Reiffen. Sequential Decoding.
`Cambridge. Mass: M. 1. T. Press. 1961.
`1C. A. Buchcr and J. A. Heller. “Error probability bounds for
`systematic eonvolutional codes." IEEE Trans. Inform. The—
`ory, vol. IT-16, Mar. 1970. pp. 219—224.
`15] D. J. Costello, Jr. “A strong lower bound on free distance
`for periodic eonvolutional codes." presented at the IEEE Int.
`Symp,
`Information Theory. Noordwijk.
`the Netherlands.
`June 1970.
`
`I4l
`
`
`
`
[6] F. Jelinek and L. R. Bahl, "Maximum likelihood and sequential decoding of short constraint length convolutional codes," in Proc. 7th Annu. Allerton Conf., Oct. 1969, pp. 130-139.
[7] R. M. Fano, "A heuristic discussion of probabilistic decoding," IEEE Trans. Inform. Theory, vol. IT-9, Apr. 1963, pp. 64-74.
[8] J. M. Wozencraft and I. M. Jacobs, Principles of Communication Engineering. New York: Wiley, 1965, pp. 425-476.
[9] A. J. Viterbi, "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm," IEEE Trans. Inform. Theory, vol. IT-13, Apr. 1967, pp. 260-269.
[10] J. L. Massey and M. K. Sain, "Inverses of linear sequential circuits," IEEE Trans. Comput., vol. C-17, Apr. 1968, pp. 330-337.
[11] D. J. Costello, Jr., "Construction of convolutional codes for sequential decoding," Ph.D. dissertation, Dep. Elec. Eng., Univ. Notre Dame, Notre Dame, Ind., Aug. 1969.
[12] G. D.