`Case 6:20-cv-01042-ADA Document 1-3 Filed 11/11/20 Page 1 of 13
`
`6:20-cv-1042
`
EXHIBIT C
`
`
`
US007916781B2

(12) United States Patent
     Jin et al.

(10) Patent No.:      US 7,916,781 B2
(45) Date of Patent:  Mar. 29, 2011
`
`
`
`
`
`(56)
`
`
`
`
`References Cited
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`U.S. PATENT DOCUMENTS
`
`
`
`...................... 714/755
`5181207 A *
`1/1993 Ch
`
`
`
`
`
`
`
`
`
`5:392:299 A
`2/1995 Rhifigé: 31.
`
`
`
`
`5,530,707 A
`6/1996 Lin
`
`
`
`5,751,739 A
`5/1998 Seshadri et a1.
`
`
`
`5,802,115 A
`9/1998 Meyer
`
`
`
`
`335121!,2? :
`igggg gang et 3L
`ang
`5
`a
`
`
`
`6,023,783 A
`2/2000 Divsalar et al.
`
`
`
`2:83:33 2
`$3888 gfiilsmakesm et 31'
`
`
`
`6,044,116 A
`3/2000 Wang
`
`
`
`
`6,094,739 A
`7/2000 Miller et al.
`
`
`
`
`6,195,396 B1 *
`2/2001 Fang et al.
`.................... 375/261
`
`
`
`
`
`6,396,423 B1
`5/2002 Laumen et 31.
`6,437,714 B1
`8/2002 Kim et 31.
`
`
`
`
`
`
`6,732,328 B1 *
`5/2004 McEwen et al.
`.............. 714/795
`
`
`
`
`
`
`
`
`
`6,859,906 B2
`2/2005 Hammonsetal.
`
`
`
`
`
`
`7,089,477 B1
`8/2006 .Dlvsalar et al.
`
`(Continued)
`OTHER PUBLICATIONS
`
`
`tSft-O tAPPMdlf
`B dtt,S.,tl.,“ASft-I
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`utpu
`0 u e or
`e a
`ene e 0
`npu
`0
`0
`
`
`
`
`
`
`Iterative Decoding ofConcatenated Codes, IEEE Commumcatzons
`
`
`
`
`Letters’ 1(1):22-24, Jan' 1997'
`(Continued)
`
`
`Primary Examiner i Dac V Ha
`
`
`
`
`
`
`(74) Attorney, Agent, or Firm 7 Perkins Coie LLP
`
`
`
`
`(54) SERIAL CONCATENATION OF
`INTERLEAVED CONVOLUTIONAL CODES
`
`
`FORMING TURBO-LIKE CODES
`
`
`
`
`
`
`(75)
`
`
`
`
`
`
`
`
`
`
`Inventors: Hui Jin, Glen Gardner, NJ (US); Aamod
`
`
`
`Khandekar, Pasadena, CA (US);
`
`
`
`
`Robert J. McEliece, Pasadena, CA (US)
`
`
`
`
`
`(73) Assignee: California Institute of Technology,
`
`
`Pasadena’ CA (Us)
`
`
`
`
`
`
`Subject. to any disclaimer, the term ofthis
`
`
`
`
`patent is extended or adjusted under 35
`
`
`
`
`U.S.C. 154(b) by 424 days.
`
`
`( * ) Notice:
`
`
`
`
`
`
`
`
`(21) App], No.: 12/165,606
`.
`~
`
`
`Ffled‘
`
`
`
`
`(22)
`(65)
`
`
`
`
`Jun' 30’ 2008
`
`
`Prior Publication Data
`
`
`
`
`US 2008/0294964 A1
`NOV. 27, 2008
`
`
`
`
`
`
`
`
`
`Related US. Application Data
`.
`.
`.
`.
`
`
`
`
`
`
`(63) Cont1nuation of applicatlon No. 11/542,950, filed on
`
`
`
`
`
`
`
`Oct. 3, 2006, now Pat. No. 7,421,032, which is a
`
`
`
`
`
`continuation of application No. 09/861,102, filed on
`
`
`
`
`
`
`
`May 18, 2001, now Pat. No. 7,116,710, which is a
`
`
`
`
`
`continuation-in-part of application No. 09/922,852,
`
`
`
`
`
`
`
`filed on Aug. 18, 2000, now Pat. No. 7,089,477.
`
`
`
`
`
`
`(60) Provisional application No. 60/205,095, filed on May
`
`
`18, 2000.
`
`Int Cl
`
`
`
`(2006.01)
`H04B 1/66
`
`
`
`
`
`
`
`,,,,,,,, 375/240; 375/285; 375/296; 714/801;
`(52) US, Cl,
`714/804
`
`
`
`
`
`(58) Field of Classification Search .................. 375/240
`
`
`
`
`
`
`375/240-24, 254, 285, 295, 296, 260; 714/755,
`
`
`
`
`714/758, 800, 801, 804, 805
`
`
`
`
`
`
`
`See application file for complete search history.
`
`
`
`
`
`
`
`
`
`
`
`(51)
`
`
`200\
`
`
`
`
`
`
`
`
`
`
`
`ABSTRACT
`(57)
`
`
`
`
`
`
`
`
`A serial concatenated coder includes an outer coder and an
`
`
`
`
`
`
`
`
`
`inner coder. The outer coder irregularly repeats bits in a data
`
`
`
`
`
`
`
`
`block according to a degree profile and scrambles the
`
`
`
`
`
`
`
`
`
`repeated bits The scrambled and repeated bits are input to an
`
`
`
`
`
`
`
`
`inner coder, which has a rate substantially close to one.
`
`
`
`
`22 Claims, 5 Drawing Sheets
`
`
`
`k 2
`
`02
`
`
`
`204
`
`
`
`206
`
`
`
`
`
U.S. PATENT DOCUMENTS

2001/0025358 A1    9/2001    Eidson et al.

OTHER PUBLICATIONS

Benedetto, S., et al., "A Soft-Input Soft-Output Maximum A Posteriori (MAP) Module to Decode Parallel and Serial Concatenated Codes," The Telecommunications and Data Acquisition Progress Report (TDA PR 42-127), pp. 1-20, Nov. 1996.
Benedetto, S., et al., "Bandwidth efficient parallel concatenated coding schemes," Electronics Letters, 31(24):2067-2069, Nov. 1995.
Benedetto, S., et al., "Design of Serially Concatenated Interleaved Codes," ICC 97, vol. 2, pp. 710-714, Jun. 1997.
Benedetto, S., et al., "Parallel Concatenated Trellis Coded Modulation," ICC 96, vol. 2, pp. 974-978, Jun. 1996.
Benedetto, S., et al., "Serial Concatenated Trellis Coded Modulation with Iterative Decoding," Proceedings 1997 IEEE International Symposium on Information Theory (ISIT), Ulm, Germany, p. 8, Jun. 29-Jul. 4, 1997.
Benedetto, S., et al., "Serial Concatenation of Interleaved Codes: Performance Analysis, Design, and Iterative Decoding," The Telecommunications and Data Acquisition Progress Report (TDA PR 42-126), pp. 1-26, Aug. 1996.
Benedetto, S., et al., "Serial concatenation of interleaved codes: performance analysis, design, and iterative decoding," Proceedings 1997 IEEE International Symposium on Information Theory (ISIT), Ulm, Germany, p. 106, Jun. 29-Jul. 4, 1997.
Benedetto, S., et al., "Soft-Output Decoding Algorithms in Iterative Decoding of Turbo Codes," The Telecommunications and Data Acquisition Progress Report (TDA PR 42-124), pp. 63-87, Feb. 1996.
Berrou, C., et al., "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes," ICC 93, vol. 2, pp. 1064-1070, May 1993.
Digital Video Broadcasting (DVB); User guidelines for the second generation system for Broadcasting, Interactive Services, News Gathering and other broadband satellite applications (DVB-S2), ETSI TR 102 376 V1.1.1 Technical Report, pp. 1-104 (p. 64), Feb. 2005.
Divsalar, D., et al., "Coding Theorems for 'Turbo-Like' Codes," Proceedings of the 36th Annual Allerton Conference on Communication, Control, and Computing, Monticello, Illinois, pp. 201-210, Sep. 1998.
Divsalar, D., et al., "Effective free distance of turbo codes," Electronics Letters, 32(5):445-446, Feb. 1996.
Divsalar, D., et al., "Hybrid Concatenated Codes and Iterative Decoding," Proceedings 1997 IEEE International Symposium on Information Theory (ISIT), Ulm, Germany, p. 10, Jun. 29-Jul. 4, 1997.
Divsalar, D., et al., "Low-Rate Turbo Codes for Deep-Space Communications," Proceedings 1995 IEEE International Symposium on Information Theory (ISIT), Whistler, BC, Canada, p. 35, Sep. 1995.
Divsalar, D., et al., "Multiple Turbo Codes for Deep-Space Communications," The Telecommunications and Data Acquisition Progress Report (TDA PR 42-121), pp. 66-77, May 1995.
Divsalar, D., et al., "Multiple Turbo Codes," MILCOM '95, vol. 1, pp. 279-285, Nov. 1995.
Divsalar, D., et al., "On the Design of Turbo Codes," The Telecommunications and Data Acquisition Progress Report (TDA PR 42-123), pp. 99-121, Nov. 1995.
Divsalar, D., et al., "Serial Turbo Trellis Coded Modulation with Rate-1 Inner Code," Proceedings 2000 IEEE International Symposium on Information Theory (ISIT), Sorrento, Italy, p. 194, Jun. 2000.
Divsalar, D., et al., "Turbo Codes for PCS Applications," IEEE ICC '95, Seattle, WA, USA, vol. 1, pp. 54-59, Jun. 1995.
Jin, H., et al., "Irregular Repeat-Accumulate Codes," 2nd International Symposium on Turbo Codes, Brest, France, 25 pages, Sep. 2000.
Jin, H., et al., "Irregular Repeat-Accumulate Codes," 2nd International Symposium on Turbo Codes & Related Topics, Brest, France, pp. 1-8, Sep. 2000.
Richardson, T.J., et al., "Design of Capacity-Approaching Irregular Low-Density Parity-Check Codes," IEEE Transactions on Information Theory, 47(2):619-637, Feb. 2001.
Richardson, T.J., et al., "Efficient Encoding of Low-Density Parity-Check Codes," IEEE Transactions on Information Theory, 47(2):638-656, Feb. 2001.
Wiberg, N., et al., "Codes and Iterative Decoding on General Graphs," Proceedings 1995 IEEE International Symposium on Information Theory (ISIT), Whistler, BC, Canada, p. 468, Sep. 1995.
Aji, S.M., et al., "The Generalized Distributive Law," IEEE Transactions on Information Theory, 46(2):325-343, Mar. 2000.
Tanner, R.M., "A Recursive Approach to Low Complexity Codes," IEEE Transactions on Information Theory, 27(5):533-547, Sep. 1981.

* cited by examiner
`
`
`
`
`
Sheet 1 of 5: FIG. 1 (Prior Art)
`
`
`
`
`
Sheet 2 of 5
`
`
`
Sheet 3 of 5: FIG. 3 (Tanner graph: variable nodes grouped by the fraction of nodes of each degree, check nodes of degree a, information nodes 302, fractions f2, f3)
`
`
`
Sheet 4 of 5: FIG. 5A (nodes 304)
`
`
`
`
`
Sheet 5 of 5
`
`
`
`
`
`
`
`SERIAL CONCATENATION OF
`
`
`INTERLEAVED CONVOLUTIONAL CODES
`
`
`FORMING TURBO-LIKE CODES
`
`
`
`
`
`
`CROSS-REFERENCE TO RELATED
`
`APPLICATIONS
`
`
`
`
`
`
`
`
`
`
This application is a continuation of U.S. application Ser. No. 11/542,950, filed Oct. 3, 2006, now U.S. Pat. No. 7,421,032, which is a continuation of U.S. application Ser. No. 09/861,102, filed May 18, 2001, now U.S. Pat. No. 7,116,710, which claims the priority of U.S. Provisional Application Ser. No. 60/205,095, filed May 18, 2000, and is a continuation-in-part of U.S. application Ser. No. 09/922,852, filed Aug. 18, 2000, now U.S. Pat. No. 7,089,477. The disclosures of the prior applications are considered part of (and are incorporated by reference in) the disclosure of this application.
`GOVERNMENT LICENSE RIGHTS
`
`
`
`
`
`
`
`
`
`
`
`
`
The U.S. Government has a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of Grant No. CCR-9804793 awarded by the National Science Foundation.
`BACKGROUND
`
`
`
`
`
`
`
`
`
`
`
Properties of a channel affect the amount of data that can be handled by the channel. The so-called "Shannon limit" defines the theoretical limit of the amount of data that a channel can carry.

Different techniques have been used to increase the data rate that can be handled by a channel. "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes," by Berrou et al., ICC, pp. 1064-1070 (1993), described a new "turbo code" technique that has revolutionized the field of error correcting codes. Turbo codes have sufficient randomness to allow reliable communication over the channel at a high data rate near capacity. However, they still retain sufficient structure to allow practical encoding and decoding algorithms. Still, the technique for encoding and decoding turbo codes can be relatively complex.
A standard turbo coder 100 is shown in FIG. 1. A block of k information bits is input directly to a first coder 102. A k-bit interleaver 106 also receives the k bits and interleaves them prior to applying them to a second coder 104. The second coder produces an output that has more bits than its input; that is, it is a coder with a rate that is less than 1. The coders 102, 104 are typically recursive convolutional coders.
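For illustration only, the FIG. 1 arrangement described above can be sketched in a few lines of Python. The constituent coders are modeled here as memory-2 recursive systematic convolutional parity generators with hypothetical polynomials (feedback 1 + D + D^2, feedforward 1 + D^2), and the interleaver is a fixed pseudo-random permutation; none of these particular choices is specified by the text above.

```python
import random

def rsc_parity(bits, feedback=(1, 1), feedforward=(0, 1)):
    # Parity stream of a memory-2 recursive systematic convolutional coder.
    # Hypothetical polynomials: feedback 1 + D + D^2, feedforward 1 + D^2
    # (the tuples give the taps on the two delay elements).
    s1 = s2 = 0
    parity = []
    for b in bits:
        fb = b ^ (feedback[0] & s1) ^ (feedback[1] & s2)        # recursive feedback bit
        parity.append(fb ^ (feedforward[0] & s1) ^ (feedforward[1] & s2))
        s1, s2 = fb, s1                                         # shift the registers
    return parity

def turbo_encode(info_bits, seed=0):
    # Rate-1/3 sketch of the FIG. 1 arrangement: the k information bits are sent
    # directly, coder 102 encodes them as-is, and coder 104 encodes an
    # interleaved copy produced by the k-bit interleaver 106.
    k = len(info_bits)
    order = list(range(k))
    random.Random(seed).shuffle(order)                          # k-bit interleaver 106
    parity1 = rsc_parity(info_bits)                             # first coder 102
    parity2 = rsc_parity([info_bits[i] for i in order])         # second coder 104
    return info_bits, parity1, parity2

# Example: encode a block of k = 8 information bits.
systematic, p1, p2 = turbo_encode([1, 0, 1, 1, 0, 0, 1, 0])
```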
Three different items are sent over the channel 150: the original k bits, first encoded bits 110, and second encoded bits 112. At the decoding end, two decoders are used: a first constituent decoder 160 and a second constituent decoder 162. Each receives both the original k bits and one of the encoded portions 110, 112. Each decoder sends likelihood estimates of the decoded bits to the other decoder. The estimates are used to decode the uncoded information bits as corrupted by the noisy channel.
`SUMMARY
`
`
`
`
`
`
`
`
`
A coding system according to an embodiment is configured to receive a portion of a signal to be encoded, for example, a data block including a fixed number of bits. The coding system includes an outer coder, which repeats and scrambles bits in the data block. The data block is apportioned into two or more sub-blocks, and bits in different sub-blocks are repeated a different number of times according to a selected degree profile. The outer coder may include a repeater with a variable rate and an interleaver. Alternatively, the outer coder may be a low-density generator matrix (LDGM) coder.

The repeated and scrambled bits are input to an inner coder that has a rate substantially close to one. The inner coder may include one or more accumulators that perform recursive modulo two addition operations on the input bit stream.

The encoded data output from the inner coder may be transmitted on a channel and decoded in linear time at a destination using iterative decoding techniques. The decoding techniques may be based on a Tanner graph representation of the code.
`
`
`
`
`
`
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`
`
`
`
`
`
`
`
`
`
`
`
FIG. 1 is a schematic diagram of a prior "turbo code" system.
FIG. 2 is a schematic diagram of a coder according to an embodiment.
FIG. 3 is a Tanner graph for an irregular repeat and accumulate (IRA) coder.
FIG. 4 is a schematic diagram of an IRA coder according to an embodiment.
FIG. 5A illustrates a message from a variable node to a check node on the Tanner graph of FIG. 3.
FIG. 5B illustrates a message from a check node to a variable node on the Tanner graph of FIG. 3.
FIG. 6 is a schematic diagram of a coder according to an alternate embodiment.
FIG. 7 is a schematic diagram of a coder according to another alternate embodiment.
`
`
`
`
`
`
`DETAILED DESCRIPTION
`
`
`
`
`
`
`
`
`
`
FIG. 2 illustrates a coder 200 according to an embodiment. The coder 200 may include an outer coder 202, an interleaver 204, and an inner coder 206. The coder may be used to format blocks of data for transmission, introducing redundancy into the stream of data to protect the data from loss due to transmission errors. The encoded data may then be decoded at a destination in linear time at rates that may approach the channel capacity.
The outer coder 202 receives the uncoded data. The data may be partitioned into blocks of fixed size, say k bits. The outer coder may be an (n,k) binary linear block coder, where n>k. The coder accepts as input a block u of k data bits and produces an output block v of n data bits. The mathematical relationship between u and v is v = T0u, where T0 is an n x k matrix, and the rate of the coder is k/n.
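As a purely illustrative instance of the relationship v = T0u, the fragment below multiplies a k-bit block by a small generator matrix over GF(2). The particular (n, k) = (6, 3) matrix is hypothetical and is not taken from the patent.

```python
import numpy as np

# Hypothetical (n, k) = (6, 3) binary linear block coder: v = T0 u (mod 2).
T0 = np.array([[1, 0, 0],
               [0, 1, 0],
               [0, 0, 1],
               [1, 1, 0],
               [0, 1, 1],
               [1, 0, 1]], dtype=np.uint8)   # n x k generator matrix

u = np.array([1, 0, 1], dtype=np.uint8)      # block of k data bits
v = T0.dot(u) % 2                            # output block of n bits, rate k/n = 1/2
print(v)                                     # [1 0 1 1 1 0]
```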
`
`
`
`
`
`
`
`
`
The rate of the coder may be irregular, that is, the value of T0 is not constant, and may differ for sub-blocks of bits in the data block. In an embodiment, the outer coder 202 is a repeater that repeats the k bits in a block a number of times q to produce a block with n bits, where n = qk. Since the repeater has an irregular output, different bits in the block may be repeated a different number of times. For example, a fraction of the bits in the block may be repeated two times, a fraction of bits may be repeated three times, and the remainder of bits may be repeated four times. These fractions define a degree sequence, or degree profile, of the code.
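A minimal sketch of such an irregular repeater follows, assuming a hypothetical degree profile in which half of the bits are repeated twice, three tenths three times, and the rest four times (mirroring the example above; the patent does not prescribe these fractions).

```python
def irregular_repeat(block, profile):
    # Repeat each bit of `block` according to a degree profile.
    # `profile` maps a repetition count q to the fraction of bits repeated
    # q times; the fractions are assumed to sum to 1.
    k = len(block)
    out = []
    start = 0
    for q, fraction in sorted(profile.items()):
        count = round(fraction * k)              # number of bits repeated q times
        for bit in block[start:start + count]:
            out.extend([bit] * q)
        start += count
    # Any bits left over by rounding are given the largest repetition count.
    q_max = max(profile)
    for bit in block[start:]:
        out.extend([bit] * q_max)
    return out

# Hypothetical degree profile: f2 = 0.5, f3 = 0.3, f4 = 0.2.
u = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
v = irregular_repeat(u, {2: 0.5, 3: 0.3, 4: 0.2})
# len(v) == 2*5 + 3*3 + 4*2 == 27 bits, so the outer rate here is 10/27.
```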
`
`
`
`
`
`
`
`
The inner coder 206 may be a linear rate-1 coder, which means that the n-bit output block x can be written as x = TIw, where TI is a nonsingular n x n matrix.
`
`
`
`
`
`
`
`
`
`
`
`
`
`
The inner coder 210 can have a rate that is close to 1, e.g., within 50%, more preferably 10%, and perhaps even more preferably within 1% of 1.

In an embodiment, the inner coder 206 is an accumulator, which produces outputs that are the modulo two (mod-2) partial sums of its inputs. The accumulator may be a truncated rate-1 recursive convolutional coder with the transfer function 1/(1+D). Such an accumulator may be considered a block coder whose input block [x1, . . . , xn] and output block [y1, . . . , yn] are related by the formula

y1 = x1
y2 = x1 ⊕ x2
y3 = x1 ⊕ x2 ⊕ x3
. . .
yn = x1 ⊕ x2 ⊕ x3 ⊕ . . . ⊕ xn

where "⊕" denotes mod-2, or exclusive-OR (XOR), addition. An advantage of this system is that only mod-2 addition is necessary for the accumulator. The accumulator may be embodied using only XOR gates, which may simplify the design.
`
`
`
`
`
`
`
`
`
The bits output from the outer coder 202 are scrambled before they are input to the inner coder 206. This scrambling may be performed by the interleaver 204, which performs a pseudo-random permutation of an input block v, yielding an output block w having the same length as v.

The serial concatenation of the interleaved irregular repeat code and the accumulate code produces an irregular repeat and accumulate (IRA) code. An IRA code is a linear code, and as such, may be represented as a set of parity checks. The set of parity checks may be represented in a bipartite graph, called the Tanner graph, of the code. FIG. 3 shows a Tanner graph 300 of an IRA code with parameters (f1, . . . , fj; a), where fi ≥ 0, Σi fi = 1 and "a" is a positive integer. The Tanner graph includes two kinds of nodes: variable nodes (open circles) and check nodes (filled circles). There are k variable nodes 302 on the left, called information nodes. There are r variable nodes 306 on the right, called parity nodes. There are r = (k Σi i·fi)/a check nodes 304 connected between the information nodes and the parity nodes. Each information node 302 is connected to a number of check nodes 304. The fraction of information nodes connected to exactly i check nodes is fi. For example, in the Tanner graph 300, each of the f2 information nodes is connected to two check nodes, corresponding to a repeat of q = 2, and each of the f3 information nodes is connected to three check nodes, corresponding to q = 3.

Each check node 304 is connected to exactly "a" information nodes 302. In FIG. 3, a = 3. These connections can be made in many ways, as indicated by the arbitrary permutation of the ra edges joining information nodes 302 and check nodes 304 in permutation block 310. These connections correspond to the scrambling performed by the interleaver 204.
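For a concrete sense of the graph's size, the fragment below evaluates r = k(Σi i·fi)/a for a hypothetical degree profile; the numbers are illustrative only.

```python
def num_check_nodes(k, f, a):
    # r = k * (sum_i i * f_i) / a, for node-degree fractions f = {i: f_i}.
    total_edges = k * sum(i * fi for i, fi in f.items())   # the ra edges in all
    return round(total_edges / a)

# Hypothetical profile: f2 = 0.5, f3 = 0.5, with k = 120 information nodes and a = 3.
r = num_check_nodes(120, {2: 0.5, 3: 0.5}, 3)   # 120 * 2.5 = 300 edges, so r = 100
```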
`
`
`
`
`
`
`
`
`
`
`
In an alternate embodiment, the outer coder 202 may be a low-density generator matrix (LDGM) coder that performs an irregular repeat of the k bits in the block, as shown in FIG. 4. As the name implies, an LDGM code has a sparse (low-density) generator matrix. The IRA code produced by the coder 400 is a serial concatenation of the LDGM code and the accumulator code. The interleaver 204 in FIG. 2 may be excluded due to the randomness already present in the structure of the LDGM code.
`25
`
`
`
`30
`
`
`
`35
`
`
`
`40
`
`
`
`45
`
`
`
`50
`
`
`
`55
`
`
`
`60
`
`
`
`65
`
`
`
`.
`
`.
`
`.
`
`
`
`
`
`
`
`
`
`
`
`
`, vm), then we have the recursive formula for j:
`(V1,
`.
`
`
`
`
`
`
`
`.
`, r. This is in effect the encoding algorithm.
`1, 2, .
`
`
`
`
`
`
`
Two types of IRA codes are represented in FIG. 3, a nonsystematic version and a systematic version. The nonsystematic version is an (r, k) code, in which the codeword corresponding to the information bits (u1, . . . , uk) is (x1, . . . , xr). The systematic version is a (k+r, k) code, in which the codeword is (u1, . . . , uk; x1, . . . , xr).

The rate of the nonsystematic code is

Rnsys = a / (Σi i·fi)

The rate of the systematic code is

Rsys = a / (a + Σi i·fi)

For example, regular repeat and accumulate (RA) codes can be considered nonsystematic IRA codes with a = 1 and exactly one fi equal to 1, say fq = 1, and the rest zero, in which case Rnsys simplifies to R = 1/q.
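Both rate expressions are easy to evaluate; the profile below is hypothetical, and the second call checks the regular RA special case mentioned above.

```python
def ira_rates(f, a):
    # Nonsystematic and systematic IRA rates for node-degree fractions f = {i: f_i}.
    mean_repeat = sum(i * fi for i, fi in f.items())   # sum_i i * f_i
    return a / mean_repeat, a / (a + mean_repeat)

# Hypothetical profile f2 = 0.5, f3 = 0.5 with a = 2:
print(ira_rates({2: 0.5, 3: 0.5}, 2))   # (0.8, 0.444...)

# Regular RA special case: a = 1 and f_q = 1 (here q = 3) gives R_nsys = 1/q.
print(ira_rates({3: 1.0}, 1))           # (0.333..., 0.25)
```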
`
`
`
`
`
`
`
`
`
The IRA code may be represented using an alternate notation. Let λi be the fraction of edges between the information nodes 302 and the check nodes 304 that are adjacent to an information node of degree i, and let ρi be the fraction of such edges that are adjacent to a check node of degree i+2 (i.e., one that is adjacent to i information nodes). These edge fractions may be used to represent the IRA code rather than the corresponding node fractions. Define λ(x) = Σi λi x^(i-1) and ρ(x) = Σi ρi x^(i-1) to be the generating functions of these sequences. The pair (λ, ρ) is called a degree distribution.
`
`
`
`
`
`
`
`
`
`
`
`
`An erasure E output means that the receiver does not know
`
`
`
`
`
`
`
`how to demodulate the output. Similarly, when 1 is transmit-
`
`
`
`
`
`
`
`
`
`
`ted, the receiver can receive either 1 or E. Thus, for the BEC,
`
`
`ye{0, E, 1}, and
`
`
`
`
`
`"10(14):
`
`
`+00 if y=0
`
`
`if y=E
`0
`
`
`if y=1
`—oo
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`In the BSC, there are two possible inputs (0,1) and two
`
`
`
`
`
`
`
`
`possible outputs (0, 1). The BSC is characterized by a set of
`
`
`
`
`
`
`
`conditional probabilities relating all possible outputs to pos-
`
`
`
`
`
`
`
`
`sible inputs. Thus, for the BSC ye{0, 1},
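The node-fraction and edge-fraction descriptions carry the same information. A brief sketch of the conversion λi = i·fi / Σj j·fj and of the rate formula above follows; the profile is hypothetical, and since every check node is adjacent to exactly a information nodes, the check-side distribution is concentrated at a.

```python
def edge_fractions(f):
    # Convert node-degree fractions f = {i: f_i} to edge fractions lambda_i.
    total = sum(i * fi for i, fi in f.items())
    return {i: i * fi / total for i, fi in f.items()}

def rate_from_degree_distribution(lam, rho):
    # Rate = (1 + (sum_j rho_j/j) / (sum_j lambda_j/j))^(-1).
    num = sum(r / j for j, r in rho.items())
    den = sum(l / j for j, l in lam.items())
    return 1.0 / (1.0 + num / den)

# Hypothetical profile f2 = 0.5, f3 = 0.5 with a = 2; all information-check edges
# meet a check node adjacent to a = 2 information nodes, so rho_2 = 1 here.
lam = edge_fractions({2: 0.5, 3: 0.5})              # {2: 0.4, 3: 0.6}
rate = rate_from_degree_distribution(lam, {2: 1.0})
# rate == 0.444..., matching the systematic rate a/(a + sum_i i*f_i) = 2/4.5.
```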
`
`
"Belief propagation" on the Tanner graph realization may be used to decode IRA codes. Roughly speaking, the belief propagation decoding technique allows the messages passed on an edge to represent posterior densities on the bit associated with the variable node. A probability density on a bit is a pair of non-negative real numbers p(0), p(1) satisfying p(0) + p(1) = 1, where p(0) denotes the probability of the bit being 0 and p(1) the probability of it being 1. Such a pair can be represented by its log likelihood ratio, m = log(p(0)/p(1)). The outgoing message from a variable node u to a check node v represents information about u, and a message from a check node u to a variable node v represents information about u, as shown in FIGS. 5A and 5B, respectively.

The outgoing message from a node u to a node v depends on the incoming messages from all neighbors w of u except v. If u is a variable message node, this outgoing message is

m(u→v) = Σ(w≠v) m(w→u) + m0(u)

where m0(u) is the log-likelihood message associated with u. If u is a check node, the corresponding formula is

tanh( m(u→v)/2 ) = Π(w≠v) tanh( m(w→u)/2 )

Before decoding, the messages m(w→u) and m(u→v) are initialized to be zero, and m0(u) is initialized to be the log-likelihood ratio based on the channel received information. If the channel is memoryless, i.e., each channel output only relies on its input, and y is the output of the channel code bit u, then m0(u) = log(p(u=0|y)/p(u=1|y)). After this initialization, the decoding process may run in a fully parallel and local manner. In each iteration, every variable/check node receives messages from its neighbors, and sends back updated messages. Decoding is terminated after a fixed number of iterations or detecting that all the constraints are satisfied. Upon termination, the decoder outputs a decoded sequence based on the messages

m(u) = Σw m(w→u).
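A compact sketch of this message-passing schedule on a generic Tanner graph is given below. The graph (adjacency lists) and the channel log-likelihood ratios m0 are supplied by the caller; the fixed iteration count, the clamping of the tanh product, and the inclusion of m0 in the final belief are implementation choices, not details taken from the description above.

```python
import math

def decode_bp(var_neighbors, chk_neighbors, m0, iterations=50):
    # Sum-product decoding on a Tanner graph (sketch).
    # var_neighbors[u] lists the check nodes adjacent to variable node u,
    # chk_neighbors[c] lists the variable nodes adjacent to check node c,
    # and m0[u] is the channel log-likelihood ratio of variable node u.
    v2c = {(u, c): 0.0 for u, cs in var_neighbors.items() for c in cs}
    c2v = {(c, u): 0.0 for c, us in chk_neighbors.items() for u in us}

    for _ in range(iterations):
        # Variable-to-check: channel term plus all incoming messages except the target edge.
        for (u, c) in v2c:
            v2c[u, c] = m0[u] + sum(c2v[w, u] for w in var_neighbors[u] if w != c)
        # Check-to-variable: tanh rule over all other incoming messages.
        for (c, u) in c2v:
            prod = 1.0
            for w in chk_neighbors[c]:
                if w != u:
                    prod *= math.tanh(v2c[w, c] / 2.0)
            prod = min(max(prod, -0.999999), 0.999999)   # keep atanh finite
            c2v[c, u] = 2.0 * math.atanh(prod)

    # Final beliefs (channel term plus all incoming check messages, one common
    # convention) and hard decisions: bit = 0 when the belief is non-negative.
    m = {u: m0[u] + sum(c2v[c, u] for c in var_neighbors[u]) for u in var_neighbors}
    return {u: 0 if m[u] >= 0 else 1 for u in var_neighbors}
```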
`
`10
`
`
`
`15
`
`
`
`20
`
`25
`
`
`
`30
`
`
`
`35
`
`
`
`40
`
`
`
`
`5
`
`
`
`
`
`
In the BEC, there are two inputs and three outputs. When 0 is transmitted, the receiver can receive either 0 or an erasure E. An erasure E output means that the receiver does not know how to demodulate the output. Similarly, when 1 is transmitted, the receiver can receive either 1 or E. Thus, for the BEC, y ∈ {0, E, 1}, and

m0(u) = +∞ if y = 0
m0(u) = 0 if y = E
m0(u) = -∞ if y = 1

In the BSC, there are two possible inputs (0, 1) and two possible outputs (0, 1). The BSC is characterized by a set of conditional probabilities relating all possible outputs to possible inputs. Thus, for the BSC, y ∈ {0, 1},

m0(u) = log((1-p)/p) if y = 0
m0(u) = -log((1-p)/p) if y = 1

In the AWGN, the discrete-time input symbols X take their values in a finite alphabet, while channel output symbols Y can take any values along the real line. There is assumed to be no distortion or other effects other than the addition of white Gaussian noise. In an AWGN channel with Binary Phase Shift Keying (BPSK) signaling, which maps 0 to the symbol with amplitude √Es and 1 to the symbol with amplitude -√Es, and output y ∈ R,

m0(u) = 4y√Es/N0

where N0/2 is the noise power spectral density.
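The three initializations can be collected into small helpers; this is a sketch, with the crossover probability, symbol energy, and noise density passed in by the caller.

```python
import math

def initial_llr_bec(y):
    # BEC: +inf for a received 0, -inf for a received 1, 0 for an erasure 'E'.
    return {0: math.inf, 'E': 0.0, 1: -math.inf}[y]

def initial_llr_bsc(y, p):
    # BSC with crossover probability p.
    llr = math.log((1.0 - p) / p)
    return llr if y == 0 else -llr

def initial_llr_awgn(y, es, n0):
    # AWGN with BPSK (0 -> +sqrt(Es), 1 -> -sqrt(Es)): m0 = 4*y*sqrt(Es)/N0.
    return 4.0 * y * math.sqrt(es) / n0

# Example: a +0.8 sample from an AWGN channel with Es = 1, N0 = 0.5.
print(initial_llr_awgn(0.8, 1.0, 0.5))   # 6.4, i.e. strong evidence for bit 0
```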
`
`50
`
`
`
`55
`
`
`
`60
`
`
`
`65
`
`
`
`45
`
`
`
TABLE 1

                       a = 2        a = 3        a = 4
λi (degrees drawn      0.139025     0.078194     0.054485
from 2, 3, 5, 6,       0.2221555    0.128085     0.104315
10, 11, 12, 13,        0.638820     0.160813     0.126755
14, 16, 27, 28)                     0.036178     0.229816
                                    0.108828     0.016484
                                    0.487902     0.450302
                                                 0.017842
Rate                   0.333364     0.333223     0.333218
σGA                    1.1840       1.2415       1.2615
σ*                     1.1981       1.2607       1.2780
(Eb/N0)* (dB)          0.190        -0.250       -0.371
S.L. (dB)              -0.4953      -0.4958      -0.4958
`
`
`
`
`
`
`
`
`
`
`
`
`
Table 1 shows degree profiles yielding codes of rate approximately 1/3 for the AWGN channel and with a = 2, 3, 4. For each sequence, the Gaussian approximation noise threshold, the actual sum-product decoding threshold