Low Density Parity Check (LDPC) Codes

  • LDPC codes are a class of linear block codes characterized by a sparse parity check matrix ‘H’, i.e. a matrix with a low density of 1’s.
  • LDPC codes were originally invented by Robert Gallager in the early 1960s but were largely ignored until they were ‘rediscovered’ in the mid-1990s by MacKay.
  • Sparseness of ‘H’ can yield a large minimum distance ‘d_min’ and reduces decoding complexity.
  • LDPC codes can perform within 0.0045 dB of the Shannon limit.

Representations for LDPC Codes
Basically there are two different ways to represent LDPC codes. Like all linear block codes, they can be described via matrices. The second possibility is a graphical representation.

1. Matrix Representation
Let’s look at an example of an LDPC matrix first. The matrix defined in equation (1) is a parity check matrix with dimension m x n (here 4 x 8) for an (8,4) code. We can now define two numbers describing this matrix: ‘Wt’ for the number of 1’s in each row and ‘Wc’ for the number of 1’s in each column. For a matrix to be called low-density, the two conditions Wt << n and Wc << m must be satisfied. In order to achieve this, the parity check matrix should usually be very large, so the example matrix below can’t really be called low-density.
H =    0 1 0 1 1 0 0 1
       1 1 1 0 0 1 0 0        (1)
       0 0 1 0 0 1 1 1
       1 0 0 1 1 0 1 0
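As a quick illustration (a Python/numpy sketch; the article itself contains no code), the two weights can be computed directly from the matrix in (1):

import numpy as np

# Parity check matrix from equation (1)
H = np.array([
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
])

Wt = H.sum(axis=1)  # number of 1's in each row    -> [4 4 4 4]
Wc = H.sum(axis=0)  # number of 1's in each column -> [2 2 2 2 2 2 2 2]
print("Wt =", Wt, "Wc =", Wc)

Every row weight is 4 and every column weight is 2, so this small code is regular; it is only the Wt << n and Wc << m conditions that it fails.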
2. Graphical Representation
Tanner introduced an effective graphical representation for LDPC codes. Not only do these graphs provide a complete representation of the code, they also help to describe the decoding algorithm. Tanner graphs are bipartite graphs. That means that the nodes of the graph are separated into two distinct sets and edges only connect nodes of different types. The two types of nodes in a Tanner graph are called variable nodes (v-nodes) and check nodes (c-nodes). The figure below is an example of such a Tanner graph and represents the same code as the matrix in (1). The creation of such a graph is rather straightforward: it consists of m check nodes (the number of parity bits) and n variable nodes (the number of bits in a codeword), and check node fi is connected to variable node cj whenever element hij of H is a 1.
[Figure: Tanner graph for the code defined by the matrix in (1)]
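A small sketch of how the graph’s edges follow from H (Python/numpy, assuming the matrix from equation (1)): check node fi is joined to variable node cj exactly where H has a 1 in row i, column j.

import numpy as np

H = np.array([
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
])

# One adjacency list per check node: which variable nodes it touches
edges = {f"f{i}": [f"c{j}" for j in np.nonzero(row)[0]] for i, row in enumerate(H)}
for check, variables in edges.items():
    print(check, "--", " ".join(variables))
# f0 -- c1 c3 c4 c7
# f1 -- c0 c1 c2 c5
# f2 -- c2 c5 c6 c7
# f3 -- c0 c3 c4 c6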

[Figure: the parity check equations for a simple LDPC code with n = 12]

There are actually only 7 independent equations, so there are 7 parity digits.

[Figure: the parity check matrix for a simple LDPC code]

Note above that each symbol is contained in 3 equations (Wc = 3) and each equation involves 4 code symbols (Wt = 4), so the matrix has 12 · 3 / 4 = 9 rows, of which only the 7 mentioned above are independent.
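Below is a minimal sketch of Gallager’s construction of such a regular matrix (the random permutations here are only illustrative; they will not reproduce the exact matrix from the figure):

import numpy as np

n, Wc, Wt = 12, 3, 4
rows_per_block = n // Wt            # 3 rows per block
rng = np.random.default_rng(0)

# Base block: row i covers symbols i*Wt .. (i+1)*Wt - 1
base = np.zeros((rows_per_block, n), dtype=int)
for i in range(rows_per_block):
    base[i, i * Wt:(i + 1) * Wt] = 1

# Stack Wc blocks, each a random column permutation of the base block
blocks = [base[:, rng.permutation(n)] for _ in range(Wc)]
H = np.vstack(blocks)               # 9 x 12, row weight 4, column weight 3
print(H.sum(axis=1), H.sum(axis=0)) # -> all 4's and all 3's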

[Figure: Tanner graph for the simple LDPC code]

Performance and Complexity
The ability of LDPC codes to perform near the Shannon limit of a channel exists only for large block lengths. For example, there have been simulations that perform within 0.04 dB of the Shannon limit at a bit error rate of 10^(-6) with a block length of 10^(7). An interesting fact is that those high performance codes are irregular. The large block length also results in large parity-check and generator matrices. The complexity of multiplying a codeword with a matrix depends on the number of 1’s in the matrix. If we put the sparse matrix H in the form [P^(T) I] via Gaussian elimination, the generator matrix G can be calculated as G = [I P]. The sub-matrix P is generally not sparse, so the encoding complexity will be quite high.
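Here is a minimal sketch of that encoding relation (Python/numpy; the small matrix P below is an arbitrary example, not derived from this article): with H = [P^(T) I] and G = [I P], every product u·G has an all-zero syndrome.

import numpy as np

# Arbitrary example P (k x m); in practice P comes from Gaussian
# elimination on a sparse H and is generally dense.
P = np.array([
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 0],
])
k, m = P.shape
H = np.hstack([P.T, np.eye(m, dtype=int)])   # H = [P^T I], m x (k+m)
G = np.hstack([np.eye(k, dtype=int), P])     # G = [I P],   k x (k+m)

assert not (G @ H.T % 2).any()               # G * H^T = 0 over GF(2)

u = np.array([1, 0, 1, 1])                   # message bits
c = u @ G % 2                                # systematic codeword [u | u*P]
print(c, H @ c % 2)                          # syndrome is all-zero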

Since the complexity grows as O(n^2), even sparse matrices don’t result in good performance if the block length gets very high. So iterative decoding (and encoding) algorithms are used. Those algorithms perform local calculations and pass the local results around via messages. This step is typically repeated several times. The term “local calculations” already indicates that a divide-and-conquer strategy, which separates a complex problem into manageable sub-problems, is realized. A sparse parity check matrix now helps these algorithms in several ways. First, it keeps the local calculations simple and also reduces the complexity of combining the sub-problems by reducing the number of messages needed to exchange all the information. Furthermore, it has been observed that iterative decoding algorithms of sparse codes perform very close to the optimal maximum likelihood decoder.

Decoding LDPC Codes
Decoding is conceptually simple. The most powerful algorithm is belief propagation: each check node in the Tanner graph (or in the parity check matrix, which is just a different representation of the same entity) sends to the codeword nodes the bit value it believes is correct, together with a calculated probability. Once the probability that a given bit is zero or one becomes better than the requested threshold (according to Shannon’s theorem, a code that allows the error probability to be reduced arbitrarily always exists for a given rate below the channel capacity), the codeword calculation (decoding) is complete. There are two mechanisms – hard and soft decision algorithms; the former is simpler, but the latter generally performs better. Let me show an example of the hard decision algorithm.

Let the parity check matrix be this (not very sparse) example – the same matrix as in equation (1):
0 1 0 1 1 0 0 1
1 1 1 0 0 1 0 0
0 0 1 0 0 1 1 1
1 0 0 1 1 0 1 0
And the source codeword is:
1 0 0 1 0 1 0 1
The check word (syndrome), calculated by multiplying the matrix by the codeword, is:
0 0 0 0
Suppose that during transmission over the channel the codeword was corrupted into the following (second bit changed):
1 1 0 1 0 1 0 1
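This can be verified in a couple of lines (a numpy sketch, not from the article): the original codeword has an all-zero check word, while the corrupted one violates checks F0 and F1:

import numpy as np

H = np.array([
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
])
c = np.array([1, 0, 0, 1, 0, 1, 0, 1])   # transmitted codeword
r = np.array([1, 1, 0, 1, 0, 1, 0, 1])   # received word, second bit flipped

print(H @ c % 2)   # [0 0 0 0] -> valid codeword
print(H @ r % 2)   # [1 1 0 0] -> checks F0 and F1 are violated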
Here is the Tanner graph (this graphical representation was originally proposed by Tanner) of the given matrix:
[Figure: Tanner graph of the parity check matrix above]

Here the decoding algorithm starts, where each codeword node C first sends its bit to each check node F it is connected to.
F0 node will receive 1 1 0 1 bits from C1, C3, C4 and C7 respectively.
F1 node will receive 1 1 0 1 bits from C0, C1, C2 and C5.
F2 node will receive 0 1 0 1 bits from C2, C5, C6 and C7.
F3 node will receive 1 1 0 0 bits from C0, C3, C4 and C6.
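These four lines can be computed mechanically (numpy sketch): each check node gathers the received bits at the positions where its row of the matrix has a 1:

import numpy as np

H = np.array([
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
])
r = np.array([1, 1, 0, 1, 0, 1, 0, 1])   # received word

for i, row in enumerate(H):
    neighbours = np.nonzero(row)[0]
    print(f"F{i} receives", r[neighbours], "from", [f"C{j}" for j in neighbours])
# F0 receives [1 1 0 1] from ['C1', 'C3', 'C4', 'C7']
# ...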

The next step is to calculate the answer for each code node. The received check word is 0 0 0 0 (calculated above), so a set of simple equations follows. Each check node F takes three out of its four received bits, XORs them together (sums them modulo 2, since this example works in the Galois field of order 2 – GF(2)), and sends to the remaining codeword node the bit it expects to be correct in order to satisfy its check bit. Here is an example for the first check node:


X0 ^ 1 ^ 0 ^ 1 = 0, so X0 = 0 (the bit F0 expects at C1)
1 ^ X1 ^ 0 ^ 1 = 0, so X1 = 0 (the bit expected at C3)
1 ^ 1 ^ X2 ^ 1 = 0, so X2 = 1 (the bit expected at C4)
1 ^ 1 ^ 0 ^ X3 = 0, so X3 = 0 (the bit expected at C7)

Then each check node sends its Xi back to the corresponding code node (for F0 above: X0 to C1, X1 to C3, X2 to C4, X3 to C7). After all check nodes are processed, the codeword nodes hold the following sets of bits:
C0: 0 from F1, 1 from F3, 1 from originally received codeword.
C1: 0 from F0, 0 from F1, 1 from originally received codeword.
C2: 1 from F1, 0 from F2, 0 from originally received codeword.
C3: 0 from F0, 1 from F3, 1 from originally received codeword.
C4: 1 from F0, 0 from F3, 0 from originally received codeword.
C5: 0 from F1, 1 from F2, 1 from originally received codeword.
C6: 0 from F2, 0 from F3, 0 from originally received codeword.
C7: 0 from F0, 1 from F2, 1 from originally received codeword.

Then, using majority voting for each bit (i.e. whichever value has more ‘votes’ out of the three in the table above), we get a new codeword:
1 0 0 1 0 1 0 1

The same steps are then repeated until the codeword stops changing. In our case we get the corrected codeword after the first run.
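The whole procedure above condenses into a short program. The following is a minimal Python/numpy sketch of this hard decision (majority-vote) decoder – an illustration of the steps just described, not production code:

import numpy as np

def hard_decision_decode(H, r, max_iters=20):
    """Majority-vote (bit-flipping) decoding as in the worked example."""
    m, n = H.shape
    word = r.copy()
    for _ in range(max_iters):
        # For every bit, collect its own value plus one suggestion
        # from each check node it participates in.
        votes = [[word[j]] for j in range(n)]
        for i in range(m):
            neighbours = np.nonzero(H[i])[0]
            for j in neighbours:
                others = neighbours[neighbours != j]
                # Bit that would satisfy check i given the other bits
                votes[j].append(word[others].sum() % 2)
        # Majority vote per bit (ties broken toward 0)
        new_word = np.array([int(sum(v) * 2 > len(v)) for v in votes])
        if np.array_equal(new_word, word):
            break
        word = new_word
    return word

H = np.array([
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
])
r = np.array([1, 1, 0, 1, 0, 1, 0, 1])   # second bit corrupted
print(hard_decision_decode(H, r))         # -> [1 0 0 1 0 1 0 1]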

The soft decision algorithm uses essentially the same logic, but it operates on the probabilities of each bit being 1 or 0. The probabilities are recalculated on each run, and once the probability is higher than the requested value (or the error probability is less than the requested value), the loop stops.
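For completeness, here is a compact sketch of one common soft decision variant: the min-sum approximation of belief propagation, which works with log-likelihood ratios instead of raw probabilities. The article does not specify a particular soft algorithm, and the channel values below are a hypothetical noisy observation of the codeword above.

import numpy as np

def min_sum_decode(H, llr, max_iters=50):
    """Min-sum belief propagation. llr[j] > 0 means bit j is more
    likely 0; llr[j] < 0 means it is more likely 1."""
    m, n = H.shape
    msg = np.zeros((m, n))             # check -> variable messages
    for _ in range(max_iters):
        total = llr + msg.sum(axis=0)  # current belief per bit
        hard = (total < 0).astype(int)
        if not (H @ hard % 2).any():   # all checks satisfied -> done
            return hard
        for i in range(m):
            idx = np.nonzero(H[i])[0]
            v2c = total[idx] - msg[i, idx]   # variable -> check messages
            for k, j in enumerate(idx):
                others = np.delete(v2c, k)
                # Sign rule and magnitude rule of min-sum
                msg[i, j] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return (llr + msg.sum(axis=0) < 0).astype(int)

H = np.array([
    [0, 1, 0, 1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1, 1],
    [1, 0, 0, 1, 1, 0, 1, 0],
])
# Soft values for 1 1 0 1 0 1 0 1, where bit 1 was received only weakly
llr = np.array([-4.0, -0.5, 4.0, -4.0, 4.0, -4.0, 4.0, -4.0])
print(min_sum_decode(H, llr))   # -> [1 0 0 1 0 1 0 1]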

Real-world codes use much bigger codewords (up to several thousand bits), but the logic is always the same.

Note: a background in linear code theory helps in understanding the content of this article.

References:
http://www.ioremap.net/projects/ldpc/
http://www.csee.wvu.edu/~mvalenti/documents/TurboLDPCTutorial.pdf
http://www.bernh.net/media/download/papers/ldpc.pdf
http://sites.ieee.org/scv-pace/files/2013/06/FMS13-LDPC-Module1.pdf









