As a practical matter, the hardware that forms the accumulator is reused during the encoding process. That is, once a first set of parity bits is generated and the parity bits stored, the same accumulator hardware is used to generate a next set of parity bits.
==Decoding==
As with other codes, the [[maximum likelihood decoding]] of an LDPC code on the [[binary symmetric channel]] is an [[NP-complete]] problem. Performing optimal decoding for an NP-complete code of any useful size is not practical.

However, sub-optimal techniques based on iterative [[belief propagation]] decoding give excellent results and can be practically implemented. The sub-optimal decoding techniques view each parity check that makes up the LDPC as an independent single-parity-check (SPC) code. Each SPC code is decoded separately using [[Soft-in soft-out decoder|soft-in-soft-out]] (SISO) techniques such as [[Soft output Viterbi algorithm|SOVA]], [[BCJR algorithm|BCJR]], [[Maximum a posteriori estimation|MAP]], and other derivatives thereof. The soft-decision information from each SISO decoding is cross-checked and updated with other redundant SPC decodings of the same information bit. Each SPC code is then decoded again using the updated soft-decision information. This process is iterated until a valid codeword is achieved or decoding is exhausted. This type of decoding is often referred to as sum-product decoding.

The decoding of the SPC codes is often referred to as "check-node" processing, and the cross-checking of the variables is often referred to as "variable-node" processing.

In a practical LDPC decoder implementation, sets of SPC codes are decoded in parallel to increase throughput.
In contrast, belief propagation on the [[binary erasure channel]] is particularly simple, as it consists of iterative constraint satisfaction.

For example, consider that the valid codeword 101011, from the example above, is transmitted across a binary erasure channel and received with the first and fourth bits erased to yield ?01?11. Since the transmitted message must have satisfied the code constraints, the message can be represented by writing the received message on top of the factor graph.

In this example, the first bit cannot yet be recovered, because all of the constraints connected to it have more than one unknown bit. In order to proceed with decoding the message, constraints connecting to only one of the erased bits must be identified. In this example, only the second constraint suffices. Examining the second constraint, the fourth bit must have been zero, since only a zero in that position would satisfy the constraint.

This procedure is then iterated. The new value for the fourth bit can now be used in conjunction with the first constraint to recover the first bit, as seen below. This means that the first bit must be a one to satisfy the leftmost constraint.
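The iterative constraint satisfaction just described (often called "peeling") can be sketched as follows, using the parity-check matrix from this article's example; `None` marks an erased bit, and the function name is illustrative:

```python
# Erasure decoding on the binary erasure channel: repeatedly find a
# parity check with exactly one erased bit and solve for that bit.

H = [
    [1, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 1],
    [1, 0, 0, 1, 1, 0],
]

def peel_decode(received):
    """Repeatedly solve any check that involves exactly one erased bit."""
    bits = list(received)
    progress = True
    while progress and None in bits:
        progress = False
        for row in H:
            unknown = [i for i, h in enumerate(row) if h and bits[i] is None]
            if len(unknown) == 1:
                # The erased bit must make this check's parity even.
                parity = sum(bits[i] for i, h in enumerate(row)
                             if h and bits[i] is not None) % 2
                bits[unknown[0]] = parity
                progress = True
    return bits

# Codeword 101011 received as ?01?11 (first and fourth bits erased):
print(peel_decode([None, 0, 1, None, 1, 1]))  # -> [1, 0, 1, 0, 1, 1]
```

On the first pass, only the second check (row 2) has a single unknown, which fixes the fourth bit to 0; the next pass then recovers the first bit from the first check, matching the walkthrough above.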
[[Image:ldpc code fragment factor graph w erasures decode step 2.svg|none]]
Thus, the message can be decoded iteratively. For other channel models, the messages passed between the variable nodes and check nodes are [[real number]]s, which express probabilities and likelihoods of belief.

This result can be validated by multiplying the corrected codeword '''r''' by the parity-check matrix '''H''':
:<math>\mathbf{z} = \mathbf{Hr}
=
\begin{pmatrix}
1 & 1 & 1 & 1 & 0 & 0 \\
0 & 0 & 1 & 1 & 0 & 1 \\
1 & 0 & 0 & 1 & 1 & 0 \\
\end{pmatrix}
\begin{pmatrix}
1 \\ 0 \\ 1 \\ 0 \\ 1 \\ 1 \\
\end{pmatrix}
=
\begin{pmatrix}
0 \\ 0 \\ 0 \\
\end{pmatrix}.
</math>
Because the outcome '''z''' (the [[Decoding methods#Syndrome decoding|syndrome]]) of this operation is the 3 × 1 zero vector, the resulting codeword '''r''' is successfully validated.

After the decoding is completed, the original message bits '101' can be extracted by looking at the first three bits of the codeword.
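The syndrome check above can be computed directly, with all arithmetic taken modulo 2 (i.e., in GF(2)):

```python
# Syndrome z = Hr (mod 2) for the corrected codeword, using the
# parity-check matrix H and codeword r from this article's example.

H = [
    [1, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 1],
    [1, 0, 0, 1, 1, 0],
]
r = [1, 0, 1, 0, 1, 1]

z = [sum(h * b for h, b in zip(row, r)) % 2 for row in H]
print(z)  # -> [0, 0, 0]: every parity check is satisfied
```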
While illustrative, this erasure example does not show the use of soft-decision decoding or soft-decision message passing, which is used in virtually all commercial LDPC decoders.
=== Updating node information ===
In recent years, there has also been a great deal of work studying the effects of alternative schedules for variable-node and constraint-node updates. The original technique used for decoding LDPC codes was known as ''flooding''. This type of update required that, before updating a variable node, all constraint nodes needed to be updated and vice versa. In later work by Vila Casado ''et al.'', alternative update techniques were studied, in which variable nodes are updated with the newest available check-node information.

The intuition behind these algorithms is that variable nodes whose values vary the most are the ones that need to be updated first. Highly reliable nodes, whose [[log-likelihood ratio]] (LLR) magnitude is large and does not change significantly from one update to the next, do not require updates with the same frequency as other nodes, whose sign and magnitude fluctuate more widely.

These scheduling algorithms show greater speed of convergence and lower error floors than those that use flooding. These lower error floors are achieved by the ability of the Informed Dynamic Scheduling (IDS) algorithm to overcome trapping sets of near codewords.

When nonflooding scheduling algorithms are used, an alternative definition of iteration is used. For an (''n'', ''k'') LDPC code of rate ''k''/''n'', a full ''iteration'' occurs when ''n'' variable and ''n'' − ''k'' constraint nodes have been updated, no matter the order in which they were updated.
=== Lookup table decoding ===
It is possible to decode LDPC codes on a relatively low-power microprocessor by the use of [[lookup table]]s.

While codes such as LDPC are generally implemented on high-power processors with long block lengths, there are also applications that use lower-power processors and short block lengths (1024).

Therefore, it is possible to precalculate the output bit based upon predetermined input bits. A table is generated that contains ''n'' entries (for a block length of 1024 bits, this would be 1024 bits long) and contains all possible entries for different input states (both errored and nonerrored).

As each bit is input, it is added to a FIFO register, and the value of the FIFO register is then used to look up the relevant precalculated output in the table.

By this method, very high iteration counts can be used with little processor overhead, the only cost being the memory for the lookup table, such that LDPC decoding is possible even on a 4.0 MHz PIC chip.
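A toy illustration of the precalculation idea, assumed here for the small (6, 3) code used earlier in this article rather than a 1024-bit block: every possible received word is mapped in advance to its nearest valid codeword, so run-time decoding is a single table lookup (real implementations index much larger tables with a FIFO of received bits; all names below are illustrative):

```python
# Table-driven decoding sketch: precompute nearest-codeword mappings for
# every possible 6-bit input state, then decode by one table lookup.
from itertools import product

H = [
    [1, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 1],
    [1, 0, 0, 1, 1, 0],
]

def syndrome(word):
    """Parity of each check; all-zero means a valid codeword."""
    return tuple(sum(h * b for h, b in zip(row, word)) % 2 for row in H)

codewords = [w for w in product((0, 1), repeat=6) if syndrome(w) == (0, 0, 0)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Precalculated table: every possible input state -> nearest valid codeword.
TABLE = {w: min(codewords, key=lambda c: hamming(c, w))
         for w in product((0, 1), repeat=6)}

def decode(word):
    return TABLE[tuple(word)]  # one lookup at run time

# Codeword 101011 with its fourth bit flipped in transit:
print(decode((1, 0, 1, 1, 1, 1)))  # -> (1, 0, 1, 0, 1, 1)
```

For this tiny code the table has only 64 entries; the memory cost grows exponentially with block length, which is why practical table-driven decoders operate on short blocks or sub-blocks.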
==Code construction==