Talk:Hamming code
This is the talk page for discussing improvements to the Hamming code article. This is not a forum for general discussion of the article's subject.
Archives: 1 · Auto-archiving period: 90 days
This level-5 vital article is rated C-class on Wikipedia's content assessment scale. It is of interest to WikiProject Telecommunications and WikiProject Computer science.
Error Correction with Hamming Codes
Forward Error Correction (FEC), the ability of a receiving station to correct a transmission error, can increase the throughput of a data link operating in a noisy environment. The transmitting station must append information to the data in the form of error correction bits, but the increase in frame length may be modest relative to the cost of retransmission (although sometimes correction takes too much time and retransmission is preferred). Hamming codes provide FEC using a "block parity" mechanism that can be implemented inexpensively. In general, their use allows the correction of single-bit errors and the detection of two-bit errors per unit of data, called a code word.
The fundamental principle embraced by Hamming codes is parity. Hamming codes, as mentioned above, can correct one error or detect two errors, but not both simultaneously. You may choose to use a Hamming code either as an error detection mechanism, to catch both single- and double-bit errors, or as an error correction mechanism for single-bit errors. This is accomplished by using more than one parity bit, each computed on a different combination of bits in the data.
The number of parity or error check bits required is given by the Hamming rule, and is a function of the number of bits of information transmitted. The Hamming rule is expressed by the following inequality:
d + p + 1 <= 2^p     (1)
where d is the number of data bits and p is the number of parity bits. The result of appending the computed parity bits to the data bits is called the Hamming code word. The size of the code word c is obviously d + p, and a Hamming code word is described by the ordered pair (c, d).
Codes with values of p <= 2 are hardly worthwhile because of the overhead involved. The case of p = 3 is used in the following discussion to develop a (7,4) code using even parity, but larger code words are typically used in applications. A code for which the equality in Equation 1 holds is called a perfect code, of which the (7,4) code is an example.
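As a small illustration of Equation 1 (this sketch is ours, not part of the note above, and the helper name min_parity_bits is made up for the example), the following Python searches for the smallest p that satisfies the Hamming rule for a given d:

def min_parity_bits(d):
    """Smallest p satisfying the Hamming rule d + p + 1 <= 2**p."""
    p = 1
    while d + p + 1 > 2 ** p:
        p += 1
    return p

# Equality (a perfect code) holds for (7,4): d = 4 gives p = 3 and 4 + 3 + 1 == 2**3.
for d in (4, 5, 8, 11):
    p = min_parity_bits(d)
    print(f"d={d:2d}  p={p}  code word (c,d) = ({d + p},{d})")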
A Hamming code word is generated by multiplying the data bits by a generator matrix G using modulo-2 arithmetic. The result of this multiplication is called the code word vector (c1, c2, c3, ..., cn), consisting of the original data bits and the calculated parity bits.
The generator matrix G used in constructing Hamming codes consists of I (the identity matrix) and a parity generation matrix A:
G = [ I : A ]
An example of a Hamming code generator matrix:
        | 1 0 0 0 | 1 1 1 |
G   =   | 0 1 0 0 | 0 1 1 |
        | 0 0 1 0 | 1 0 1 |
        | 0 0 0 1 | 1 1 0 |
The multiplication of a 4-bit data vector (d1, d2, d3, d4) by G results in a 7-bit code word vector of the form (d1, d2, d3, d4, p1, p2, p3). It is clear that the A partition of G is responsible for the generation of the actual parity bits. Each column in A represents one parity calculation computed on a subset of d. The Hamming rule requires p = 3 for a (7,4) code, so A must contain three columns to produce three parity bits.
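As a hedged sketch of this encoding step (the function name encode is ours; the matrix is copied from the example above), the following Python multiplies a 4-bit data vector by G using modulo-2 arithmetic:

G = [  # [I : A], the example generator matrix above
    [1, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 0],
]

def encode(d):
    """Code word c = d * G over GF(2), giving (d1, d2, d3, d4, p1, p2, p3)."""
    return [sum(d[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

print(encode([1, 0, 0, 1]))  # -> [1, 0, 0, 1, 0, 0, 1], the code word checked in the figure below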
If the columns of A are selected so that each column is unique, it follows that (p1, p2, p3) represents parity calculations on three distinct subsets of d. As shown in the figure below, validating the received code word r involves multiplying it by the parity check matrix H to form s, the syndrome or parity check vector.
H = [A^T | I]

                        |1|
                        |0|
| 1 0 1 1 | 1 0 0 |     |0|       |0|
| 1 1 0 1 | 0 1 0 |  *  |1|   =   |0|
| 1 1 1 0 | 0 0 1 |     |0|       |0|
                        |0|
                        |1|

H * r = s
If all elements of s are zero, the code word was received correctly. If s contains non-zero elements, the bit in error can be determined by analyzing which parity checks have failed, as long as the error involves only a single bit.
For instance, if r = [1011001], s computes to [101]. That syndrome ([101]) matches the third column of H, which corresponds to the third bit of r - the bit in error.
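A sketch of this single-error correction in Python (H is the parity check matrix from the figure above; the helper names syndrome and correct are ours, not from the note):

H = [  # [A^T | I], the parity check matrix from the figure above
    [1, 0, 1, 1, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 1, 1, 0, 0, 0, 1],
]

def syndrome(r):
    """s = H * r over GF(2)."""
    return [sum(H[i][j] * r[j] for j in range(7)) % 2 for i in range(3)]

def correct(r):
    """Flip the single bit whose column of H matches the syndrome, if any."""
    s = syndrome(r)
    if s == [0, 0, 0]:
        return r  # received correctly
    for j in range(7):
        if [H[i][j] for i in range(3)] == s:
            return r[:j] + [1 - r[j]] + r[j + 1:]
    return r  # syndrome matches no column: more than one bit in error

r = [1, 0, 1, 1, 0, 0, 1]  # the r = [1011001] example above
print(syndrome(r))         # -> [1, 0, 1], matching the third column of H
print(correct(r))          # -> [1, 0, 0, 1, 0, 0, 1]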
OPTIMAL CODING
From the practical standpoint of communications, a (7,4) code is not a good choice, because it involves non-standard character lengths. Designing a suitable code requires that the ratio of parity to data bits and the processing time involved in encoding and decoding the data stream be minimized; a code that efficiently handles 8-bit data items is desirable. The Hamming rule shows that four parity bits can provide error correction for five to eleven data bits, the latter being a perfect code. Analysis shows that the overhead introduced to the data stream is modest for the range of data bits available (11 bits: 36% overhead, 8 bits: 50% overhead, 5 bits: 80% overhead).
A (12,8) code therefore offers a reasonable compromise in the bit stream. The code enables data link packets to be constructed easily, since one parity byte serves two data bytes.
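The overhead figures quoted above can be reproduced directly from the Hamming rule (four parity bits cover 5 to 11 data bits, since d + 4 + 1 <= 2^4 = 16); this is only a quick sanity check in Python, not part of the original note:

# Four parity bits cover 5 to 11 data bits (d + 4 + 1 <= 16).
for d in (5, 8, 11):
    print(f"d={d:2d}  parity overhead = {4 / d:.0%}")
# prints 80%, 50%, and 36%, matching the figures above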
Debjit dey
Wow
I just spotted the new Example section. It's really beautiful. Good job to the contributor on that. Deco 21:10, 26 Jun 2005 (UTC)
Inconsistent naming of parity bits.
[edit]In the "General algorithm" section, parity bits are named P1, P2, P4, P8, etc. Then in the "[7,4] Hamming code" section (which links to a main article), parity bits are named P1, P2, P3, P4, etc. This is an inconsistency, yes? I believe the latter naming is more common, so the change should be in the General algorithm section. JohnHagerman (talk) 20:19, 22 July 2019 (UTC)
"Hamming codes with additional parity (SECDED)" - Inconsistency with "Parity Bit" article?
In the third paragraph of this section, it says:
- "If the decoder does not attempt to correct errors, it can reliably detect triple bit errors."
According to the Parity Bit article:
- "If an odd number of bits (including the parity bit) are transmitted incorrectly, the parity bit will be incorrect, thus indicating that a parity error occurred in the transmission."
Therefore, I assume:
- If the decoder does not attempt to correct errors, it can reliably detect an odd number of bit errors.
As I understand it, if the algorithm decides to only check all parity bits, including the extra parity bit, it can detect an odd number of errors. Am I correct in assuming this? — Preceding unsigned comment added by Limoster (talk • contribs) 13:45, 21 January 2021 (UTC)
- This section is about Hamming codes with an additional parity bit, not about parity bits on their own. The minimum distance of Hamming codes with an additional parity bit is 4. This means that one, two, and three errors can always be reliably detected! In addition, any odd number of errors can be detected by the parity bit. The section has some stylistic issues, but seems mathematically sound to me. ylloh (talk) 12:43, 22 January 2021 (UTC)
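For what it's worth, the minimum-distance-4 claim can be checked by brute force. The sketch below is ours, not from the article; it builds the extended [8,4] code from the (7,4) generator matrix in the thread above plus one overall parity bit, and reports the smallest non-zero code word weight:

from itertools import product

G = [  # the (7,4) generator [I : A] from the thread above
    [1, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 0],
]

def extended_codeword(d):
    """(7,4) code word plus one overall parity bit -> extended [8,4] code."""
    c = [sum(d[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]
    return c + [sum(c) % 2]

# For a linear code the minimum distance equals the minimum non-zero weight.
weights = [sum(extended_codeword(d)) for d in product([0, 1], repeat=4) if any(d)]
print(min(weights))  # -> 4, so one-, two-, and three-bit errors are always detectable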