The history of coding begins with Claude Shannon's seminal 1948 work on the mathematical theory of communication. He demonstrated that errors induced by a noisy channel can be reduced to any desired level as long as the information rate is less than the capacity of the channel. This theoretical maximum information transfer rate is called the Shannon limit.
In 1962, Gallager proposed low-density parity-check (LDPC) codes in his doctoral dissertation. LDPC codes are defined by a sparse parity-check matrix. They offered near-capacity performance, but their decoding was too complex for the hardware of the time. Moreover, concatenated Reed-Solomon and convolutional codes were considered perfectly adequate for error control coding. As a result, Gallager's remarkable thesis was largely forgotten by coding researchers for almost twenty years. In 1981, Tanner generalized LDPC codes and introduced the bipartite graph, now known as the Tanner graph, used to represent them. Even so, the work remained largely ignored by coding theorists.
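The relationship between the sparse parity-check matrix and its bipartite-graph representation can be shown with a small sketch. The matrix `H` below is a toy (2, 3)-regular example chosen for illustration, not Gallager's original construction: rows correspond to check nodes, columns to variable (bit) nodes, and each 1 entry is an edge of the Tanner graph.

```python
# Toy sparse parity-check matrix (an assumed example, not from the text).
# Rows = check nodes, columns = variable (bit) nodes.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
]

def tanner_edges(H):
    """Return the bipartite edge list (check index, variable index):
    an edge exists wherever H[i][j] == 1."""
    return [(i, j) for i, row in enumerate(H) for j, bit in enumerate(row) if bit]

edges = tanner_edges(H)

# Degree profile: every check touches 3 variables and every variable
# touches 2 checks, so this H is (2, 3)-regular in Gallager's sense.
check_degrees = [sum(row) for row in H]
var_degrees = [sum(col) for col in zip(*H)]
```

Sparsity is the key property: the number of edges grows only linearly with the code length, which is what keeps graph-based iterative decoding tractable.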
LDPC codes were rediscovered by researchers in the mid-1990s, who began to investigate codes on graphs and iterative decoding. MacKay and others demonstrated the advantages of combining linear block codes generated by sparse matrices with iterative decoding based on belief propagation, and by that time the decoding complexity had become practical. Since then, a large number of papers have been published and LDPC codes have become widely popular.