Channel Coding Theorem
  • Date: 2024-11-03


The noise present in a channel creates unwanted errors between the input and output sequences of a digital communication system. For reliable communication, the error probability should be very low, typically $\leq 10^{-6}$.

Channel coding in a communication system introduces redundancy in a controlled manner, so as to improve the reliability of the system. Source coding, by contrast, reduces redundancy to improve the efficiency of the system.

Channel coding consists of two actions:

    Mapping incoming data sequence into a channel input sequence.

    Inverse mapping the channel output sequence into an output data sequence.

The final target is to minimize the overall effect of the channel noise.

The mapping is done by the transmitter, with the help of an encoder, whereas the inverse mapping is done by the decoder in the receiver.
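As a concrete illustration of this mapping and inverse mapping, the following sketch uses a rate-1/3 repetition code. This is only an illustrative choice of code, not the one the theorem prescribes: the encoder adds controlled redundancy, and the decoder uses that redundancy to undo a single channel error.

```python
def encode(bits):
    """Mapping (transmitter): repeat each data bit three times."""
    return [b for b in bits for _ in range(3)]

def decode(received):
    """Inverse mapping (receiver): majority vote over each group of three."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

data = [1, 0, 1, 1]
sent = encode(data)   # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[1] = 0           # the channel flips one bit
print(decode(sent))   # majority voting recovers [1, 0, 1, 1]
```

The redundancy lowers the error probability at the cost of transmitting three channel symbols per data bit, which is exactly the efficiency-versus-reliability trade-off the theorem quantifies.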

Channel Coding

Let us consider a discrete memoryless source (δ) with entropy $H(\delta)$.

The source emits one symbol every $T_s$ seconds.

The channel capacity is denoted by $C$.

The channel can be used once every $T_c$ seconds.

Hence, the maximum capability of the channel is $\frac{C}{T_c}$ bits per second.

The average information rate of the source is $\frac{H(\delta)}{T_s}$ bits per second.

If $\frac{H(\delta)}{T_s} \leq \frac{C}{T_c}$, the transmission is good and the messages can be reproduced with an arbitrarily small probability of error.

In this, $\frac{C}{T_c}$ is the critical rate of channel capacity.

If $\frac{H(\delta)}{T_s} = \frac{C}{T_c}$, then the system is said to be signaling at the critical rate.

Conversely, if $\frac{H(\delta)}{T_s} > \frac{C}{T_c}$, then reliable transmission is not possible.
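The three cases above amount to a single comparison between the source's information rate and the critical rate. A minimal sketch, with the numeric values chosen purely for illustration:

```python
from math import isclose

def transmission_possible(H, Ts, C, Tc):
    """Compare the source information rate H/Ts with the critical rate C/Tc."""
    source_rate = H / Ts      # bits per second emitted by the source
    critical_rate = C / Tc    # maximum rate the channel supports
    if isclose(source_rate, critical_rate):
        return "signaling at the critical rate"
    if source_rate < critical_rate:
        return "reliable transmission possible"
    return "reliable transmission not possible"

# Hypothetical numbers: H = 2 bits/symbol, one symbol per ms,
# C = 1 bit per channel use, one use per 0.4 ms.
print(transmission_possible(H=2.0, Ts=1e-3, C=1.0, Tc=0.4e-3))
# source rate 2000 b/s < critical rate 2500 b/s → reliable transmission possible
```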

Hence, the maximum rate at which reliable, error-free transmission of messages can take place over a discrete memoryless channel is equal to the critical rate of the channel capacity. This is called the Channel Coding Theorem.
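To make the capacity $C$ concrete, consider the standard example of a binary symmetric channel that flips each bit with crossover probability $p$: its capacity per channel use is $C = 1 - H_b(p)$, where $H_b$ is the binary entropy function. A short sketch:

```python
from math import log2

def binary_entropy(p):
    """Binary entropy H_b(p) in bits; 0 by convention at p = 0 or 1."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - binary_entropy(p)

print(round(bsc_capacity(0.1), 4))   # ≈ 0.531 bits per channel use
```

Note that the capacity is 1 for a noiseless channel ($p = 0$) and drops to 0 when $p = 0.5$, where the output carries no information about the input.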
