Each parity bit is taken from the corresponding D flip-flop outputs and run through an XOR gate, which performs the mod-2 addition that produces the parity bit. This circuit is a physical realization of the generator polynomials: the message is fed through it, producing the encoded message.
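The shift-register-and-XOR structure can be sketched in software. This is a minimal sketch of a rate-1/2, K = 3 encoder; the generator taps 111 and 101 (octal 7, 5) are an assumption for illustration, since the actual parity bit equations are given earlier in the text.

```python
def encode(message):
    """Shift each message bit through a 2-bit register; each parity
    bit is the XOR (mod-2 sum) of the taps its generator selects.
    Generators 111 and 101 are assumed for illustration."""
    s1 = s2 = 0                  # D flip-flop contents: previous two bits
    out = []
    for u in message:
        out.append(u ^ s1 ^ s2)  # p0: taps 1,1,1
        out.append(u ^ s2)       # p1: taps 1,0,1
        s1, s2 = u, s1           # the shift register advances
    return out

print(encode([1, 0, 1]))  # -> [1, 1, 1, 0, 0, 0]
```

Each iteration models one clock tick: the XOR expressions are the mod-2 sums of the tapped flip-flop outputs, and the tuple assignment models the bits shifting down the register.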
Another way to describe the encoding is with a state diagram. This diagram tracks which parity bits are output depending on the state the encoder is in and the state it transitions to, which in turn depends on the input bit received. To build our state diagram we first need to understand the parity bit equations above. We will construct a four-state diagram that makes convolutional encoding easy to follow. To illustrate the example above we need a starting state, the point at which we have not yet received any of the message we want to encode. We call this state 00, corresponding to the bits received so far; the state holds two bits because our constraint length is k = 3.
Starting in state 00, suppose we receive a 0: since the system is causal we stay in this state, and the output parity bits are 00. If we next receive a 1, we move to state 10 (the states are labeled by the most recently received message bits) and output parity bits 11. From state 10, if we receive a 0 we move to state 01 and output parity bits 11. This continues for the length of the message, after which the state diagram is reset to await a new message. Notice that this encoding scheme needs two bits of memory to remember the present and previous received bits, while the output also depends on the current input bit. As previously mentioned, this method of encoding can also be implemented in circuitry using shift registers and D flip-flops. Shift registers will not be covered in depth due to the complexity of the circuitry.
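The full four-state transition table can be derived mechanically by evaluating the parity equations in every state. The sketch below does this, again assuming generators 111 and 101 for illustration; substituting the parity equations from the text reproduces its exact outputs.

```python
def step(state, u):
    """state is 'ab': a is the most recent bit, b the one before.
    Returns (next state, output parity bits). Generators 111 and 101
    are assumed here for illustration."""
    s1, s2 = int(state[0]), int(state[1])
    p0 = u ^ s1 ^ s2
    p1 = u ^ s2
    return f"{u}{s1}", f"{p0}{p1}"

# Enumerate every (state, input) pair to build the transition table.
table = {(s, u): step(s, u)
         for s in ("00", "01", "10", "11") for u in (0, 1)}
for (s, u), (ns, p) in sorted(table.items()):
    print(f"state {s}, input {u} -> state {ns}, output {p}")
```

The two-bit state string is exactly the two bits of memory described above, and the table makes explicit that both the next state and the output depend only on the current state and the input bit.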
The receiver does not know the path the encoder took, so it cannot directly replicate which state the encoder is in. That task is left to the decoder, discussed in the next section, which covers maximum likelihood decoding and the trellis diagram.
Convolutional decoding is most often done by maximum likelihood decoding, typically implemented over a trellis diagram as the Viterbi algorithm. The problem is that the decoder has no idea which sequence of states the encoder stepped through to produce the message, nor whether the encoded message was altered in transmission. Since it cannot know, the decoder must make the best guess as to which message was sent. A decoder that picks the most likely transmitted message is a maximum likelihood decoder. To decide which codeword was most likely, we use the Hamming distance, choosing the valid codeword at minimum distance from the received sequence. We can only assume that the encoded message was not changed so severely in transit that another message becomes the more likely one. The problem with maximum likelihood decoding is that we must...
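Minimum-Hamming-distance decoding can be sketched as an exhaustive search: re-encode every candidate message and keep the one whose codeword lies closest to what was received. The rate-1/2 encoder with assumed generators 111 and 101 is carried over for illustration; the exponential search cost is exactly the problem with brute-force maximum likelihood decoding that the Viterbi algorithm avoids.

```python
from itertools import product

def encode(message):
    """Assumed rate-1/2, K=3 encoder (generators 111 and 101)."""
    s1 = s2 = 0
    out = []
    for u in message:
        out += [u ^ s1 ^ s2, u ^ s2]
        s1, s2 = u, s1
    return out

def hamming(a, b):
    """Number of positions where the two bit sequences differ."""
    return sum(x != y for x, y in zip(a, b))

def ml_decode(received, n_bits):
    # Brute-force maximum likelihood: try all 2**n_bits messages.
    # This exponential cost is what makes the trellis-based Viterbi
    # algorithm necessary in practice.
    return min((list(m) for m in product((0, 1), repeat=n_bits)),
               key=lambda m: hamming(encode(m), received))

# [1,1,1,0,0,0] is encode([1,0,1]); flip one bit to simulate a
# transmission error and recover the original message.
rx = [1, 0, 1, 0, 0, 0]
print(ml_decode(rx, 3))  # -> [1, 0, 1]
```

Because only one bit was flipped, the closest valid codeword is still the one for the transmitted message, so the single-bit error is corrected.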