**Backpropagation** is the learning algorithm used in neural networks and is a generalization of the least mean squares algorithm used in linear perceptrons. Backpropagation requires a known, expected output value for each input value, and it is therefore a supervised learning method.

**How It Works**

Remember that backpropagation is a learning algorithm. How does learning occur? Learning is done by changing the weights of the perceptron after each signal has been processed, based on the calculated amount of error in the output compared to the expected result.

*Error = perceptron output – expected result*

**Calculating the Loss Function**
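The error for a single training example is just this difference; a minimal sketch, with illustrative (assumed) values:

```python
# Illustrative values, not from any particular network
output = 0.8    # what the perceptron produced
expected = 1.0  # the known, expected result

# Error = perceptron output - expected result
error = output - expected
print(error)  # a negative error: the output was too low
```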

To understand the backpropagation algorithm, you need to understand the concept of the loss function, also called the cost function. This is loosely the same as the error formula above, but this time we need to formalize the definition a little. It is still easy and clear.

*E = ½ ∑_{i=1}^{n} (y_i − t_i)²*

where *y_i* is the network's output for the *i*-th example and *t_i* is the expected result.
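As a sketch, this loss can be computed directly over a batch of outputs and targets. The half-sum-of-squared-errors form and the example values are assumptions for illustration:

```python
def loss(outputs, targets):
    # Half the sum of squared errors; the 1/2 factor is a common
    # convention that cancels neatly when differentiating.
    return 0.5 * sum((y - t) ** 2 for y, t in zip(outputs, targets))

# Two outputs compared against their expected results (assumed values)
print(loss([0.8, 0.3], [1.0, 0.0]))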

**The Backpropagation Algorithm**

An input vector **x** is entered into the network. This input moves from the input layer through the hidden layers to the output layer and produces an output y.

Backpropagation uses these error values to calculate the gradient of the loss function as they move back through the network. This gradient is then used to update the weights of the nodes. The process is repeated to get another output based on the updated weights, and continues until the error function produces a minimum value.

**Summary of the Backpropagation Algorithm**

- Input vector **x** is given to the network
- The input propagates forward to produce an output y in the output layer
- The error function is calculated
- The error is propagated backwards through the network
- Weights are adjusted accordingly
- The process repeats until the error is minimized
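The steps above can be sketched end to end. The network shape (2 inputs, 2 hidden units, 1 output), sigmoid activations, learning rate, omission of bias terms, and the AND training data are all illustrative assumptions, not part of the algorithm itself:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Small random initial weights for a 2-2-1 network (no biases, for brevity)
w_hidden = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(2)]
lr = 0.5  # learning rate (assumed)

# Illustrative training data: the logical AND function
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

def forward(x):
    # Input propagates forward through the hidden layer to the output
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_hidden]
    y = sigmoid(sum(w * hi for w, hi in zip(w_out, h)))
    return h, y

def epoch_loss():
    # Half sum of squared errors over the whole dataset
    return 0.5 * sum((forward(x)[1] - t) ** 2 for x, t in data)

before = epoch_loss()
for _ in range(2000):
    for x, t in data:
        h, y = forward(x)
        # Error at the output, times the sigmoid derivative y*(1-y)
        delta_out = (y - t) * y * (1 - y)
        # Error propagated backwards to each hidden unit
        delta_h = [delta_out * w * hi * (1 - hi)
                   for w, hi in zip(w_out, h)]
        # Weights adjusted against the gradient
        for j in range(2):
            w_out[j] -= lr * delta_out * h[j]
            for i in range(2):
                w_hidden[j][i] -= lr * delta_h[j] * x[i]
after = epoch_loss()
print(before, after)  # the loss shrinks as the loop repeats
```

Each pass through the loop performs one forward pass, one backward pass, and one weight update per example; repeating the loop drives the loss toward a minimum, as the summary describes.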