While training a perceptron to recognize letters, I ran into the problem that it only remembers the last letter from the set of training samples.
Learning process:
The training cycle is run several dozen times over the entire pattern array. For each pattern (a rough code sketch follows after this list):
The input is the vector of pixel values for the letter; the output is compared with a target vector in which all values are 0 except for a 1 at the position corresponding to the letter, i.e. for A it is {1, 0, 0, ...}, for B {0, 1, 0, ...}, and so on.
The error is propagated back through the network (backpropagation).
Weights are updated
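Roughly, the loop looks like this (a minimal sketch of how I understand my own procedure; the sigmoid activation, the 5x7 bitmap size, and the learning rate are placeholders, not my exact code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: 35-pixel letter bitmaps (5x7), 30 hidden neurons,
# 26 output neurons (one per letter A..Z).
n_in, n_hidden, n_out = 35, 30, 26
W1 = rng.uniform(0.1, 0.3, size=(n_hidden, n_in))   # initialization as described (0.1..0.3)
W2 = rng.uniform(0.1, 0.3, size=(n_out, n_hidden))
lr = 0.5                                             # placeholder learning rate

def train_epoch(patterns):
    """One pass over the whole pattern array.

    patterns is a list of (pixel_vector, letter_index) pairs."""
    global W1, W2
    for x, letter_idx in patterns:
        # Target vector: all zeros except a 1 at the letter's position.
        t = np.zeros(n_out)
        t[letter_idx] = 1.0

        # Forward pass.
        h = sigmoid(W1 @ x)
        y = sigmoid(W2 @ h)

        # Backpropagate the error (squared-error loss, sigmoid derivative).
        delta_out = (y - t) * y * (1 - y)
        delta_hid = (W2.T @ delta_out) * h * (1 - h)

        # Update the weights.
        W2 -= lr * np.outer(delta_out, h)
        W1 -= lr * np.outer(delta_hid, x)
```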
Training on any single letter works fine: after a few repetitions the value of the desired output neuron becomes almost 1 and the rest almost 0. But if I iterate sequentially through all the letters from A to Z and then test on the letter B, for example, only the neuron corresponding to the last trained pattern, i.e. Z, becomes active.
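A test call that shows this behaviour, continuing the sketch above (the `train_epoch` helper, the random stand-in bitmaps, and the pass count are assumptions, not my real data):

```python
# Stand-in pattern set: 26 random binary "bitmaps" instead of real letter images.
patterns = [(rng.integers(0, 2, size=n_in).astype(float), i) for i in range(n_out)]

for _ in range(50):              # several dozen passes over the whole array
    train_epoch(patterns)

x_b, _ = patterns[1]             # test on the letter B
y = sigmoid(W2 @ sigmoid(W1 @ x_b))
print(np.argmax(y))              # I expect 1 (B), but only the last trained
                                 # pattern's neuron (Z, index 25) ends up active
```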
What could be the error?
Update
To simplify things, I tested on digits: one hidden layer with 30 neurons (I also tried 300; the result is the same, only the outputs are closer to 0 and 1) and an output layer with 10 neurons. Iterations: from 10 to 100. Could the problem be the weight initialization (from 0.1 to 0.3)? If any letter is fed into the untrained network, the value of every hidden neuron is almost 1 (or exactly 1 when there are many more neurons), i.e. A and Z look the same to the perceptron.
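To illustrate the saturation, a toy check (the 5x7 bitmaps, the sigmoid activation, and the layer sizes here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Two different "letters": binary pixel vectors of length 35 (5x7 bitmaps).
a = rng.integers(0, 2, size=35).astype(float)
z = rng.integers(0, 2, size=35).astype(float)

# Untrained weights drawn from 0.1..0.3 -- all positive, 30 hidden neurons.
W1 = rng.uniform(0.1, 0.3, size=(30, 35))

# With roughly 15-20 active pixels times weights around 0.2, every weighted sum
# is about 3-4, so every hidden neuron outputs nearly 1 for every input.
print(sigmoid(W1 @ a))   # roughly [0.95, 0.97, ...]
print(sigmoid(W1 @ z))   # an almost identical vector
```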