In Wasserman's book [1] on counterpropagation networks (the Kohonen layer), it is stated that there is an initialization method in which all (!) weights are set to 1/sqrt(n), where n is the number of inputs, i.e. the number of components of the input vectors. How is the winning neuron found during training? After all, if all the weights of all neurons are equal, every neuron will produce exactly the same output for any input!
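To make the question concrete, here is a minimal sketch (my own illustration, not code from the book) of the situation being asked about: every weight is initialized to 1/sqrt(n), so each weight vector has unit length, and winner selection by the largest dot product then produces identical outputs for all neurons. The dimensions and the input vector are arbitrary.

```python
import numpy as np

# Hypothetical setup: 3 Kohonen neurons, 4-dimensional input.
n = 4
num_neurons = 3

# The initialization in question: every weight set to 1/sqrt(n),
# which makes each weight vector have unit Euclidean length.
W = np.full((num_neurons, n), 1.0 / np.sqrt(n))

# An arbitrary input vector, normalized as is usual for a Kohonen layer.
x = np.array([0.3, 0.1, 0.9, 0.2])
x = x / np.linalg.norm(x)

# Winner selection by largest dot product: because all weight vectors
# are identical, every neuron produces the same activation, so there
# is no unique winner -- exactly the tie described above.
outputs = W @ x
print(outputs)
print(np.allclose(outputs, outputs[0]))  # True: all activations equal
```

Running this shows three equal activations, so `argmax` would just pick the first neuron by index, which is the puzzle the question raises.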