I have a Kohonen neural network with ten neurons and a set of inputs. Spectral portraits of words are fed to the input (the words from zero to ten were recorded with a voice recorder and then converted into spectral portraits). Before being fed in, the portraits are normalized to the range [-1; 1].

The weight matrix is filled randomly, subject to the condition:

(1) [image: initial weight-initialization condition]

where M is the length of the input vector.

I feed in some vector, compute R for each neuron, and pick the neuron with the smallest R (x is the vector of input values):

(2) R_i = Σ_j (x_j − w_ij)²

Next, the winner's weights need to be adjusted (η is the learning rate):

(3) w_ij ← w_ij + η·(x_j − w_ij)
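Steps (2) and (3) together amount to a single winner-take-all update. Here is a minimal NumPy sketch of one such step (the function name is my own; it assumes W is a 2-D array with one weight row per neuron):

```python
import numpy as np

def winner_take_all_step(W, x, speed):
    """One Kohonen step: find the winner by squared Euclidean
    distance (2), then pull only its weights toward the input x (3)."""
    R = np.sum((W - x) ** 2, axis=1)   # distance of x to each neuron's weights
    i = np.argmin(R)                   # winning neuron
    W[i] += speed * (x - W[i])         # correction (3), winner only
    return i, W
```

Note that only the winning row of W is changed; the other neurons' weights stay where they were.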

I took the information from here


I feed in the spectral portraits of the words "zero", "one", "two", etc., one after another (each is a vector with elements in [-1; 1]). For some reason all the words land on the same neuron, even though the input vectors differ substantially.

What could be the problem? Is there an error in the network algorithm?


UPD: The values in my vector R (the vector of distances) are practically the same for all neurons: 237.3019 237.0699 237.0621 237.4326 237.0400 237.3023 237.3323 237.5506 237.1476 237.3318

What does this indicate? Is the input data bad?
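Nearly identical distances suggest that every weight vector sits roughly the same (large) distance from the input, which is exactly what happens with random weights in a high-dimensional space: the distances concentrate. A quick illustration (the dimension M and the ranges are assumed here, just to show the effect):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 10000                          # assumed length of the input vector
W = rng.uniform(-1, 1, (10, M))    # 10 neurons, random weights in [-1, 1]
x = rng.uniform(-1, 1, M)          # one normalized input portrait

R = np.sum((W - x) ** 2, axis=1)
print(R.min(), R.max())            # the ten distances come out nearly equal
```

The larger M is, the smaller the relative spread between the neurons' distances, so near-identical R values by themselves only say that no weight vector is meaningfully closer to the data than the others.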


UPD 2: Implementation Code

    function [index, W] = recognize(W, X, SPEED)
    % W     - weight matrix
    % X     - vector of input values
    % SPEED - learning-rate coefficient
    %
    % index - index of the winning neuron

    AMOUNT_NEURON = size(W, 1);

    % Compute the distance R to every neuron's weight vector
    R = zeros(AMOUNT_NEURON, 1);   % was "RR", which left R uninitialized
    for i = 1:1:AMOUNT_NEURON
        for j = 1:1:size(W, 2)
            R(i) = R(i) + (X(j) - W(i, j))^2;
        end
        %R(i) = sqrt(R(i));        % the sqrt does not change the winner
    end

    % Find the winning neuron
    [val, i] = min(R);

    % Adjust the winner's weights
    for j = 1:1:size(W, 2)
        W(i, j) = W(i, j) + SPEED*(X(j) - W(i, j));
    end

    index = i;

I'll go and experiment with the initial weight initialization, then.

  • If you think the algorithm is wrong, try taking it from another source. If it fails with another source too, the error is evidently in the implementation. And to verify that conclusively, you can compare against a library implementation. - m9_psy
  • The algorithm is correct; the error is probably in the implementation. Although theoretically this could happen even with a correct algorithm (though it is extremely unlikely) if the neuron weights are initialized badly. Try choosing them not randomly but, for example, equal to individual spectral portraits. - Taras
  • @Taras, I'll try choosing the weights non-randomly, then. But so far, after one or two training passes on any input vector, the weight correction makes one neuron's weights differ so much from the others that any further vector from the training set triggers only that neuron. - Andrei Kurulev
  • @Taras, indeed, if the weight matrix is initialized not randomly but with the spectral-portrait vectors themselves, everything works as it should. But then the question arises: where is the self-learning in this network, if we set the reference patterns at the start, so to speak? - Andrei Kurulev

1 answer

This can happen if the initial weight vectors lie far away from the training vectors, as in the picture:

[image: failed initialization]

Then the weight vector closest to the data gets attracted to the training points, and from then on all updates involve only that one neuron.
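This failure mode is easy to reproduce in a toy setting: put nine weight vectors far from the data and one nearby, and the nearby one wins every single time (the numbers below are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Nine weight vectors far away from the data, one "lucky" neuron nearby
W = np.vstack([rng.uniform(50, 60, (9, 2)),   # far from the training data
               [[0.5, 0.5]]])                 # close to the data cluster

winners = set()
for _ in range(100):
    x = rng.uniform(-1, 1, 2)                 # training vectors in [-1, 1]^2
    R = np.sum((W - x) ** 2, axis=1)
    i = np.argmin(R)
    winners.add(int(i))
    W[i] += 0.1 * (x - W[i])                  # only the winner moves

print(winners)                                # only the nearby neuron ever wins
```

The far-away weight vectors never win, so they are never updated and stay far away forever; the one nearby neuron absorbs the entire training set.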

Indeed, if the weight matrix is initialized not randomly but with the spectral-portrait vectors, everything works as it should. But then the question arises: where is the self-learning in this network, if we set the reference patterns at the start, so to speak?

It is assumed that you have many spectral portraits of each type. In the course of self-learning, the weights will move to the center of the cluster corresponding to one of the types.

Another option is to avoid the situation shown in the picture: for initialization, take the average of all spectral portraits, then add small random perturbations to that point.
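A sketch of that initialization, assuming the portraits are stored as rows of a matrix (the function name and the noise scale are my own choices):

```python
import numpy as np

def init_weights(portraits, n_neurons, noise=0.05, rng=None):
    """Initialize every weight vector at the mean of all spectral
    portraits, plus a small random perturbation per neuron."""
    if rng is None:
        rng = np.random.default_rng()
    center = portraits.mean(axis=0)                       # average portrait
    return center + rng.uniform(-noise, noise,
                                (n_neurons, portraits.shape[1]))
```

Since every weight vector starts near the center of the data, no neuron begins so far away that it can never win, and the perturbations break the symmetry so different neurons can drift to different clusters.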