Hello. I tried to implement this algorithm: http://robocraft.ru/blog/algorithm/560.html, but when I trained the network I found that the outputs are not even close to the expected values. Where did I make a mistake? Or is the algorithm at the link wrong? Help me understand.

double OpenNNL::_changeWeightsByBP(double * trainingInputs, double * trainingOutputs, double speed, double sample_weight)
{
    double * localGradients = new double[_neuronsCount];
    double * outputs = new double[_neuronsCount];
    double * derivatives = new double[_neuronsCount];

    calculateNeuronsOutputsAndDerivatives(trainingInputs, outputs, derivatives);

    // local gradients of the output layer: desired output minus actual output
    for(int j=0;j<_neuronsPerLayerCount[_layersCount-1];j++)
    {
        localGradients[indexByLayerAndNeuron(_layersCount-1, j)] =
            trainingOutputs[j] - outputs[indexByLayerAndNeuron(_layersCount-1, j)];
    }

    // back-propagate the local gradients through the hidden layers
    if(_layersCount > 1)
    {
        for(int i=_layersCount-2;i>=0;i--)
        {
            for(int j=0;j<_neuronsPerLayerCount[i];j++)
            {
                localGradients[indexByLayerAndNeuron(i, j)] = 0;

                for(int k=0;k<_neuronsPerLayerCount[i+1];k++)
                {
                    localGradients[indexByLayerAndNeuron(i, j)] +=
                        _neuronsInputsWeights[indexByLayerNeuronAndInput(i+1, k, j)]
                        * localGradients[indexByLayerAndNeuron(i+1, k)];
                }
            }
        }
    }

    // update the weights of the first layer using the training inputs
    for(int j=0;j<_neuronsPerLayerCount[0];j++)
    {
        for(int k=0;k<_inputsCount;k++)
        {
            _neuronsInputsWeights[indexByLayerNeuronAndInput(0, j, k)] +=
                speed * localGradients[indexByLayerAndNeuron(0, j)]
                      * derivatives[indexByLayerAndNeuron(0, j)]
                      * trainingInputs[k];
        }
    }

    // update the weights of the remaining layers
    for(int i=1;i<_layersCount;i++)
    {
        for(int j=0;j<_neuronsPerLayerCount[i];j++)
        {
            for(int k=0;k<_neuronsPerLayerCount[i-1];k++)
            {
                _neuronsInputsWeights[indexByLayerNeuronAndInput(i, j, k)] +=
                    speed * localGradients[indexByLayerAndNeuron(i, j)]
                          * derivatives[indexByLayerAndNeuron(i, j)]
                          * outputs[indexByLayerAndNeuron(i, j)];
            }
        }
    }

    delete[] localGradients;
    delete[] outputs;
    delete[] derivatives;
}

Also, the algorithm at the link does not say how to adjust the neuron biases. Can someone tell me how to do this?
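My guess is that a bias is handled like one more weight whose input is always 1, so its update would look roughly like the sketch below. The _neuronsBiases array and its per-neuron indexing are only my assumption here, not something from the article:

    // Sketch only: _neuronsBiases is assumed to be a per-neuron array of size _neuronsCount,
    // indexed the same way as localGradients. A bias acts like a weight on a constant input of 1,
    // so its update has no input/output factor.
    for (int i = 0; i < _layersCount; i++)
    {
        for (int j = 0; j < _neuronsPerLayerCount[i]; j++)
        {
            int n = indexByLayerAndNeuron(i, j);
            _neuronsBiases[n] += speed * localGradients[n] * derivatives[n];
        }
    }

Is that the right way to do it?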

If you need the complete code, it is here: https://github.com/NicholasShatokhin/OpenNNL

  • Please indicate in the tags the programming language that was used - teanYCH

3 answers

Oh, I found the mistake. In the last loop, instead of outputs[indexByLayerAndNeuron(i, j)] I should have written outputs[indexByLayerAndNeuron(i-1, k)].

It all came down to carelessness.
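For reference, the last loop with this fix applied looks roughly like this:

    // corrected last loop: the weight update must use the output of the neuron
    // in the previous layer (i-1, k), i.e. the signal that actually entered this synapse
    for (int i = 1; i < _layersCount; i++)
    {
        for (int j = 0; j < _neuronsPerLayerCount[i]; j++)
        {
            for (int k = 0; k < _neuronsPerLayerCount[i-1]; k++)
            {
                _neuronsInputsWeights[indexByLayerNeuronAndInput(i, j, k)] +=
                    speed * localGradients[indexByLayerAndNeuron(i, j)]
                          * derivatives[indexByLayerAndNeuron(i, j)]
                          * outputs[indexByLayerAndNeuron(i-1, k)];
            }
        }
    }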

    The synaptic weight table for localGradients should be of size _neuronsCount * _neuronsCount.

      I studied this algorithm from the resource you provided. You probably "did not notice" that its second part is devoted entirely to detecting errors in the output signal Y using the back propagation of error method.

       The network's output signal y is compared with the desired output signal z, which is stored in the training data. The difference between these two signals is called the error d of the network's output layer.

      Then, after that, the neuron (or a bundle of neurons of the same level) where the error occurred is detected (by the checksum computed by the function f(x), or by the sums of checksums if we are dealing with a bundle). Detection happens by propagating the error signal d (calculated at the training step) back to all neurons whose output signals served as inputs to the last neuron. That is, in fact, why the algorithm has its name.
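      In terms of the code from the question, this is what the first two loops are doing; here is a sketch with the same names, where trainingOutputs plays the role of the desired signal z:

          // output layer: error d = z - y (desired output minus actual output)
          for (int j = 0; j < _neuronsPerLayerCount[_layersCount-1]; j++)
          {
              localGradients[indexByLayerAndNeuron(_layersCount-1, j)] =
                  trainingOutputs[j] - outputs[indexByLayerAndNeuron(_layersCount-1, j)];
          }

          // hidden layers: each neuron collects the errors of the neurons it feeds,
          // weighted by the corresponding synaptic weights -- the back propagation itself
          for (int i = _layersCount - 2; i >= 0; i--)
          {
              for (int j = 0; j < _neuronsPerLayerCount[i]; j++)
              {
                  double d = 0;
                  for (int k = 0; k < _neuronsPerLayerCount[i+1]; k++)
                  {
                      d += _neuronsInputsWeights[indexByLayerNeuronAndInput(i+1, k, j)]
                         * localGradients[indexByLayerAndNeuron(i+1, k)];
                  }
                  localGradients[indexByLayerAndNeuron(i, j)] = d;
              }
          }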

      • And what exactly did I not notice? - Robotex