I am learning convolutional networks with Python 3.5 + TensorFlow + TFLearn. The code runs, but out of 55,000 examples it looks as if only every 64th one is used for training. In addition, every new epoch appears to reuse the same data - every 64th example.

How can I reduce the iteration step so that the network uses every single example? If that is not possible, how can each new epoch use different data rather than the same examples?

Sample training output:

Training Step: 1 | time: 2.416s | Adam | epoch: 001 | loss: 0.00000 -- iter: 064/55000
Training Step: 2 | total loss: 0.24470 | time: 4.621s | Adam | epoch: 001 | loss: 0.24470 -- iter: 128/55000
Training Step: 3 | total loss: 0.10852 | time: 6.876s | Adam | epoch: 001 | loss: 0.10852 -- iter: 192/55000
Training Step: 4 | total loss: 0.20421 | time: 9.129s | Adam | epoch: 001 | loss: 0.20421 -- iter: 256/55000
  • Your source code is very large and, besides, it does not follow PEP 8. Try to isolate the problem: instead of real data, use a completely trivial minimal example (some made-up generated matrices instead of images) so that everyone can reproduce it. The problem may well solve itself at that point. - m9_psy
  • @m9_psy To reproduce the problem, you can run this code; it will work for anyone. When you run it, the same problem appears. - segrnegr

1 Answer

In fact, the network does use every image; it just prints a training message once per batch of 64 examples. This is controlled by the batch_size parameter of model.fit, which defaults to 64. To get a message for every single example, change the call as follows:

  model.fit({'input': X}, {'target': Y}, n_epoch=20,
            validation_set=({'input': testX}, {'target': testY}),
            snapshot_step=100, batch_size=1, show_metric=True,
            run_id='convnet_mnist')
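To see why every example is already being used, here is a minimal sketch (illustrative only, not from the original post) of the arithmetic behind the progress counter: each training step consumes batch_size examples, so the "iter" column advances by 64 per step and covers all 55,000 examples within one epoch.

  # Illustrative sketch: how batch_size relates to the "iter: N/55000"
  # counter that TFLearn prints during training.
  import math

  num_examples = 55000            # size of the MNIST training split used above
  for batch_size in (64, 1):      # default value vs. the value set in the answer
      steps_per_epoch = math.ceil(num_examples / batch_size)
      # With batch_size=64 there are 860 steps per epoch; with batch_size=1
      # there are 55,000 steps. In both cases every example is seen each epoch.
      print("batch_size={}: {} steps per epoch".format(batch_size, steps_per_epoch))

Note that batch_size=1 will indeed print a message per example, but each step then updates the weights on a single image, so one epoch takes 55,000 steps and training becomes much slower. The batch size only changes how often progress messages appear and how many examples each update averages over, not which data is used.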