Hello everyone. I am trying to train my first neural network.

When I try to train it, this error appears:

    tensorflow.python.framework.errors_impl.ResourceExhaustedError: OOM when allocating tensor ... on /job:localhost/replica:0/task:0/device:GPU:0 by allocator

After some reading, I realized that this happens because my video card has too little memory (GTX 1050, 2 GB).

Does that mean I can’t use the video card here at all?

Or is there some way to feed the dataset to the video card in portions?

Code:

    import keras
    from keras.datasets import mnist
    from keras.models import Sequential
    from keras.layers import Dense, Dropout, Flatten
    from keras.layers import Conv2D, MaxPooling2D
    from keras import backend as K
    import numpy as np

    batch_size = 1
    num_classes = 3
    epochs = 2

    # input image dimensions
    img_rows, img_cols = 135, 240

    dataset = Dataset()  # the author's own data loader (not shown)
    x_train, y_train = dataset.LoadDataset()
    x_train = x_train[0]
    y_train = y_train[0]  # must be one-hot encoded for categorical_crossentropy
    x_train = np.array(x_train).reshape(10000, 135, 240, 1)
    input_shape = (img_rows, img_cols, 1)

    x_train = x_train.astype('float32')
    x_train = x_train / 255

    model = Sequential()
    model.add(Conv2D(32, kernel_size=(1, 1), activation='relu', input_shape=input_shape))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(MaxPooling2D(pool_size=(2, 2)))
    model.add(Dropout(0.25))
    model.add(Flatten())
    model.add(Dense(128, activation='relu'))
    model.add(Dropout(0.5))
    model.add(Dense(num_classes, activation='softmax'))

    model.compile(loss=keras.losses.categorical_crossentropy,
                  optimizer=keras.optimizers.Adadelta(),
                  metrics=['accuracy'])

    model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1)
    model.save("First.model")

    score = model.evaluate(x_train, y_train, verbose=0)
    print('Test loss:', score[0])
    print('Test accuracy:', score[1])

    1 answer

    It seems that part of the video memory is already occupied by something else (perhaps by the video driver, the CUDA runtime, etc.).
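
    If TensorFlow's default behaviour of reserving nearly all free GPU memory up front is part of the problem, you can make it allocate on demand instead. A minimal sketch, assuming the TensorFlow backend of standalone Keras (as in the question's code):

     import tensorflow as tf
     from keras import backend as K

     # Allocate GPU memory on demand instead of grabbing
     # (almost) all of it when the session is created.
     config = tf.ConfigProto()
     config.gpu_options.allow_growth = True

     # Hand the configured session to Keras so model.fit uses it.
     K.set_session(tf.Session(config=config))

    This does not create extra memory, but it avoids an overly greedy up-front reservation and makes the real consumption visible in nvidia-smi.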

    To see how much video memory is available to TensorFlow, run the following in a Python session:

     import tensorflow as tf
     sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

    This is what TensorFlow prints for my GTX 1070 (8 GiB of VRAM) right after starting IPython:

     In [6]: sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
     2019-01-29 19:37:05.630989: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX
     2019-01-29 19:37:06.219693: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
     name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.7845
     pciBusID: 0000:05:00.0
     totalMemory: 8.00GiB freeMemory: 6.63GiB
     2019-01-29 19:37:06.234201: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0
     2019-01-29 19:37:08.175263: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
     2019-01-29 19:37:08.189782: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988]      0
     2019-01-29 19:37:08.199701: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0:   N
     2019-01-29 19:37:08.210621: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6391 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:05:00.0, compute capability: 6.1)
     Device mapping:
     /job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 1070, pci bus id: 0000:05:00.0, compute capability: 6.1
     2019-01-29 19:37:08.243113: I tensorflow/core/common_runtime/direct_session.cc:307] Device mapping:
     /job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: GeForce GTX 1070, pci bus id: 0000:05:00.0, compute capability: 6.1

    Pay attention to this line:

     totalMemory: 8.00GiB freeMemory: 6.63GiB
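
    On a 2 GiB card the free amount will be much smaller. As for feeding the dataset to the card "in portions": Keras already does that — model.fit uploads only batch_size samples to the GPU at a time. If the full arrays don't even fit in host RAM, a generator can stream them chunk by chunk. A minimal sketch (batch_generator is an illustrative helper, and batch_size=8 is an arbitrary example value):

     import numpy as np

     def batch_generator(x, y, batch_size):
         # Illustrative helper: yields the training data in small portions,
         # looping forever (fit_generator stops after steps_per_epoch steps).
         x, y = np.asarray(x), np.asarray(y)
         while True:
             for i in range(0, len(x), batch_size):
                 yield x[i:i + batch_size], y[i:i + batch_size]

     model.fit_generator(batch_generator(x_train, y_train, batch_size=8),
                         steps_per_epoch=len(x_train) // 8,
                         epochs=epochs,
                         verbose=1)

    Note also that with your architecture the Dense(128) layer after Flatten receives roughly 66 × 119 × 64 ≈ 500k features, i.e. about 64M weights; together with gradients and Adadelta's per-weight accumulators that alone can approach 1 GiB, so an extra MaxPooling2D before Flatten may matter more here than the batch size.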