How to resize MNIST images without running out of RAM?

Question

I'm trying to preprocess my data by resizing the training-set images to 224 × 224 with 3 channels, to use them as input to a VGG16 model, but I'm running out of RAM. How do I resolve this?
```python
new_size = (224, 224)
new_x_train = []
for image in x_train:
    image = tf.constant(image)
    image = tf.expand_dims(image, axis=-1)
    image = tf.concat([image, image, image], axis=-1)
    image = tf.image.resize(image, new_size)
    new_x_train.append(image)
new_x_train = tf.stack(new_x_train)
```

This works for a single image. However, when I try to do the same thing for all 60,000 images in a loop, I run out of RAM.
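For scale, a back-of-the-envelope estimate of the memory needed to hold all 60,000 resized images at once (assuming float32 tensors, as `tf.image.resize` produces):

```python
# Approximate memory for 60,000 images of 224 x 224 x 3 float32 values
num_images = 60_000
bytes_per_image = 224 * 224 * 3 * 4  # 4 bytes per float32
total_gb = num_images * bytes_per_image / 1e9
print(f"{total_gb:.1f} GB")  # roughly 36 GB, far more than typical RAM
```

This is why materializing the whole resized dataset in memory fails, regardless of how the loop is written.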
Answer 1

Score: 1

Your current approach loads all the images into memory, which is inefficient. Instead, try using a Python generator or a TensorFlow Dataset to preprocess the data on the fly.

For your case, here is an example using a TensorFlow Dataset, assuming x_train is a NumPy array:
```python
new_size = (224, 224)

def resize_image(image):
    image = tf.expand_dims(image, axis=-1)    # add a channel axis: (28, 28) -> (28, 28, 1)
    image = tf.repeat(image, 3, axis=-1)      # replicate to 3 channels for VGG16
    return tf.image.resize(image, new_size)   # resize to 224 x 224

x_train_ds = tf.data.Dataset.from_tensor_slices(x_train)
x_train_ds = x_train_ds.map(resize_image)
```

With tf.data, the resize_image function is called on the fly for each element as you iterate, instead of everything being loaded into memory at once. If you do want to keep the results in memory, you can still do so by calling x_train_ds = x_train_ds.cache(), but I wouldn't recommend that if your memory is limited.
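Putting it together, a self-contained sketch of the same pipeline with batching and prefetching added (the batch size of 32 is an arbitrary choice, `num_parallel_calls` and `prefetch` are optional additions beyond the answer above, and the small random array stands in for the real (60000, 28, 28) MNIST array):

```python
import numpy as np
import tensorflow as tf

new_size = (224, 224)

def resize_image(image):
    image = tf.expand_dims(image, axis=-1)    # (28, 28) -> (28, 28, 1)
    image = tf.repeat(image, 3, axis=-1)      # -> (28, 28, 3)
    return tf.image.resize(image, new_size)   # -> (224, 224, 3)

# Stand-in for the real MNIST training array:
x_train = np.random.rand(100, 28, 28).astype("float32")

x_train_ds = (
    tf.data.Dataset.from_tensor_slices(x_train)
    .map(resize_image, num_parallel_calls=tf.data.AUTOTUNE)
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)   # overlap preprocessing with training
)
```

A dataset built this way (with labels zipped in) can be passed straight to model.fit for a VGG16-style model, and only one prepared batch needs to live in memory at a time.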
Furthermore, I encourage you to learn about it in more detail from this link:
Comments