How to regroup multiple fit calls into a single epoch with Keras
Question
I am training a model with Keras on gigabytes of data, to the point where my computer can't handle the RAM needed. So I am trying to implement my training so that one epoch is done with multiple model.fit calls, with something like:
    for epoche in range(nbEpoches):
        for index_df in range(len(list_of_dataFrames)):
            dataFrame = load_dataFrame(list_of_dataFrames, index_df)  # load only this DF in RAM
            X_train, Y_train, X_test, Y_test = calc_train_arrays(dataFrame)
            model.fit(
                X_train, Y_train,
                validation_data=(X_test, Y_test),
                # ... what I am asking
                batch_size=batch_size,
            )
X_train and X_test are NumPy arrays of shape (many thousands, 35 to 200, 54+), so using multiple batches is mandatory (for the GPU's VRAM), and so is loading the DataFrames dynamically (for the RAM); this is what forces me to use multiple fit calls for the same epoch.
I am asking how to use the model.fit function in order to do this.
I also wondered whether using a generator of arrays of shape (batch_size, 35+, 54+) and specifying steps_per_epoch could be an idea?
I first tried to avoid the problem by training on a single DataFrame of around 20k samples, but the model has generalisation issues. I also tried doing just one epoch per DataFrame, but it seems like each DataFrame was un-learning the others.
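A minimal sketch of what the missing arguments could look like: Keras keeps the model weights between successive fit calls, so each call with epochs=1 simply continues training where the previous one stopped. It reuses the load_dataFrame and calc_train_arrays helpers from the snippet above and is only an illustration, not a tested solution:

    # Sketch only: the same loop as above, with plausible arguments for fit().
    # Keras keeps the weights between successive fit() calls, so each call with
    # epochs=1 continues training on the next chunk of data.
    for epoche in range(nbEpoches):
        for index_df in range(len(list_of_dataFrames)):
            dataFrame = load_dataFrame(list_of_dataFrames, index_df)  # only this DataFrame in RAM
            X_train, Y_train, X_test, Y_test = calc_train_arrays(dataFrame)
            model.fit(
                X_train, Y_train,
                validation_data=(X_test, Y_test),
                batch_size=batch_size,   # several batches per call, to fit in VRAM
                epochs=1,                # one pass over this DataFrame per outer epoch
                shuffle=True,            # shuffle samples within the DataFrame
            )
            # drop the references so the arrays can be garbage-collected before the next chunk
            del dataFrame, X_train, Y_train, X_test, Y_test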
Answer 1
Score: 0
You should use fit_generator instead of fit. It loads the examples as needed instead of loading them all at once. If you're at all familiar with Python 2, it's like the difference between xrange and range: range creates a list and puts it in your RAM, whereas xrange creates a generator, which is much more memory efficient. In Python 3, range now defaults to the xrange behavior.
https://faroit.com/keras-docs/1.2.0/models/model/
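As a rough sketch of that suggestion (an illustration under assumptions, not the answer's exact code): a plain Python generator that yields one batch at a time can be combined with steps_per_epoch. Note that in TensorFlow 2.x fit_generator is deprecated and model.fit accepts generators directly; total_batches below is a hypothetical value you would compute from your data sizes, and load_dataFrame / calc_train_arrays are the helpers from the question.

    # Sketch only: a generator that yields one batch at a time, so only one
    # DataFrame ever sits in RAM at once.
    def batch_generator(list_of_dataFrames, batch_size):
        while True:  # Keras expects the generator to loop indefinitely
            for index_df in range(len(list_of_dataFrames)):
                dataFrame = load_dataFrame(list_of_dataFrames, index_df)
                X_train, Y_train, _, _ = calc_train_arrays(dataFrame)
                for start in range(0, len(X_train), batch_size):
                    yield (X_train[start:start + batch_size],
                           Y_train[start:start + batch_size])

    gen = batch_generator(list_of_dataFrames, batch_size)
    # total_batches: hypothetical count of batches that make up one epoch
    # Old Keras: model.fit_generator(gen, steps_per_epoch=total_batches, epochs=nbEpoches)
    # tf.keras 2.x: model.fit() takes the generator directly
    model.fit(gen, steps_per_epoch=total_batches, epochs=nbEpoches)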
Also, just a PSA: I didn't know this when I first got interested in ML, but Keras is now part of TensorFlow, and Caffe (via Caffe2) has been merged into PyTorch. Keras and Caffe may be considered older tools nowadays and may not receive updates as frequently as PyTorch or TensorFlow. Personally, I recommend PyTorch of the two, since TensorFlow is owned by Google and PyTorch has a little more of an open-source spirit to it.
Answer 2
Score: 0
I guess you have 2 options.
- You can try a custom data generator. Here is a tutorial (I think this may be a little difficult): https://medium.com/analytics-vidhya/write-your-own-custom-data-generator-for-tensorflow-keras-1252b64e41c3
- You can also define a custom training loop; here is a tutorial: https://www.tensorflow.org/guide/keras/writing_a_training_loop_from_scratch (a short sketch follows below)
I am not sure if this is what you want.
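A compressed sketch of the second option (a custom training loop), assuming an already-built tf.keras model plus the load_dataFrame / calc_train_arrays helpers from the question; the optimizer and loss are placeholders, not a prescription:

    # Sketch only, following the TensorFlow custom-training-loop guide linked above.
    import tensorflow as tf

    optimizer = tf.keras.optimizers.Adam()
    loss_fn = tf.keras.losses.MeanSquaredError()  # placeholder: use the loss that fits your task

    for epoch in range(nbEpoches):
        for index_df in range(len(list_of_dataFrames)):
            dataFrame = load_dataFrame(list_of_dataFrames, index_df)  # one DataFrame in RAM at a time
            X_train, Y_train, _, _ = calc_train_arrays(dataFrame)
            # tf.data slices the in-memory arrays into VRAM-sized batches
            dataset = tf.data.Dataset.from_tensor_slices((X_train, Y_train)).batch(batch_size)
            for x_batch, y_batch in dataset:
                with tf.GradientTape() as tape:
                    predictions = model(x_batch, training=True)
                    loss = loss_fn(y_batch, predictions)
                grads = tape.gradient(loss, model.trainable_weights)
                optimizer.apply_gradients(zip(grads, model.trainable_weights))

This gives explicit control over what counts as one epoch, at the cost of re-implementing what fit provides for free (metrics, callbacks, validation).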
Comments