Using a custom parameter in train_step() method of a VAE, which is different for each epoch


I am trying to build a VAE, similar to the example here: https://keras.io/examples/generative/vae/
However, I am using a custom loss in the model, and I would like this loss to be multiplied by a factor that increases with each epoch. I set this up by defining, at the beginning:

def __init__(self, encoder, decoder, **kwargs):
    super().__init__(**kwargs)
    self.encoder, self.decoder = encoder, decoder
    self.eloss_weight = tf.Variable(initial_value=args.eloss_weight, trainable=False)

compiled the model to run eagerly

vae.compile(optimizer=tf.keras.optimizers.Adam(jit_compile=False), run_eagerly=True)

and when fitting I change the value of the energy loss weight using a callback like this

def eloss_weight_increase(epoch, logs):
    vae.eloss_weight = vae.eloss_weight + 1


increase_eloss_weight = tf.keras.callbacks.LambdaCallback(on_epoch_end=eloss_weight_increase)

vae.fit(
    X_train, V_train, batch_size=args.batch_size, epochs=args.epochs,
    callbacks=[increase_eloss_weight], verbose=1,)

I had to use run_eagerly because, without it, the change was never picked up when the parameter was read as self.eloss_weight in train_step (I guess that, to make the model run as fast as possible, the compiled train_step just remembered the exact value and stopped treating it as a variable). However, because of run_eagerly it now runs 4 times slower, which is pretty bad: before this, the model took 4.5 hours to train, and now it takes almost a day. Is there any way to do this without using run_eagerly=True?
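To illustrate the tracing behaviour described above: a plain Python number gets baked into the compiled graph as a constant, while a tf.Variable is read fresh on every call. A minimal sketch (class and attribute names are illustrative, not from the original model):

```python
import tensorflow as tf

class Weighted:
    def __init__(self):
        # Non-trainable variable: tf.function reads its *current* value
        # on every call, instead of baking a constant into the graph.
        self.w = tf.Variable(1.0, trainable=False)

    @tf.function
    def loss(self, x):
        return self.w * x

m = Weighted()
a = float(m.loss(tf.constant(2.0)))  # w == 1.0, so 2.0
m.w.assign_add(1.0)                  # in-place update, no retracing needed
b = float(m.loss(tf.constant(2.0)))  # w == 2.0, so 4.0
print(a, b)                          # 2.0 4.0
```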

Thank you very much.

Answer 1

Score: 1


vae.eloss_weight = vae.eloss_weight + 1 replaces the Variable with a Tensor (which is not the same thing). Use the variable's assign method instead, so the Variable object stays intact:

vae.eloss_weight.assign(vae.eloss_weight + 1.)

or

vae.eloss_weight.assign_add(1.)

I have used this myself before, so it should work, even without eager execution.
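To make the fix concrete, here is a minimal, hypothetical sketch (a toy regression model in place of the actual VAE, with made-up shapes) showing that assign_add inside a LambdaCallback is visible to the compiled train_step, with no run_eagerly=True:

```python
import tensorflow as tf

# Hypothetical toy model standing in for the VAE (made-up layer sizes);
# the point is only that eloss_weight is a tf.Variable updated in place.
class Toy(tf.keras.Model):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.eloss_weight = tf.Variable(1.0, trainable=False)
        self.dense = tf.keras.layers.Dense(1)

    def call(self, x):
        return self.dense(x)

    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            # The variable is read fresh inside the compiled train_step.
            loss = self.eloss_weight * tf.reduce_mean(
                tf.square(self(x, training=True) - y))
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        return {"loss": loss}

model = Toy()
model.compile(optimizer="adam")  # note: no run_eagerly=True

# assign_add mutates the Variable instead of replacing it with a Tensor.
bump = tf.keras.callbacks.LambdaCallback(
    on_epoch_end=lambda epoch, logs: model.eloss_weight.assign_add(1.0))

x = tf.random.normal((32, 4))
y = tf.random.normal((32, 1))
model.fit(x, y, epochs=3, callbacks=[bump], verbose=0)
print(float(model.eloss_weight))  # 1.0 + one increment per epoch = 4.0
```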

huangapple
  • Published on 2023-07-11 01:16:26
  • Original link: https://go.coder-hub.com/76655950.html