During training, how do I get the value of a tensor defined at build time?


Question

From this tutorial, one can see that the loss_value variable passed as a parameter to tape.gradient() can have a value like:

tf.Tensor(0.36069894, shape=(), dtype=float32)
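
For reference, a concrete value like that arises when the loss is computed eagerly inside the tape. A minimal sketch of that pattern (the model, loss function, and batch here are made-up placeholders, not the tutorial's actual code):

import tensorflow as tf

# Toy setup; model, loss_fn, and the data are placeholders.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
loss_fn = tf.keras.losses.MeanSquaredError()
x_batch = tf.random.normal((4, 3))
y_batch = tf.random.normal((4, 1))

with tf.GradientTape() as tape:
    y_pred = model(x_batch)
    loss_value = loss_fn(y_batch, y_pred)  # eager tensor with a concrete value
grads = tape.gradient(loss_value, model.trainable_variables)
print(loss_value)  # e.g. tf.Tensor(0.36069894, shape=(), dtype=float32)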

I am constructing something similar (the code is very long and complex, so I can't post it here), but the loss element I access is computed at build time (i.e., before the training loop), like this:

x = Input(shape=(2, *input_shape))
z = SomeLayer(x)                    # placeholder layers
y = SomeOtherLayer(z)
total_loss = TensorOperations(z)    # symbolic tensor, created at build time
net = Model(x, y)

where TensorOperations can be various operations such as K.mean, K.abs, etc.
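
For example, a hypothetical concrete case (not my actual code) would be:

total_loss = K.mean(K.abs(z))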

But when I print this total_loss variable during training:

with tf.GradientTape() as tape:
    net(input_data)
    print(total_loss)  # prints the symbolic graph tensor, not its value

I get something like:

Tensor("truediv_8:0", shape=(?,), dtype=float32)

My question is: how do I get a tensor with an actual value (like the first one)?

[EDIT] The "real" code I am referring to is much more complex; here is the total_loss variable and here is the net = Model().


Answer 1

Score: 1

The easiest approach is probably to include your loss layer as an output of your model:

net = Model(x, [y, total_loss])

Then, in the training loop:

with tf.GradientTape() as tape:
    y_pred, loss_value = net(input_data)
    print(loss_value)  # the loss is now returned by the forward pass

However, I would advise trying to decouple the loss from the model declaration; it's just more flexible. See the sketch below.
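
A minimal sketch of that decoupled approach, assuming eager execution (TF 2.x) and reusing the question's placeholder names; K.mean(K.abs(...)) merely stands in for whatever TensorOperations computes:

import tensorflow as tf
from tensorflow.keras import backend as K

# Expose the intermediate tensor z as a second output and compute the
# loss inside the training loop instead of at build time.
net = Model(x, [y, z])          # x, z, y as defined in the question
optimizer = tf.keras.optimizers.Adam()

with tf.GradientTape() as tape:
    y_pred, z_out = net(input_data)
    total_loss = K.mean(K.abs(z_out))  # stand-in for TensorOperations
grads = tape.gradient(total_loss, net.trainable_variables)
optimizer.apply_gradients(zip(grads, net.trainable_variables))
print(total_loss)  # now an eager tensor with a concrete value

This way the model itself stays loss-agnostic, and the same net can be trained against different objectives without rebuilding it.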

