TensorFlow TensorBoard not showing acc, loss, acc_val, and loss_val, only epoch_accuracy and epoch_loss

Question

I am looking to have TensorBoard display graphs corresponding to acc, loss, acc_val, and loss_val, but for some reason they do not appear. Here is what I am seeing:

[Screenshot: TensorBoard showing only epoch_accuracy and epoch_loss]

I would like to see this instead:

[Screenshot: TensorBoard showing graphs for acc, loss, acc_val, and loss_val]

I am following the instructions here to use TensorBoard in a Google Colab notebook.
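For reference, the basic Colab setup from those instructions looks like this (a minimal sketch; %load_ext and %tensorboard are the standard notebook magics that ship with the tensorboard package, and logs matches the log_dir prefix used below):

# Load the TensorBoard notebook extension, then point it at the log directory
%load_ext tensorboard
%tensorboard --logdir logs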

This is the code used to generate the TensorBoard logs:

import tensorflow as tf
from tensorflow.keras.callbacks import TensorBoard

# NAME, BATCH_SIZE, and EPOCHS are defined elsewhere in the notebook
opt = tf.keras.optimizers.Adam(lr=0.001, decay=1e-6)

tensorboard = TensorBoard(log_dir="logs/{}".format(NAME),
                          histogram_freq=1,
                          write_graph=True,
                          write_grads=True,
                          batch_size=BATCH_SIZE,
                          write_images=True)

model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=opt,
    metrics=['accuracy']
)

# Train model
history = model.fit(
     train_x, train_y,
     batch_size=BATCH_SIZE,
     epochs=EPOCHS,
     validation_data=(validation_x, validation_y),
     callbacks=[tensorboard]
)

How do I go about solving this issue? Any ideas? Your help is very much appreciated!

Answer 1

Score: 1

That's the intended behavior: in TF 2.x, the Keras TensorBoard callback logs per-epoch metrics under the names epoch_accuracy and epoch_loss (with separate train and validation runs), so the acc, loss, val_acc, and val_loss tags from TF 1.x no longer appear. If you want to log custom scalars such as a dynamic learning rate, you need to use the TensorFlow Summary API.

As an example, retrain a simple regression model and log a custom learning rate. Here's how:

  1. Create a file writer using tf.summary.create_file_writer().
  2. Define a custom learning rate function. This will be passed to the Keras LearningRateScheduler callback.
  3. Inside the learning rate function, use tf.summary.scalar() to log the custom learning rate.
  4. Pass the LearningRateScheduler callback to Model.fit().

In general, to log a custom scalar, you need to use tf.summary.scalar() with a file writer. The file writer is responsible for writing data for this run to the specified directory and is used implicitly when you call tf.summary.scalar().

logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
file_writer = tf.summary.create_file_writer(logdir + "/metrics")
file_writer.set_as_default()

def lr_schedule(epoch):
  """
  Returns a custom learning rate that decreases as epochs progress.
  """
  learning_rate = 0.2
  if epoch > 10:
    learning_rate = 0.02
  if epoch > 20:
    learning_rate = 0.01
  if epoch > 50:
    learning_rate = 0.005

  tf.summary.scalar('learning rate', data=learning_rate, step=epoch)
  return learning_rate

lr_callback = keras.callbacks.LearningRateScheduler(lr_schedule)
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)

model = keras.models.Sequential([
    keras.layers.Dense(16, input_dim=1),
    keras.layers.Dense(1),
])

model.compile(
    loss='mse', # keras.losses.mean_squared_error
    optimizer=keras.optimizers.SGD(),
)

training_history = model.fit(
    x_train, # input
    y_train, # output
    batch_size=train_size,
    verbose=0, # Suppress chatty output; use Tensorboard instead
    epochs=100,
    validation_data=(x_test, y_test),
    callbacks=[tensorboard_callback, lr_callback],
)
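Once training has run, the custom scalar appears in TensorBoard's SCALARS tab under the "learning rate" tag, alongside epoch_loss. To view it (a usage sketch, assuming the notebook magic shown earlier is available):

%tensorboard --logdir logs/scalars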
