TensorFlow TensorBoard not showing acc, loss, acc_val, and loss_val, only epoch_accuracy and epoch_loss
Question
I am looking to have TensorBoard display graphs corresponding to acc, loss, acc_val, and loss_val, but they do not appear for some reason. Here is what I am seeing, versus what I was hoping to see (screenshots omitted).

I am following the instructions here to use TensorBoard in a Google Colab notebook.

This is the code used to generate the TensorBoard logs:
```python
opt = tf.keras.optimizers.Adam(lr=0.001, decay=1e-6)

tensorboard = TensorBoard(log_dir="logs/{}".format(NAME),
                          histogram_freq=1,
                          write_graph=True,
                          write_grads=True,
                          batch_size=BATCH_SIZE,
                          write_images=True)

model.compile(
    loss='sparse_categorical_crossentropy',
    optimizer=opt,
    metrics=['accuracy']
)

# Train model
history = model.fit(
    train_x, train_y,
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
    validation_data=(validation_x, validation_y),
    callbacks=[tensorboard]
)
```
How do I go about solving this issue? Any ideas? Your help is very much appreciated!
Answer 1
Score: 1
That's the intended behavior. In TensorFlow 2.x, the Keras TensorBoard callback logs the built-in metrics once per epoch under the names epoch_accuracy and epoch_loss (split into train and validation runs), rather than as acc, loss, val_acc, and val_loss. If you want to log custom scalars such as a dynamic learning rate, you need to use the TensorFlow Summary API.
Retrain the regression model and log a custom learning rate. Here's how:

- Create a file writer, using `tf.summary.create_file_writer()`.
- Define a custom learning rate function. This will be passed to the Keras `LearningRateScheduler` callback.
- Inside the learning rate function, use `tf.summary.scalar()` to log the custom learning rate.
- Pass the `LearningRateScheduler` callback to `Model.fit()`.
In general, to log a custom scalar, you need to use `tf.summary.scalar()` with a file writer. The file writer is responsible for writing data for this run to the specified directory and is implicitly used when you call `tf.summary.scalar()`.
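To make the mechanism concrete, here is a minimal sketch of the Summary API on its own, outside of any Keras callback. The log directory and metric name are illustrative, not from the original answer:

```python
import tensorflow as tf

# Create a file writer for an illustrative run directory.
writer = tf.summary.create_file_writer("logs/scalars/demo")

# Scope logging to this writer explicitly, rather than installing it
# globally with set_as_default() as the full example below does.
with writer.as_default():
    for step in range(100):
        # Record a decaying scalar; it appears under the "Scalars" tab.
        tf.summary.scalar("my_metric", data=0.95 ** step, step=step)

writer.flush()  # Ensure pending events are written to disk.
```

Either style works: `file_writer.set_as_default()` installs a default writer for the process, which is what the full example below relies on.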
logdir = "logs/scalars/" + datetime.now().strftime("%Y%m%d-%H%M%S")
file_writer = tf.summary.create_file_writer(logdir + "/metrics")
file_writer.set_as_default()
def lr_schedule(epoch):
"""
Returns a custom learning rate that decreases as epochs progress.
"""
learning_rate = 0.2
if epoch > 10:
learning_rate = 0.02
if epoch > 20:
learning_rate = 0.01
if epoch > 50:
learning_rate = 0.005
tf.summary.scalar('learning rate', data=learning_rate, step=epoch)
return learning_rate
lr_callback = keras.callbacks.LearningRateScheduler(lr_schedule)
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
model = keras.models.Sequential([
keras.layers.Dense(16, input_dim=1),
keras.layers.Dense(1),
])
model.compile(
loss='mse', # keras.losses.mean_squared_error
optimizer=keras.optimizers.SGD(),
)
training_history = model.fit(
x_train, # input
y_train, # output
batch_size=train_size,
verbose=0, # Suppress chatty output; use Tensorboard instead
epochs=100,
validation_data=(x_test, y_test),
callbacks=[tensorboard_callback, lr_callback],
)
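Since the question is about Colab, note that you can then view the results inline with the TensorBoard notebook extension; a minimal sketch, assuming the logs/scalars directory used above:

```python
# Load the TensorBoard notebook extension (works in Colab and Jupyter),
# then start TensorBoard inline, pointed at the log directory from above.
%load_ext tensorboard
%tensorboard --logdir logs/scalars
```

With the scheduler callback in place, the custom learning rate curve appears in the Scalars tab alongside the epoch_loss charts.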