val_loss is stuck at 0 while training my keras.Model on the CIFAR-10 dataset


Question

I am trying to fit a variational autoencoder to the CIFAR-10 dataset, and I want to print the losses on both the training and validation data.

Unfortunately, even though I pass validation_data=(x_val, y_val) to the fit method, val_total_loss (and the other validation losses, val_reconstruction_loss and val_kl_loss) stays stuck at 0 during training.

This is the full reproducible example:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.optimizers import Adam
from sklearn.model_selection import train_test_split


def sample(z_mean, z_log_var):
    # Reparameterization trick: z = mean + sigma * epsilon, with epsilon ~ N(0, I).
    # (The original post omits this helper; this is the standard implementation.)
    epsilon = tf.random.normal(shape=tf.shape(z_mean))
    return z_mean + tf.exp(0.5 * z_log_var) * epsilon


class VAE(keras.Model):
    def __init__(self, encoder, decoder, **kwargs):
        super().__init__(**kwargs)
        self.encoder = encoder
        self.decoder = decoder
        self.total_loss_tracker = keras.metrics.Mean(name="total_loss")
        self.reconstruction_loss_tracker = keras.metrics.Mean(name="reconstruction_loss")
        self.kl_loss_tracker = keras.metrics.Mean(name="kl_loss")

    @property
    def metrics(self):
        return [
            self.total_loss_tracker,
            self.reconstruction_loss_tracker,
            self.kl_loss_tracker,
        ]

    def train_step(self, data):
        with tf.GradientTape() as tape:
            z_mean, z_log_var, z = self.encoder(data)
            reconstruction = self.decoder(z)
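            # Per-image binary cross-entropy summed over height and width, then averaged over the batch.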
            reconstruction_loss = tf.reduce_mean(
                tf.reduce_sum(
                    keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2)
                )
            )
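            # Closed-form KL divergence between N(z_mean, exp(z_log_var)) and the standard normal prior.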
            kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
            kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
            total_loss = reconstruction_loss + kl_loss

        grads = tape.gradient(total_loss, self.trainable_weights)
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))

        self.total_loss_tracker.update_state(total_loss)
        self.reconstruction_loss_tracker.update_state(reconstruction_loss)
        self.kl_loss_tracker.update_state(kl_loss)

        return {
            "loss": self.total_loss_tracker.result(),
            "reconstruction_loss": self.reconstruction_loss_tracker.result(),
            "kl_loss": self.kl_loss_tracker.result(),
        }

    def call(self, inputs, training=None, mask=None):
        _, _, z = self.encoder(inputs)
        outputs = self.decoder(z)
        return outputs


class Encoder(keras.Model):
    def __init__(self, latent_dimension):
        super(Encoder, self).__init__()
        self.latent_dim = latent_dimension
        self.conv_block1 = keras.Sequential([
            layers.Conv2D(filters=64, kernel_size=3, activation="relu", strides=2, padding="same"),
            layers.BatchNormalization()
        ])
        self.conv_block2 = keras.Sequential([
            layers.Conv2D(filters=128, kernel_size=3, activation="relu", strides=2, padding="same"),
            layers.BatchNormalization()
        ])
        self.conv_block3 = keras.Sequential([
            layers.Conv2D(filters=256, kernel_size=3, activation="relu", strides=2, padding="same"),
            layers.BatchNormalization()
        ])
        self.flatten = layers.Flatten()
        self.dense = layers.Dense(units=100, activation="relu")
        self.z_mean = layers.Dense(latent_dimension, name="z_mean")
        self.z_log_var = layers.Dense(latent_dimension, name="z_log_var")
        self.sampling = sample

    def call(self, inputs, training=None, mask=None):
        x = self.conv_block1(inputs)
        x = self.conv_block2(x)
        x = self.conv_block3(x)
        x = self.flatten(x)
        x = self.dense(x)
        z_mean = self.z_mean(x)
        z_log_var = self.z_log_var(x)
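        # Draw a latent sample via the reparameterization trick, so gradients flow through z_mean and z_log_var.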
        z = self.sampling(z_mean, z_log_var)
        return z_mean, z_log_var, z


class Decoder(keras.Model):
    def __init__(self, latent_dimension):
        super(Decoder, self).__init__()
        self.latent_dim = latent_dimension
        self.dense1 = keras.Sequential([
            layers.Dense(units=100, activation="relu"),
            layers.BatchNormalization()
        ])
        self.dense2 = keras.Sequential([
            layers.Dense(units=1024, activation="relu"),
            layers.BatchNormalization()
        ])
        self.dense3 = keras.Sequential([
            layers.Dense(units=4096, activation="relu"),
            layers.BatchNormalization()
        ])
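        # 4 * 4 * 256 = 4096, so the last dense output reshapes into a 4x4x256 feature map.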
        self.reshape = layers.Reshape((4, 4, 256))
        self.deconv1 = keras.Sequential([
            layers.Conv2DTranspose(filters=256, kernel_size=3, activation="relu", strides=2, padding="same"),
            layers.BatchNormalization()
        ])
        self.deconv2 = keras.Sequential([
            layers.Conv2DTranspose(filters=128, kernel_size=3, activation="relu", strides=1, padding="same"),
            layers.BatchNormalization()
        ])
        self.deconv3 = keras.Sequential([
            layers.Conv2DTranspose(filters=128, kernel_size=3, activation="relu", strides=2, padding="same"),
            layers.BatchNormalization()
        ])
        self.deconv4 = keras.Sequential([
            layers.Conv2DTranspose(filters=64, kernel_size=3, activation="relu", strides=1, padding="same"),
            layers.BatchNormalization()
        ])
        self.deconv5 = keras.Sequential([
            layers.Conv2DTranspose(filters=64, kernel_size=3, activation="relu", strides=2, padding="same"),
            layers.BatchNormalization()
        ])
        self.deconv6 = layers.Conv2DTranspose(filters=3, kernel_size=3, activation="sigmoid", padding="same")

    def call(self, inputs, training=None, mask=None):
        x = self.dense1(inputs)
        x = self.dense2(x)
        x = self.dense3(x)
        x = self.reshape(x)
        x = self.deconv1(x)
        x = self.deconv2(x)
        x = self.deconv3(x)
        x = self.deconv4(x)
        x = self.deconv5(x)
        decoder_outputs = self.deconv6(x)
        return decoder_outputs

latent_dimension = 100
encoder = Encoder(latent_dimension)
decoder = Decoder(latent_dimension)

# Load the CIFAR-10 dataset
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data() 

# Normalize the input data
x_train = x_train.astype("float32") / 255.0
x_test = x_test.astype("float32") / 255.0

# Split the data into training, validation, and test sets
validation_size = 0.2  # 20% of the training data will be used for validation
x_train, x_val, y_train, y_val = train_test_split(x_train, y_train, test_size=validation_size)

vae = VAE(encoder, decoder)
vae.compile(optimizer=Adam())
epochs = 2
batch_size = 128
history = vae.fit(x_train, epochs=epochs, batch_size=batch_size, validation_data=(x_val, y_val))

whose output is:

Epoch 1/2
313/313 [==============================] - 32s 97ms/step - loss: 706.5906 - reconstruction_loss: 706.1145 - kl_loss: 0.0157 - val_total_loss: 0.0000e+00 - val_reconstruction_loss: 0.0000e+00 - val_kl_loss: 0.0000e+00
Epoch 2/2
313/313 [==============================] - 30s 95ms/step - loss: 705.7831 - reconstruction_loss: 690.2925 - kl_loss: 7.6490 - val_total_loss: 0.0000e+00 - val_reconstruction_loss: 0.0000e+00 - val_kl_loss: 0.0000e+00

I thought the problem was that history.validation_data is None even though I properly set the validation_data parameter of the fit method, as in my previous question, but the problem does not seem to be related to that.

Any help?

Note that my TensorFlow version is 2.13.0rc1 and I am using Python 3.11 as the interpreter.

Answer 1

Score: 0

"如xdurch0在他们的评论中提到的,我不得不实现test_step方法。"

英文:

As xdurch0 pointed out in their comment, I had to implement the test_step method.
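For anyone hitting the same issue: when you override train_step, Keras still runs its default test_step on the validation data, which computes only the compiled loss and metrics. Since this model compiles no loss and the custom trackers are only updated inside train_step, every val_ metric is reported as 0. A minimal test_step sketch that mirrors the train_step above could look like this (the tuple unpacking is an assumption, since validation_data=(x_val, y_val) delivers batches as (x, y) pairs):

    # Add this method to the VAE class, alongside train_step:
    def test_step(self, data):
        # validation_data=(x_val, y_val) arrives as an (x, y) tuple; the VAE only needs x.
        if isinstance(data, tuple):
            data = data[0]
        z_mean, z_log_var, z = self.encoder(data)
        reconstruction = self.decoder(z)
        reconstruction_loss = tf.reduce_mean(
            tf.reduce_sum(
                keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2)
            )
        )
        kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
        kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
        total_loss = reconstruction_loss + kl_loss

        # Update the same trackers as train_step; Keras prefixes the reported names with "val_".
        self.total_loss_tracker.update_state(total_loss)
        self.reconstruction_loss_tracker.update_state(reconstruction_loss)
        self.kl_loss_tracker.update_state(kl_loss)
        return {m.name: m.result() for m in self.metrics}

With this in place, val_total_loss, val_reconstruction_loss, and val_kl_loss should report the actual validation losses instead of 0.0000e+00.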
