Tensorflow: Custom MaxPooling layer – shape=(None, 64, 28, 28), dtype=float32)) could not be lifted out of a `tf.function`


Question

I'm attempting to create a custom max pooling layer with a stride of 2, a pool size of 2, and "same" padding, but I'm getting the following error:

ValueError: Argument `initial_value` (Tensor("custom_max_pooling2d/zeros:0", shape=(None, 64, 28, 28), dtype=float32)) could not be lifted out of a `tf.function`. (Tried to create variable with name='None'). To avoid this error, when constructing `tf.Variable`s inside of `tf.function` you can create the `initial_value` tensor in a `tf.init_scope` or pass a callable `initial_value` (e.g., `tf.Variable(lambda : tf.truncated_normal([10, 40]))`). Please file a feature request if this restriction inconveniences you.

Call arguments received by layer "custom_max_pooling2d" (type CustomMaxPooling2D):
• inputs=tf.Tensor(shape=(None, 28, 28, 64), dtype=float32)
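
For context: Keras traces call() as a tf.function on a symbolic input whose batch dimension is unknown, which is why the shapes above show None rather than 128. A minimal sketch, assuming TF 2.x, of how the run-time batch size can be read from the input instead of being hard-coded (the layer name here is hypothetical):

import tensorflow as tf

class BatchProbe(tf.keras.layers.Layer):
    def call(self, inputs):
        # During tracing, inputs.shape[0] is None; tf.shape() returns the
        # actual batch size as a tensor at run time.
        batch_size = tf.shape(inputs)[0]
        tf.print("run-time batch size:", batch_size)
        return inputs

# The workaround the error message itself suggests, if a variable must be
# created inside a tf.function: pass a callable initial_value, e.g.
#     v = tf.Variable(lambda: tf.zeros([10, 40]))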

It's as though the batch size of 128 (specified in the fit method) is being ignored. It's important to me to keep the low-level max pooling functionality, as opposed to just calling a TensorFlow function. The full code is as follows; everything works except for the CustomMaxPooling2D class.

import tensorflow as tf

fashion_mnist = tf.keras.datasets.fashion_mnist
(x_train_full, y_train_full), (x_test, y_test) = fashion_mnist.load_data()
x_valid, x_train = x_train_full[:5000]/255.0, x_train_full[5000:]/255.0
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
x_test = x_test/255.0

tensorboard_writer = tf.summary.create_file_writer('tensorboard/')

class ValidationLossCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs=None):
        with tensorboard_writer.as_default():
            val_loss = logs['val_loss']
            tf.summary.scalar('val_loss', val_loss, step=epoch)
            tensorboard_writer.flush()

loss = tf.keras.losses.SparseCategoricalCrossentropy()
opt = tf.keras.optimizers.Adam()
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=25, restore_best_weights=True)

class CustomMaxPooling2D(tf.keras.layers.Layer):
    def __init__(self):
        super(CustomMaxPooling2D, self).__init__()
        self.padding_format = tf.constant([[0, 0], [1, 1], [1, 1], [0, 0]])
        self.transpose_format = tf.constant([0, 3, 1, 2])
        self.reverse_transpose_format = tf.constant([0, 2, 3, 1])
        self.batch_size = tf.Variable(128, dtype=tf.int32)
    def build(self, input_shape):
        _, self.height, self.width, self.channels = input_shape
        self.output_width = tf.floor((self.height-1)/2)+1
    def call(self, inputs):
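        # NOTE: the line below is what the traceback points at; a fresh
        # tf.Variable is created on every call, inside the tf.function
        # that Keras traces for this layer.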
        self.out = tf.Variable(tf.zeros(shape=(self.batch_size, self.channels, self.width, self.width)))
        if self.width%self.output_width!=0:
            inputs = tf.pad(tensor=inputs, paddings=self.padding_format, mode='constant')
        inputs = tf.transpose(inputs, perm=self.transpose_format)
        for b in range(self.batch_size):
            for c in range(self.channels):
                input_data = inputs[b][c]
                for w in range(self.output_width):
                    for h in range(self.output_width):
                        self.out[int(b),int(c),int(w),int(h)].assign(tf.reduce_max(input_data[int(w)*2:(int(w)+1)*2, int(h)*2:(int(h)+1)*2]))
        return tf.transpose(self.out, perm=self.reverse_transpose_format)

model = tf.keras.models.Sequential([
    tf.keras.layers.Conv2D(64,7,activation="relu",padding="same",input_shape=[28,28,1]),
    CustomMaxPooling2D(),
    tf.keras.layers.Conv2D(128,3,activation="relu",padding="same"),
    tf.keras.layers.Conv2D(128,3,activation="relu",padding="same"),
    CustomMaxPooling2D(),
    tf.keras.layers.Conv2D(256,3,activation="relu",padding="same"),
    tf.keras.layers.Conv2D(256,3,activation="relu",padding="same"),
    CustomMaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="softmax")
])

model.compile(optimizer=opt, loss=loss)

model.fit(x_train, y_train, batch_size=128, epochs=10000,shuffle=True, validation_data=(x_valid, y_valid), callbacks=[early_stopping, ValidationLossCallback()])

predictions = model.predict(x_test)

# Step 5: Evaluate the predictions
accuracy = tf.keras.metrics.Accuracy()
accuracy.update_state(tf.argmax(predictions, axis=1), y_test)
accuracy_result = accuracy.result().numpy()

print("Accuracy: {:.2f}%".format(accuracy_result * 100))

model.save_weights('models/maxpoolmodel.h5')

Any advice/guidance would be appreciated.


Answer 1

Score: 0

Initially I thought it was the batch_size that was having the issue, since that's what the error output shows. After running your code, I noticed that the problem was with self.out (which uses batch_size as a parameter).

self.out needs to be moved to the build function:

class CustomMaxPooling2D(tf.keras.layers.Layer):
    def __init__(self):
        super(CustomMaxPooling2D, self).__init__()
        self.padding_format = tf.constant([[0, 0], [1, 1], [1, 1], [0, 0]])
        self.transpose_format = tf.constant([0, 3, 1, 2])
        self.reverse_transpose_format = tf.constant([0, 2, 3, 1])
        self.batch_size = tf.Variable(128, dtype=tf.int32)
        
    def build(self, input_shape):
        _, self.height, self.width, self.channels = input_shape
        self.output_width = tf.floor((self.height-1)/2)+1
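        # Creating the variable once here, in build(), keeps its
        # initial_value out of the tf.function that traces call().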
        self.out = tf.Variable(tf.zeros(shape=(self.batch_size, self.channels, self.width, self.width)))       
      
    def call(self, inputs):
        if self.width % self.output_width != 0:
            inputs = tf.pad(tensor=inputs, paddings=self.padding_format, mode='constant')
        inputs = tf.transpose(inputs, perm=self.transpose_format)
        for b in range(self.batch_size):
            for c in range(self.channels):
                input_data = inputs[b][c]
                for w in range(self.output_width):
                    for h in range(self.output_width):
                        self.out[int(b),int(c),int(w),int(h)].assign(tf.reduce_max(input_data[int(w)*2:(int(w)+1)*2, int(h)*2:(int(h)+1)*2]))
        return tf.transpose(self.out, perm=self.reverse_transpose_format)
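
As a hedged side note (not part of the original answer): even with the variable moved to build, its batch dimension is fixed at 128, so a final, smaller batch of an epoch would not fit it. The same 2x2, stride-2, "same"-padded max pooling can be written with low-level reshape/reduce ops and no tf.Variable at all, which sidesteps the lifting error and handles a dynamic batch size. A minimal sketch, assuming TF 2.x (the class name is hypothetical):

import tensorflow as tf

class BlockMaxPooling2D(tf.keras.layers.Layer):
    def call(self, inputs):
        # inputs: (batch, height, width, channels); batch may be dynamic.
        h = tf.shape(inputs)[1]
        w = tf.shape(inputs)[2]
        c = tf.shape(inputs)[3]
        # "same" padding for pool 2 / stride 2: pad odd dims up to even,
        # using the dtype minimum so padding never wins the max.
        inputs = tf.pad(inputs, [[0, 0], [0, h % 2], [0, w % 2], [0, 0]],
                        constant_values=inputs.dtype.min)
        h2 = tf.shape(inputs)[1] // 2
        w2 = tf.shape(inputs)[2] // 2
        # Split each spatial axis into (blocks, 2), then take the max over
        # each 2x2 block.
        x = tf.reshape(inputs, (-1, h2, 2, w2, 2, c))
        return tf.reduce_max(x, axis=[2, 4])

At run time, a (128, 28, 28, 64) input comes out as (128, 14, 14, 64), which is the result the loop-based layer above is aiming for.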
