Wrong shape output with conv1D layer

Question
I'm trying some experiments with a small autoencoder defined like this:
input_layer = Input(shape=input_shape)
x = Conv1D(8, kernel_size=5, activation='relu', padding='same')(input_layer)
x = BatchNormalization()(x)
encoded = Conv1D(4, kernel_size=5, activation='relu', padding='same')(x)
x = Conv1D(8, kernel_size=5, activation='relu', padding='same')(encoded)
x = BatchNormalization()(x)
decoded = Conv1D(WINDOW_SIZE, kernel_size=5, activation='relu', padding='same')(x)
with input_shape = 8192, autoencoder.summary() returns:
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 8192, 1)] 0
conv1d (Conv1D) (None, 8192, 8) 48
batch_normalization (BatchNormalization) (None, 8192, 8) 32
conv1d_1 (Conv1D) (None, 8192, 4) 164
conv1d_2 (Conv1D) (None, 8192, 8) 168
batch_normalization_1 (BatchNormalization) (None, 8192, 8) 32
conv1d_3 (Conv1D) (None, 8192, 8192) 335,872
=================================================================
Total params: 336,316
Trainable params: 336,284
Non-trainable params: 32
Basically, I want an autoencoder that takes a vector of 8192 values and tries to predict another vector of 8192 values.
But when I do an inference like that:
predictions = autoencoder.predict(batch_data, batch_size=BATCH_SIZE)
I have this shape:
>>> predictions[0].shape
(8192, 8192)
What am I doing wrong? What can I change to get a vector with (8192)?
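For context on the summary above: Conv1D's first argument is the number of output filters, so Conv1D(WINDOW_SIZE, ...) with WINDOW_SIZE = 8192 produces 8192 channels per timestep, which is where the (8192, 8192) prediction comes from. A quick arithmetic check (assuming kernel_size=5 over the 8 channels coming out of the preceding BatchNormalization) reproduces the parameter count reported for conv1d_3:

```python
# Each Conv1D filter has kernel_size * in_channels weights plus one bias.
kernel_size, in_channels = 5, 8   # preceding layer outputs 8 channels
filters = 8192                    # WINDOW_SIZE, mistakenly used as a filter count
params = filters * (kernel_size * in_channels + 1)
print(params)  # 335872, matching the summary() row for conv1d_3
```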
Answer 1
Score: 1
Maybe something like this?
import tensorflow as tf
input_layer = tf.keras.layers.Input(shape=(8192, 1))
x = tf.keras.layers.Conv1D(8, kernel_size=5, activation='relu', padding='same')(input_layer)
x = tf.keras.layers.BatchNormalization()(x)
encoded = tf.keras.layers.Conv1D(4, kernel_size=5, activation='relu', padding='same')(x)
x = tf.keras.layers.Conv1D(8, kernel_size=5, activation='relu', padding='same')(encoded)
x = tf.keras.layers.BatchNormalization()(x)
decoded = tf.keras.layers.Conv1D(1, kernel_size=5, activation='relu', padding='same')(x)
model = tf.keras.Model(input_layer, decoded)
model.summary()
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 8192, 1)] 0
conv1d_4 (Conv1D) (None, 8192, 8) 48
batch_normalization_2 (BatchNormalization) (None, 8192, 8) 32
conv1d_5 (Conv1D) (None, 8192, 4) 164
conv1d_6 (Conv1D) (None, 8192, 8) 168
batch_normalization_3 (BatchNormalization) (None, 8192, 8) 32
conv1d_7 (Conv1D) (None, 8192, 1) 41
=================================================================
Total params: 485
Trainable params: 453
Non-trainable params: 32
_________________________________________________________________
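With the 1-filter decoder above, predict() returns an array of shape (batch, 8192, 1), so each prediction still carries a trailing channel axis; dropping it yields the desired (8192,) vector. A minimal sketch, using a NumPy array as a stand-in for the predict() output:

```python
import numpy as np

# Stand-in for autoencoder.predict(batch_data): (batch, timesteps, channels)
predictions = np.zeros((4, 8192, 1), dtype=np.float32)

flat = predictions.squeeze(-1)  # drop the single channel axis
print(flat[0].shape)  # (8192,)
```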