Copy tensor using K.tile()

Question

I have a tensor of shape (None, 196), and after reshaping it becomes (None, 14, 14).
Now I want to replicate it along a channel axis so that the shape becomes (None, 14, 14, 512), and then along a timestep axis so that it becomes (None, 10, 14, 14, 512). I accomplish those steps with this code snippet:

def replicate(tensor, input_target):
    batch_size = K.shape(tensor)[0]
    nf, h, w, c = input_target
    x = K.reshape(tensor, [batch_size, 1, h, w, 1])

    # Replicate to channel dimension
    x = K.tile(x, [batch_size, 1, 1, 1, c])

    # Replicate to timesteps dimension
    x = K.tile(x, [batch_size, nf, 1, 1, 1])

    return x

x = ...
x = Lambda(replicate, arguments={'input_target': input_shape})(x)
another_x = Input(shape=input_shape)  # shape (10, 14, 14, 512)

x = layers.multiply([x, another_x])
x = ...

I plot the model and the output shape is just like I want it to be. But the problem arises during model training. I set the batch size to 2. This is the error message:

tensorflow.python.framework.errors_impl.InvalidArgumentError: Incompatible shapes: [8,10,14,14,512] vs. [2,10,14,14,512]
[[{{node multiply_1/mul}} = Mul[T=DT_FLOAT, _class=["loc:@training/Adam/gradients/multiply_1/mul_grad/Sum"], _device="/job:localhost/replica:0/task:0/device:GPU:0"](Lambda_2/Tile_1, _arg_another_x_0_0/_189)]]
[[{{node metrics/top_k_categorical_accuracy/Mean_1/_265}} = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_6346_metrics/top_k_categorical_accuracy/Mean_1", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]()]]

It looks like K.tile() increases the batch size from 2 to 8. When I set the batch size to 10, it becomes 1000.

So, my question is: how do I achieve the result I want? Is tile() a good way to do this, or should I use repeat_elements()? Thanks!

I am using Tensorflow 1.12.0 and Keras 2.2.4.

Answer 1

Score: 2

As a rule of thumb, try to avoid bringing the batch size into the transformations happening in the Lambda layer.

When you use the tile operation, you pass a multiple greater than 1 only for the dimensions that actually need to change (for example, you had the batch_size value in your tile multiples, which is wrong). Also, I am using tf.tile instead of K.tile (TF 1.12 doesn't seem to have tile in the Keras backend).
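To see why the original version inflates the batch axis: each entry of tf.tile's multiples argument multiplies the corresponding axis, and the batch axis (already of size batch_size after the reshape) was tiled by batch_size twice, giving batch_size cubed (2³ = 8 and 10³ = 1000, matching the error above). A minimal shape-only sketch, with the sizes hard-coded for illustration:

import tensorflow as tf

# Each entry of `multiples` scales the matching axis of the input.
x = tf.zeros([2, 1, 14, 14, 1])       # batch axis is 2

bad = tf.tile(x, [2, 1, 1, 1, 512])   # batch axis becomes 2 * 2 = 4
good = tf.tile(x, [1, 1, 1, 1, 512])  # batch axis stays 2

print(bad.shape)   # (4, 1, 14, 14, 512)
print(good.shape)  # (2, 1, 14, 14, 512)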

def replicate(tensor, input_target):
    _, nf, h, w, c = input_target
    x = K.reshape(tensor, [-1, 1, h, w, 1])

    # Replicate to channel dimension
    # You can combine the two tiles below into tf.tile(x, [1, nf, 1, 1, c]) as well
    x = tf.tile(x, [1, 1, 1, 1, c])
    # Replicate to timesteps dimension
    x = tf.tile(x, [1, nf, 1, 1, 1])

    return x

Simple example

input_shape = [None, 10, 14, 14, 512]
x = Input(shape=(196,))
x = Lambda(replicate, arguments={'input_target': input_shape})(x)
print(x.shape)

Which gives

>>> (?, 10, 14, 14, 512)
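Since the question also asks about repeat_elements(): because every axis being replicated here has size 1, K.repeat_elements gives the same result as tf.tile. A minimal sketch of that variant (the helper name replicate_repeat is just for illustration):

from keras import backend as K

def replicate_repeat(tensor, input_target):
    _, nf, h, w, c = input_target
    x = K.reshape(tensor, [-1, 1, h, w, 1])

    # Repeat the size-1 channel axis c times, then the size-1 timestep axis nf times
    x = K.repeat_elements(x, c, axis=4)
    x = K.repeat_elements(x, nf, axis=1)

    return x

Either way, the essential fix is the same: use -1 in the reshape (or a factor of 1 in the multiples) for the batch axis, and let the framework carry the batch size through.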

Source: https://go.coder-hub.com/59576450.html