Cannot reshape tensor with its real dimensions.

Question

I have a really complicated Transformer model, and I need to compute MRR (mean reciprocal rank) from scratch.

(I think the problem is with the dataset batches (1248 = 39*32), but I don't have enough expertise to solve it.)

So I wrote this code (the problem line is separated out):

    def _count_mrr(self, y_true: tf.Tensor, y_pred: tf.Tensor):
        y_true = tf.reshape(y_true, shape=(1, self.max_length - 1))
        y_pred = tf.reshape(y_pred, shape=(1, self.max_length - 1, self._data_controller._vocab_size))
        y_true = tf.cast(y_true, dtype=tf.dtypes.int32)
        y_true = tf.squeeze(y_true)
        y_pred = tf.squeeze(y_pred)

        y_true = tf.reshape(y_true, shape=(self.max_length - 1, 1))

        y_pred = tf.math.top_k(y_pred, self._data_controller._vocab_size).indices
        where_tensor = tf.equal(y_pred, y_true)
        where_tensor = tf.where(where_tensor)[:, 1]
        where_tensor = tf.cast(tf.add(where_tensor, 1), dtype=tf.dtypes.float64)
        where_tensor = tf.divide(tf.constant(np.ones(self.max_length - 1)), where_tensor)
        return tf.math.reduce_mean(where_tensor)

When I try to run this code, it fails with this error:

    File "d:\Development\transformer_chatbot\chatbot\transformer.py", line 211, in _count_mrr
      y_true = tf.reshape(y_true, shape=(self.max_length - 1, 1))
Node: 'Reshape_4'
Input to reshape is a tensor with 1248 values, but the requested shape has 39
	 [[{{node Reshape_4}}]] [Op:__inference_train_function_39181]

But if I try to run `y_true = tf.reshape(y_true, shape=(1248, 1))` instead, I get this:

    File "d:\Development\transformer_chatbot\chatbot\transformer.py", line 211, in _count_mrr  *
        y_true = tf.reshape(y_true, shape=(1248, 1))

    ValueError: Cannot reshape a tensor with 39 elements to shape [1248,1] (1248 elements) for 
    '{{node Reshape_4}} = Reshape[T=DT_INT32, Tshape=DT_INT32](Squeeze, Reshape_4/shape)' with
    input shapes: [39], [2] and with input tensors computed as partial shapes: input[1] = [1248,1].

[The full model, if needed](https://github.com/zer0deck/transformer_chatbot/blob/main/chatbot/transformer.py)

Answer 1

Score: 0

So, as I mentioned before, the problem was in working with `tf.data.Dataset` batches.

The data is loaded with three dimensions (like `(None, maxlen - 1, vocab_size)`), where `None` is the hidden batch size.

At first, TensorFlow runs two empty passes with batch size 1 to check that everything works, but the actual batch size is 32 for me.
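
To make that concrete, here is a minimal sketch (an illustration only, not code from the model; it assumes `max_length = 40`, so `max_length - 1 = 39`, and a batch size of 32, as implied by the error messages) of why a hard-coded reshape breaks once the batch dimension is dynamic, while `-1` adapts to it:

    import tensorflow as tf

    max_length = 40  # illustrative: 39 = max_length - 1, as in the question

    @tf.function(input_signature=[tf.TensorSpec(shape=(None, max_length - 1), dtype=tf.int32)])
    def hard_coded(y_true):
        # Only works for a single sequence: a real batch of 32 brings
        # 32 * 39 = 1248 elements, which cannot be reshaped into (39, 1).
        return tf.reshape(y_true, shape=(max_length - 1, 1))

    @tf.function(input_signature=[tf.TensorSpec(shape=(None, max_length - 1), dtype=tf.int32)])
    def dynamic(y_true):
        # -1 lets TensorFlow infer batch_size * (max_length - 1) at run time.
        return tf.reshape(y_true, shape=(-1, 1))

    batch = tf.zeros((32, max_length - 1), dtype=tf.int32)
    print(dynamic(batch).shape)  # (1248, 1)
    # hard_coded(batch)          # InvalidArgumentError: 1248 values, requested shape has 39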

I solved it with a simpler `tf.reshape()` combination: I split all the batches into single units, like this:

    # start has shape (batch, seq, features) = (2, 2, 3); during training the
    # batch dimension shows up as None (dynamic).
    start = tf.constant([
        [[1, 2, 3], [1, 2, 3]],
        [[1, 2, 3], [1, 2, 3]],
    ])
    end = tf.reshape(start, shape=(-1, 3))
    # end now has shape (4, 3):
    # [[1, 2, 3],
    #  [1, 2, 3],
    #  [1, 2, 3],
    #  [1, 2, 3]]

and then work with it.

The final fix:

    def _count_mrr(self, y_true: tf.Tensor, y_pred: tf.Tensor):
        # Flatten the batch and time dimensions together instead of hard-coding them.
        y_true = tf.reshape(y_true, shape=(-1, 1))
        y_pred = tf.reshape(y_pred, shape=(-1, self._data_controller._vocab_size))
        # Sort the whole vocabulary by score; the column where the true id shows up
        # is its zero-based rank.
        y_pred = tf.math.top_k(y_pred, k=self._data_controller._vocab_size).indices
        y_true = tf.cast(y_true, dtype=tf.dtypes.int32)
        where_tensor = tf.equal(y_true, y_pred)
        where_tensor = tf.where(where_tensor)[:, 1]
        # One-based ranks -> reciprocal ranks -> mean reciprocal rank.
        where_tensor = tf.cast(tf.add(where_tensor, 1), dtype=tf.dtypes.float64)
        where_tensor = tf.divide(
            tf.constant(np.ones(where_tensor._shape_as_list()[0])),
            where_tensor)
        return tf.math.reduce_mean(where_tensor)
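
As a small aside (my own suggestion, not part of the original fix): `_shape_as_list()` is a private `Tensor` attribute and relies on the static shape being known, whereas the same reciprocal can be taken in a shape-agnostic way:

    # Drop-in replacement for the np.ones(...) / tf.divide(...) lines above:
    # tf.math.reciprocal is element-wise and never needs the static length.
    where_tensor = tf.math.reciprocal(where_tensor)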

Note that this solution is not perfect and still requires a tremendous amount of memory for the intermediate tensors (the `top_k` call materialises a sorted index tensor over the entire vocabulary for every position).
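
If that memory use ever becomes a blocker, a possible lighter-weight variant (a sketch only; `mrr_from_logits` is just an illustrative name, and it matches the ranks above up to tie handling) skips the full-vocabulary sort and instead counts how many entries score strictly above the true token:

    import tensorflow as tf

    def mrr_from_logits(y_true, y_pred):
        """MRR without materialising the full top-k index tensor.

        y_true: (batch, seq_len) integer token ids
        y_pred: (batch, seq_len, vocab_size) scores or logits
        """
        y_true = tf.cast(tf.reshape(y_true, (-1,)), tf.int32)      # (N,)
        y_pred = tf.reshape(y_pred, (-1, tf.shape(y_pred)[-1]))    # (N, vocab)
        true_scores = tf.gather(y_pred, y_true, batch_dims=1)      # (N,)
        # One-based rank = 1 + number of entries scored strictly above the true token.
        ranks = 1 + tf.reduce_sum(
            tf.cast(y_pred > true_scores[:, None], tf.int32), axis=-1)
        return tf.reduce_mean(1.0 / tf.cast(ranks, tf.float64))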

I hope that my experience will help other people with the same problem.
