C++ Eigen: Why does summing a chipped and sliced tensor give a different (wrong) result than using nested loops?

Question

I have the following example:

#include <unsupported/Eigen/CXX11/Tensor>
#include <Eigen/Core>
#include <iostream>

// sum a (materialized) tensor by looping over its flat data buffer
template <typename TensorType>
auto tensor_sum(const TensorType& tensor) -> typename TensorType::Scalar {
  using T = typename TensorType::Scalar;
  T sum = 0; // provisional
  for (int i = 0; i < tensor.size(); ++i) {
    sum += tensor.data()[i];
  }
  return sum;
}

int main() {
  Eigen::Tensor<double, 5> test_tensor(3, 3, 2, 1, 1);
  test_tensor.setValues({
    {{{{1.1}}, {{1.1}}}, {{{0}}, {{0}}}, {{{0}}, {{0}}}},
    {{{{0}},   {{0}}},   {{{1}}, {{1}}}, {{{0}}, {{0}}}},
    {{{{0}},   {{0}}},   {{{0}}, {{0}}}, {{{1}}, {{1}}}}
  });

  // use chip and slice to compute the subtensor sum
  Eigen::Tensor<double, 3> field_slice;
  for (int l = 0; l < 3; ++l) {
    for (int m = 0; m < 3; ++m) {
      auto field_slice_ = test_tensor.chip(1, m).chip(0, l);
      field_slice = field_slice_.slice(
          Eigen::array<Eigen::Index, 3>({0, 0, 0}),
          Eigen::array<Eigen::Index, 3>({2, 1, 1}));
      std::cout << "sum slice " << l << " " << m << " "
                << tensor_sum(field_slice) << std::endl;
    }
  }

  // use nested loops to compute the subtensor sum
  double sum;
  for (int l = 0; l < 3; ++l) {
    for (int m = 0; m < 3; ++m) {
      sum = 0;
      for (int i = 0; i < 2; ++i) {
        for (int j = 0; j < 1; ++j) {
          for (int k = 0; k < 1; ++k) {
            sum += test_tensor(l, m, i, j, k);
          }
        }
      }
      std::cout << "sum nested loops " << l << " " << m << " " << sum << std::endl;
    }
  }
}

Which prints out

sum slice 0 0 0
sum slice 0 1 0
sum slice 0 2 1.1
sum slice 1 0 1
sum slice 1 1 1
sum slice 1 2 1.1
sum slice 2 0 1
sum slice 2 1 1
sum slice 2 2 1.1
sum nested loops 0 0 2.2
sum nested loops 0 1 0
sum nested loops 0 2 0
sum nested loops 1 0 0
sum nested loops 1 1 2
sum nested loops 1 2 0
sum nested loops 2 0 0
sum nested loops 2 1 0
sum nested loops 2 2 2

Why are the results different? I suspect that the chip or slice operation isn't working the way it's supposed to. Separating the two steps by first storing the chipped tensor and then creating the slice also didn't change the result. How can I compute the sum of my subtensor without using nested loops?

Answer 1

Score: 1

You are indexing the wrong way here:

auto field_slice_ = test_tensor.chip(1, m).chip(0, l);

Please read the documentation:
Eigen-unsupported: Eigen Tensors

> ## &lt;Operation&gt; chip(const Index offset, const Index dim)
>
> A chip is a special kind of slice. It is the subtensor at the given offset in the dimension dim. The returned tensor has one fewer dimension than the input tensor: the dimension dim is removed.
>
> For example, a matrix chip would be either a row or a column of the input matrix.

So basically you passed the arguments in the wrong order: chip(1, m) takes offset 1 along dimension m, while you wanted offset m along dimension 1.
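
For intuition, here is a minimal standalone sketch (not from the question's code) of the argument order on a plain 2-D tensor; the names mat, row0 and col1 are just illustrative:

#include <unsupported/Eigen/CXX11/Tensor>
#include <iostream>

int main() {
  Eigen::Tensor<double, 2> mat(2, 3);
  mat.setValues({{1, 2, 3},
                 {4, 5, 6}});

  // chip(offset, dim): the offset comes first, the dimension to remove second
  Eigen::Tensor<double, 1> row0 = mat.chip(0, 0);  // row 0    -> [1, 2, 3]
  Eigen::Tensor<double, 1> col1 = mat.chip(1, 1);  // column 1 -> [2, 5]

  std::cout << row0 << "\n\n" << col1 << std::endl;
}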

Fixed version:

auto field_slice_ = test_tensor.chip(m, 1).chip(l, 0);

or, since this particular test tensor is non-zero only on its diagonal (l == m), the following happens to print the same output, even though it swaps the roles of l and m and actually selects test_tensor(m, l, ...):

auto field_slice_ = test_tensor.chip(m, 0).chip(l, 0);
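
Once the arguments are fixed, you can also drop the hand-written tensor_sum helper and let Eigen do the reduction. A minimal sketch, assuming the test_tensor from the question and Eigen's sum() reduction (which produces a rank-0 tensor holding the scalar):

for (int l = 0; l < 3; ++l) {
  for (int m = 0; m < 3; ++m) {
    // chip(m, 1) then chip(l, 0) selects test_tensor(l, m, :, :, :);
    // sum() then reduces over all remaining dimensions.
    Eigen::Tensor<double, 0> s =
        test_tensor.chip(m, 1).chip(l, 0)
                   .slice(Eigen::array<Eigen::Index, 3>({0, 0, 0}),
                          Eigen::array<Eigen::Index, 3>({2, 1, 1}))
                   .sum();
    std::cout << "sum " << l << " " << m << " " << s() << std::endl;
  }
}

The slice is kept only to mirror the original code; it covers the full 2x1x1 extent of the chipped tensor, so it could be dropped here.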
