# Workaround for no const generics in expressions

## Question

I'm struggling to work around the limitation that, in Rust, you cannot use a const generic parameter in an expression that specifies a type. I am still learning Rust and am trying to make a matrix type where I calculate the determinant using Laplace expansion, like this:
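For reference, expanding along the first row means `det(A) = sum over c of (-1)^c * A[0][c] * det(M_c)`, where `M_c` is the `(S - 1) x (S - 1)` minor obtained by deleting row `0` and column `c` -- which is exactly what the code below mirrors.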

```rust
#[derive(Clone, Eq, Debug)]
pub struct Matrix<T: Copy + Eq, const M: usize, const N: usize> {
    pub data: [[T; N]; M],
}

impl<T: Copy + Eq + ops::Mul + std::iter::Sum, const S: usize> Matrix<T, S, S>
where
    i32: Into<T>,
{
    pub fn det(&self) -> T {
        if S == 1 {
            self.data[0][0]
        } else {
            (0..S)
                .map(|c| {
                    Into::<T>::into(if c % 2 == 0 { 1 } else { -1 })
                        * self.data[0][c]
                        * Matrix::<T, { S - 1 }, { S - 1 }> {
                            data: core::array::from_fn(|i| {
                                core::array::from_fn(|j| {
                                    self.data[i + 1][if j < c { j } else { j + 1 }]
                                })
                            }),
                        }
                        .det()
                })
                .sum()
        }
    }
}
```

However, `Matrix::<T, { S - 1 }, { S - 1 }>` is not valid Rust! How would I work around this to get my determinant function to work the way I want?

## Answer 1 (score: 2)

#### As is, I don't think your code can be compiled in Rust.

The main problem you are facing is the recursive structure of your trait bounds:

 1. To compute the determinant at size `S`, you need the determinant at size `S - 1`, which requires that `S - 1` be strictly positive, ie `[(); S - 1]: Sized`.
 2. However, computing the determinant at size `S - 1` will in turn require the determinant at size `S - 2`, which requires that `S - 2` be strictly positive, etc...

Since the Rust compiler only reasons _syntactically_ about bounds -- ie, `S - 1 - 1` is different from `S - 2` as far as it's concerned -- this leads to requiring an infinite number of bounds.
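Purely for illustration, here is roughly what the bound-based attempt looks like on nightly, and why it cannot be finished; the body is elided since no finite set of bounds makes the recursive call well-formed:

```rust
// Sketch of the bound-based attempt (nightly); this does NOT lead to a working `det`.
#![feature(generic_const_exprs)]
#![allow(incomplete_features)]

pub struct Matrix<T: Copy, const M: usize, const N: usize> {
    pub data: [[T; N]; M],
}

impl<T: Copy, const S: usize> Matrix<T, S, S> {
    pub fn det(&self) -> T
    where
        [(); S - 1]: Sized, // makes `Matrix<T, { S - 1 }, { S - 1 }>` nameable at all
    {
        // The recursive call would need `[(); (S - 1) - 1]: Sized`, which in turn needs
        // `[(); ((S - 1) - 1) - 1]: Sized`, and so on: the compiler treats each of these
        // syntactically distinct expressions as a separate bound that must be written out.
        todo!()
    }
}
```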

#### Traits?

A common mistake with generics in Rust is trying to make them work through structs rather than traits. In this case, however, I fail to see how traits could be applied without running into the same infinite-recursion issue.

#### Specialization?

That's a whole other can of worms...

### Pivoting

_Disclaimer: untested code._

From a pragmatic point of view, I'd recommend abandoning the idea of creating a compile-time sized matrix inside the code.

You're creating matrices of size `S - 1`, `S - 2`, ... The sum of squares from 1 to `n - 1` is `n * (n - 1) * (2n - 1) / 6`, so that's how much scratch space you'll need in total. You can simply reserve an array of that size<sup>1</sup> once, then _feed it_ to helper code as scratch space.
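For instance, with `S = 4` the recursion needs scratch buffers of `3*3 + 2*2 + 1*1 = 14` elements in total, and the slightly looser bound used in the code below, `S * S * S / 3 = 21` (integer division), comfortably covers that.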

Roughly speaking, I am imagining something like (**nightly required**):

```rust
#![feature(generic_const_exprs)]
#![feature(slice_flatten)]
#![allow(incomplete_features)]

use core::{iter::Sum, ops::Mul};

#[derive(Clone, Debug)]
pub struct Matrix<T: Copy, const M: usize, const N: usize> {
    pub data: [[T; N]; M],
}

impl<T: Copy + Mul<Output = T> + Sum, const S: usize> Matrix<T, S, S>
where
    T: From<i32>,
    [(); S * S * S / 3]: Sized,
{
    pub fn det(&self) -> T {
        let mut buffer = [T::from(0); S * S * S / 3];

        let matrix_ref = SquareMatrixRef {
            size: S,
            data: self.data[..].flatten(),
        };

        matrix_ref.det_in(&mut buffer[..])
    }
}

struct SquareMatrixRef<'a, T: Copy> {
    size: usize,
    data: &'a [T],
}

impl<'a, T: Copy + Mul<Output = T> + Sum> SquareMatrixRef<'a, T>
where
    T: From<i32>,
{
    fn det_in(&self, buffer: &mut [T]) -> T {
        assert_ne!(0, self.size);
    
        if self.size == 1 {
            return self.data[0];
        }

        (0..self.size)
            .map(|c| {
                let sign = T::from(if c % 2 == 0 { 1 } else { -1 });
                let pivot = self.data[c];

                let new_size = self.size - 1;

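                // Copy the minor (drop row 0 and column c of the current matrix)
                // into the scratch buffer, row-major with width `new_size`.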
                for i in 0..new_size {
                    for j in 0..new_size {
                        let new_j = if j < c { j } else { j + 1 };
                    
                        buffer[i * new_size + j] = self.data[(i + 1) * self.size + new_j];
                    }
                }

                let (data, buffer) = buffer.split_at_mut(new_size * new_size);

                let sub_matrix = SquareMatrixRef {
                    size: new_size,
                    data
                };

                sign * pivot * sub_matrix.det_in(buffer)
            })
            .sum()
    }
}
```

<sup>1</sup> Though, given how quickly it grows, maybe you'll need a `Vec` to avoid a stack overflow. Then again, the calculation will take forever on large `S` anyway.
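Along the lines of that footnote, a heap-allocated scratch buffer is a near drop-in change. A sketch of such a variant (replacing the `impl` block above; same size arithmetic, but done at run time, so the `[(); S * S * S / 3]: Sized` bound is no longer needed) might look like:

```rust
// Hypothetical variant: same scratch-space size, but the buffer lives on the heap,
// so no const-generic expression (and no `Sized` bound) is needed for `det`.
impl<T: Copy + Mul<Output = T> + Sum, const S: usize> Matrix<T, S, S>
where
    T: From<i32>,
{
    pub fn det(&self) -> T {
        // The length is an ordinary run-time expression; `T: Copy` implies `Clone`,
        // which the repeat form of `vec!` requires.
        let mut buffer = vec![T::from(0); S * S * S / 3];

        let matrix_ref = SquareMatrixRef {
            size: S,
            data: self.data[..].flatten(),
        };

        matrix_ref.det_in(&mut buffer)
    }
}
```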

### Compact

_Disclaimer: untested code._

There's something to be said for a more compact solution.

In the end, there's not much advantage in creating a fresh sub-matrix and using it only once (to create yet more sub-matrices). It costs quite a bit to build, and that cost is never recouped since the sub-matrix is barely used.

What if, instead, we created a view over the original matrix? The rows are easily handled, though the columns are a tad more complicated, so we'll have a stack usage that grows with the square of S... which is still better than growing with its cube.
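Concretely, if a 3x3 view into a larger matrix has already dropped original column `1`, its column map would be `[0, 2, 3]`: view index `(i, 2)` then reads column `3` of underlying row `i`, and taking a further sub-view only means building a shorter map -- no matrix elements are ever copied.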

```rust
use core::{array, iter::Sum, ops::{Index, Mul}};

#[derive(Clone, Debug)]
pub struct Matrix<T: Copy, const M: usize, const N: usize> {
    pub data: [[T; N]; M],
}

impl<T: Copy + Mul<Output = T> + Sum, const S: usize> Matrix<T, S, S>
where
    T: From<i32>,
{
    pub fn det(&self) -> T {
        let columns: [usize; S] = array::from_fn(|i| i);

        let view = SquareMatrixView {
            size: S,
            columns: &columns,
            data: &self.data,
        };

        view.det()
    }
}

struct SquareMatrixView<'a, T, const S: usize> {
    size: usize,
    columns: &'a [usize; S],
    data: &'a [[T; S]],
}

impl<'a, T, const S: usize> Index<(usize, usize)> for SquareMatrixView<'a, T, S> {
    type Output = T;

    fn index(&self, index: (usize, usize)) -> &Self::Output {
        &self.data[index.0][self.columns[index.1]]
    }
}

impl<'a, T, const S: usize> SquareMatrixView<'a, T, S>
where
    T: Copy + From<i32> + Mul<Output = T> + Sum,
{
    fn det(&self) -> T {
        assert_ne!(0, self.size);

        if self.size == 1 {
            return self[(0, 0)];
        }
        
        // Column map for the sub-views: start with view column 0 excluded, i.e.
        // sub-view column j maps to this view's column j + 1.
        let mut columns = [0; S];

        for i in 0..self.size - 1 {
            columns[i] = self.columns[i + 1];
        }

        (0..self.size)
            .map(|c| {
                let sign = T::from(if c % 2 == 0 { 1 } else { -1 });
                let pivot = self[(0, c)];

                let view = SquareMatrixView {
                    size: self.size - 1,
                    columns: &columns,
                    data: &self.data[1..],
                };

                let result = sign * pivot * view.det();

                // Re-admit this view's column `c` so that the next iteration
                // excludes column `c + 1` instead.
                columns[c] = self.columns[c];

                result
            })
            .sum()
    }
}
```

This is likely quite a bit faster than creating new matrices again and again, as the number of (immediate) writes per call to `det` is linear in the size of the sub-matrix, rather than quadratic.

The number of reads is unchanged -- the overall algorithm being unchanged -- but since writes are more costly than reads, it should be a performance win at least.
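Assuming the view-based sketch above compiles as written (it doesn't appear to need any nightly features), a quick smoke test against a hand-picked 3x3 matrix with a hand-computed determinant might look like this:

```rust
fn main() {
    // Cofactor expansion by hand:
    // det = 2*(3*2 - 1*1) - 1*(1*2 - 1*0) + 0*(1*1 - 3*0) = 10 - 2 + 0 = 8
    let m = Matrix { data: [[2, 1, 0], [1, 3, 1], [0, 1, 2]] };

    assert_eq!(m.det(), 8);
    println!("det = {}", m.det());
}
```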
