Cannot understand why this Haskell code works so fast

Question

In my attempt to learn Haskell,
I have written the following piece of code to solve a classic optimization problem.
The problem at hand is to compute the sales-maximizing prices, where the price is monotonically increasing,
given a sequence of i buyers, each of which will buy at a maximum price of v_i.
In mathematical terms:
given [v_i], find [p_i] s.t. p_{i+1} >= p_i that maximises \sum_i q(v_i,p_i),
where q(a,b) = 0 if b > a, and q(a,b) = b if b <= a.
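To make the objective concrete, the scoring function can be written out directly; this is my own illustration, not code from the question, and the names `q` and `revenue` are mine:

```haskell
-- q a b: revenue from a buyer willing to pay at most a when offered
-- price b; revenue pairs a buyer sequence with a price sequence.
-- (Illustration only, not part of the original question.)
q :: Int -> Int -> Int
q a b = if b > a then 0 else b

revenue :: [Int] -> [Int] -> Int
revenue vs ps = sum (zipWith q vs ps)
```

For buyers [1,7,3], the monotone prices [1,7,7] give revenue 1 + 7 + 0 = 8.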

I have implemented the following code, solving the problem using what I think is a top-down dynamic programming approach.
The algorithm decides at each step whether it will increase the price, by maximising over the entire remaining sequence.

maxP :: Int -> [Int] -> (Int, Int)
maxP p [] = (0,p)
maxP p q
   | a<p = (fst (maxP p (tail q)),p)
   | a==p = (p+fst (maxP p (tail q)),p)
   | otherwise = (maximum l,p+argmax l)
     where a=head q
           pc=[p..a]
           l=zipWith (+) pc $ map (fst) ( map ((flip maxP) (tail q)) pc )

The code is, as expected when using Haskell, an almost 1-1 implementation of a DP algorithm.
The code returns the (Sales Sum, Price Level) pair.

And, in order to obtain the whole price sequence, a function is called for all [v_i]:

maxPI :: Int -> [Int] -> [Int] -> [Int]
maxPI a [] b = reverse b
maxPI a c b = maxPI q (tail c) (q:b)
     where (p,q) = maxP a c

I have also implemented helper functions:

argmax :: [Int] -> Int
argmax x = mP x (-1) 0 0

mP :: [Int] -> Int -> Int -> Int -> Int
mP [] maxi pos cpos = pos
mP a maxi pos cpos
     | ((head a) > maxi) = mP (tail a) (head a) cpos (cpos+1)
     | otherwise = mP (tail a) maxi pos (cpos+1)
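As a sanity check, `argmax` as written returns the index of the earliest maximum, which agrees with a library-based definition such as the following (my own sketch; `argmax'` is a made-up name):

```haskell
import Data.List (elemIndex)
import Data.Maybe (fromJust)

-- Index of the first occurrence of the maximum; elemIndex finds the
-- earliest match, mirroring the strict (>) comparison in mP above.
-- (Cross-check only, not part of the original question.)
argmax' :: [Int] -> Int
argmax' xs = fromJust (elemIndex (maximum xs) xs)
```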

Obviously, the function could (should) be optimized to use only one pass of the algorithm over the list.

But my point is that, even without the aforementioned optimization, the algorithm runs surprisingly fast.
So my question is the following:
why does this algorithm run so fast?

Am I simply misunderstanding the complexity of the DP algorithm?
Does Haskell employ memoization of the function maxP by default?

Furthermore, I dislike the Haskell-ness of my code. Could you please make any suggestions?

I was expecting much slower performance.


Answer 1

Score: 7


I don't know why your intuition about how fast it should be is wrong. But I'll answer the concrete questions you have that don't require me to live inside your head:


> Does Haskell employ memoization of the function maxP by default?

Haskell, the language, has no opinion on whether `maxP` should be memoized or not by language implementations. GHC, the most popular implementation, will not memoize `maxP` as it is written here.
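To illustrate the distinction (this example is mine, not part of the answer): memoization in Haskell comes from sharing a lazily built structure, not from the compiler caching function calls. Compare the classic Fibonacci pair:

```haskell
-- Recomputed: every call rebuilds the whole recursion tree,
-- so the running time is exponential in n.
fibNaive :: Int -> Integer
fibNaive n
  | n < 2     = fromIntegral n
  | otherwise = fibNaive (n - 1) + fibNaive (n - 2)

-- Shared: fibs is a single lazy list, so each entry is evaluated
-- at most once, and later calls reuse earlier results.
fibs :: [Integer]
fibs = map fib [0 ..]
  where
    fib n
      | n < 2     = fromIntegral n
      | otherwise = fibs !! (n - 1) + fibs !! (n - 2)

fibMemo :: Int -> Integer
fibMemo n = fibs !! n
```

Nothing like the second form happens automatically for `maxP`; the sharing has to be set up explicitly, as the dynamic-programming rewrite at the end of this answer does.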

> I dislike the Haskell-ness of my code. Could you please make any suggestions?

I have a few suggestions. The most obvious one is to use pattern-matching instead of `head` and `tail`. Like this:

maxP :: Int -> [Int] -> (Int, Int)
maxP p [] = (0,p)
maxP p (a:as)
   | a<p = (fst (maxP p as),p)
   | a==p = (p+fst (maxP p as),p)
   | otherwise = (maximum l,p+argmax l)
     where pc=[p..a]
           l=zipWith (+) pc $ map (fst) ( map ((flip maxP) as) pc )

maxPI :: Int -> [Int] -> [Int] -> [Int]
maxPI a [] b = reverse b
maxPI a c@(_:ct) b = maxPI q ct (q:b)
     where (p,q) = maxP a c

mP :: [Int] -> Int -> Int -> Int -> Int
mP [] maxi pos cpos = pos
mP (a:as) maxi pos cpos
     | (a > maxi) = mP as a cpos (cpos+1)
     | otherwise = mP as maxi pos (cpos+1)


You have quite a few extraneous parentheses. Sometimes that can be useful for readability, but they're not really doing it for me in this situation.

-           l=zipWith (+) pc $ map (fst)  ( map ((flip maxP) as) pc )
+           l=zipWith (+) pc $ map fst (map (flip maxP as) pc)

-     | (a> maxi) = mP as a cpos (cpos+1)
+     | a>maxi = mP as a cpos (cpos+1)

In `maxP`, you can use `compare` to compute all three guards at once.

maxP :: Int -> [Int] -> (Int, Int)
maxP p [] = (0,p)
maxP p (a:as) = case compare a p of
   LT -> (fst (maxP p as),p)
   EQ -> (p+fst (maxP p as),p)
   GT -> (maximum l,p+argmax l)
     where pc=[p..a]
           l=zipWith (+) pc $ map fst (map (flip maxP as) pc)


Your computation of `l` can be done more clearly with a single `map`, or via list comprehension.

-     where pc=[p..a]
-           l=zipWith (+) pc $ map fst (map (flip maxP as) pc)
+     where l=map (\p' -> p'+fst (maxP p' as)) [p..a]

-- OR

+     where l=[p'+fst (maxP p' as) | p' <- [p..a]]


You could consider computing the index of the maximum value and the maximum value itself in the same traversal, and reuse existing library functions, like this:

   GT -> (maxv,p+maxi)
     where l=[p'+fst (maxP p' as) | p' <- [p..a]]
           (maxv, maxi) = maximum (zip l [0..])


This `argmax` will return the latest maximum, unlike your solution, which returns the earliest maximum. I'm not sure whether that matters. If it does, you could use [`Arg`](https://hackage.haskell.org/package/base-4.17.0.0/docs/Data-Semigroup.html#t:Arg) to avoid using the index in the comparison or [`Down`](https://hackage.haskell.org/package/base-4.17.0.0/docs/Data-Ord.html#t:Down) to use it "the other way".
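For instance, an `Arg`-based argmax might look like this (a sketch of my own; `argmaxArg` is a made-up name, and `Arg`'s `Ord` instance compares only on its first component):

```haskell
import Data.Semigroup (Arg (..))

-- Returns (maximum value, its index). Because Arg's Ord instance
-- ignores the second component, the index rides along without
-- affecting which element wins the comparison.
argmaxArg :: [Int] -> (Int, Int)
argmaxArg xs =
  case maximum [Arg x i | (x, i) <- zip xs [0 ..]] of
    Arg v i -> (v, i)
```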

You mention `fst (maxP _ as)` a couple times. Might be worth doing a DRY thing there.

maxP :: Int -> [Int] -> (Int, Int)
maxP p [] = (0,p)
maxP p (a:as) = case compare a p of
   LT -> (go p,p)
   EQ -> (p+go p,p)
   GT -> (maxv,p+maxi)
     where l=[p'+go p' | p' <- [p..a]]
           (maxv, maxi) = maximum (zip l [0..])
           go p' = fst (maxP p' as)


Actually, reading more carefully, it occurs to me that the whole `p+maxi` computation you're doing is just to recover the `p'` value you already have in hand! So, better:

   GT -> maximum [(p'+go p', p') | p' <- [p..a]]


I keep twisting myself in knots over this last bit. Is that the same as `maximum [maxP p' as | p' <- [p..a]]`? Anyway, at this point, it's clear now that the `EQ` and `GT` cases are actually doing the same thing. So let's merge them. We'll move back to guards now, actually, hah!

maxP :: Int -> [Int] -> (Int, Int)
maxP p [] = (0,p)
maxP p (a:as)
   | a<p = (go p,p)
   | otherwise = maximum [(p'+go p', p') | p' <- [p..a]]
   where go p' = fst (maxP p' as)


In `maxPI`, there's no real reason to build up the list backwards and `reverse` when you can just build it forwards to begin with.

maxPI :: Int -> [Int] -> [Int]
maxPI a [] = []
maxPI a c@(_:ct) = q:maxPI q ct
   where (p,q) = maxP a c


This can probably be done as a scan, though I'm a bit less confident of this transformation.

maxPI :: Int -> [Int] -> [Int]
maxPI a cs = scanl (\a' cs' -> snd (maxP a' cs')) a (tails cs)


I'm not sure I love that change, but it's one to be aware of. All told, that leaves us with this code:

maxP :: Int -> [Int] -> (Int, Int)
maxP p [] = (0,p)
maxP p (a:as)
   | a<p = (go p,p)
   | otherwise = maximum [(p'+go p', p') | p' <- [p..a]]
   where go p' = fst (maxP p' as)

maxPI :: Int -> [Int] -> [Int]
maxPI a cs = scanl (\a' cs' -> snd (maxP a' cs')) a (tails cs)


This feels like pretty idiomatic Haskell to me. If you want to go up from here, you need to start thinking about algorithmic changes, not style changes.

Here&#39;s what it might look like to tweak this algorithm so that it shares computations appropriately, i.e. is a dynamic programming solution. First we define a type for tracking the info we&#39;re interested in, namely, a minimal price the current solution applies for, the payout we can get assuming we always pay more than the minimum, and the actual prices we should offer to get that payout.

import Data.List

data Path = Path
{ payout :: Int
, minPrice :: Int
, prices :: [Int]
} deriving (Eq, Ord, Read, Show)


We&#39;ll name some simple operations on these table entries. The first is to extend the current table entry under the assumption that we demand the minimum price from the current customer, given the maximum price the current customer is willing to pay.

demandMin :: Int -> Path -> Path
demandMin maxPrice path = path
  { payout = payout path + if curPrice <= maxPrice then curPrice else 0
  , prices = curPrice : prices path
  } where
    curPrice = minPrice path


The second operation expresses our preference on payouts. It takes two entries in our table and picks the one with a better payout. If we were being robust, we'd also take the smaller minimum price, but we're going to arrange that the first argument always has the smaller minimum, so we can cheat and always take that.

maxPayout :: Path -> Path -> Path
maxPayout p p' = if payout p >= payout p' then p else p' { minPrice = minPrice p }


With these operations in place, we can write our table-update operator. Each column of our table has an entry for each possible minimum price, and we will assume the incoming column has them in order of lowest minimum price to highest. Given that, we can fill in the next column to the left by, for each row, taking the better of demanding the current minimum price or whatever excellent plan the row below came up with. Like this:

maxPayouts :: Int -> [Path] -> [Path]
maxPayouts maxPrice = scanr1 maxPayout . map (demandMin maxPrice)


Now, to run the algorithm, we can just initialize our rightmost column, then iteratively fill in columns to the left, finally taking the top-left element of the table as our answer. We have to set up the assumed invariant that rows come in sorted order, but otherwise there is almost no code to write here. So:

top :: [Int] -> Path
top prices = head $ foldr maxPayouts [Path 0 price [] | price <- sort prices] prices


Try it in ghci:

> top [1,2]
Path {payout = 3, minPrice = 1, prices = [1,2]}
> top [1,3]
Path {payout = 4, minPrice = 1, prices = [1,3]}
> top [2,1]
Path {payout = 2, minPrice = 1, prices = [1,1]}
> top [3,1]
Path {payout = 3, minPrice = 1, prices = [3,3]}
> top [1,5,3]
Path {payout = 7, minPrice = 1, prices = [1,3,3]}
> top [1,7,3]
Path {payout = 8, minPrice = 1, prices = [1,7,7]}


(Generally you will not care about the `minPrice` field, but it's easier to just return it than to make a fresh data type that doesn't have it to return.)

It scales well; for example, `top [5,10..1000]` returns essentially instantly for me even without compiling or optimizing. Theoretically it should scale approximately as O(n^2), with n the length of the input list, although I didn't attempt to verify this empirically.
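One way to check a rewrite like this on small inputs is a brute-force reference that enumerates every nondecreasing price sequence over the candidate values (my own sketch, not part of the answer; `bruteForce` is a hypothetical name, and it relies on the standard observation that an optimal price can always be chosen among the buyer values themselves):

```haskell
import Data.List (nub, sort)

-- Exhaustive reference solution: try every nondecreasing assignment
-- of candidate prices and keep the best total revenue. Exponential,
-- so only usable on short inputs.
bruteForce :: [Int] -> Int
bruteForce vs = maximum (map score (monotone (length vs)))
  where
    cands = nub (sort vs)
    -- All nondecreasing sequences of the given length over cands.
    monotone :: Int -> [[Int]]
    monotone 0 = [[]]
    monotone n =
      [ p : ps
      | p <- cands
      , ps <- monotone (n - 1)
      , all (p <=) (take 1 ps)
      ]
    score ps = sum (zipWith (\v p -> if p > v then 0 else p) vs ps)
```

On the examples from the ghci session above it agrees with `top`: for instance `bruteForce [1,7,3]` is 8 and `bruteForce [1,5,3]` is 7.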

Posted by huangapple on 2023-02-09. Original link: https://go.coder-hub.com/75389250.html