Strictness and how to tell GHC/GHCi to just store a value in a variable once and for all

Question

Apologies if this is a common question; I couldn't find anything similar, but I may just be too inexperienced to know the proper vocabulary.

Here's an example of the fundamental problem in GHCi:

-- foo is something relatively expensive to compute, but it's lazy so it's instantaneous
*Main> foo = (foldl (++) [] (take 5000 $ repeat [10, 123, 323, 33, 11, 345, 23, 33, 23, 11, 987]))
(0.00 secs, 62,800 bytes)
-- bar is a result that uses foo, but it's also lazy so it's not computed yet
*Main> bar = last foo
(0.00 secs, 62,800 bytes)
-- Now let's use bar
*Main> bar
987
(1.82 secs, 11,343,660,560 bytes)
-- It took a couple seconds to compute everything, but that's fine. Now let's use it again:
*Main> bar
987
(1.88 secs, 11,343,660,560 bytes)
-- It took 1.88 seconds to presumably recompute everything whereas it
-- could have been instantaneous if GHCi had remembered the value.

I looked into strictness, but nothing I do seems to help. As I understand it, evaluating an argument strictly should cause all computation to be done immediately, yet neither seq nor $! seems to enable this behavior. For instance:

-- expectation: Perhaps evaluating bar will cause 'last foo' to be run systematically,
-- but foo should be evaluated strictly, which should take most of the computation time 
*Main> bar = last $! foo
(0.00 secs, 62,800 bytes)
-- it's actually instantaneous. Sure enough, bar takes time to compute.
*Main> bar
987
(1.88 secs, 11,343,660,632 bytes)
-- what if we use seq? As I understand it, it returns its second argument
-- with the constraint that its first argument must be strict. We want
-- 'bar = last foo' with foo being strict, therefore:
*Main> bar = foo `seq` (last foo)
(0.00 secs, 62,800 bytes)
*Main> bar
987
(1.93 secs, 11,344,585,344 bytes)
-- No dice. What if we evaluate bar strictly and assign that value to a variable 'baz'?
*Main> bar = last foo
(0.00 secs, 62,800 bytes)
*Main> baz = bar `seq` bar
(0.00 secs, 62,800 bytes)
*Main> baz
987
(3.80 secs, 22,689,057,424 bytes)
-- Now it's computing it twice! Once for the `seq` and then a second time to display the value?
-- What if we use BangPatterns? Let's just put a variable through a dummy
-- function that returns its own input (hypothetically) after evaluation.
*Main> eval !x = x
(0.00 secs, 62,800 bytes)
*Main> baz = eval bar
(0.00 secs, 62,800 bytes)
*Main> baz
987
(1.81 secs, 11,345,102,464 bytes)
-- still not doing the computation ahead of time.

So it looks like I fundamentally don't understand what 'strictness' is, and I still can't get GHCi to store anything but thunks into variables. Is this just a quirk of GHCi, or does this apply to GHC? How do I get my code to just store a normal-ass value like 987 in a variable?

Answer 1

Score: 8

There are a few different things going on here, unfortunately.

Type-class polymorphism

Joseph Sible's answer pointed this out, but I think there are a few more details worth covering. If you type x = 1 + 1 into GHCi, x will have type Num a => a; it's polymorphic, and the type variable has a constraint on it. At runtime such a value is represented as a function, with a parameter that is basically the Num instance for the type you want to use (the runtime instance is called a "dictionary").

This is necessary because you could type print (x :: Integer) and GHCi has to figure out what value 1 + 1 is for the Integer type, or print (x :: Complex Double) and the compiler has to figure out what 1 + 1 is for the Complex Double type; both of those types use a different definition of + (and of the fromInteger function that says what integer literals mean). You can even create a wholly new type after defining x, give it a Num instance, and then GHCi still has to be able to print (x :: MyNewNumType). All of that basically means that x :: Num a => a cannot be evaluated once-and-for-all when you define it; instead it is stored as a function, and every time you use it as some particular type it gets evaluated again, so that its definition can use the + and fromInteger functions appropriate to the type at this usage.
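You can see this directly with the question's foo; here is a sketch of a GHCi session (exact type output may vary slightly between GHC versions):

λ foo = foldl (++) [] (take 5000 $ repeat [10, 123, 323, 33, 11, 345, 23, 33, 23, 11, 987])
λ :t foo
foo :: Num a => [a]
λ bar = last foo
λ :t bar
bar :: Num a => a

Both foo and bar are dictionary-taking functions under the hood, which is why every use of bar at a concrete type re-runs last foo from scratch.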

This is typically not a huge problem in "normal" Haskell, written in modules rather than entered into the GHCi prompt. In normal Haskell the compiler will be able to analyse the full module to see where x is used, so it will often be able to infer a particular type for x (i.e. if somewhere else in your module you use x to index into a list with the !! operator, which only accepts Int as the index, then x will be inferred as type Int, not as the more generic Num a => a). And if that fails because none of the usage sites in the module require x to be a specific type, then there is also the monomorphism restriction that will force x to be assigned a concrete type anyway, invoking the defaulting rules, which would pick Integer in this case. So normally a definition will only end up with a type like Num a => a if it specifically has a type signature requesting that type. (The monomorphism restriction applies to bindings of bare variables without any function arguments, so it will apply to x = 1 + 1 and f = \x -> x + 1, but not to f x = x + 1; see the sketch below.)
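As a sketch of how that plays out in a compiled module (where the monomorphism restriction is on by default):

-- No signature and no constraining use site: the monomorphism restriction
-- plus defaulting give x the concrete type Integer, so its value is
-- computed at most once and then shared.
x = 1 + 1

-- A bare-variable binding: the restriction applies here too,
-- so f defaults to Integer -> Integer.
f = \y -> y + 1

-- A binding with function arguments: the restriction does not apply,
-- so g keeps the polymorphic type Num a => a -> a.
g y = y + 1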

But the monomorphism restriction is disabled in GHCi. It was in fact specifically added to the language to avoid performance problems due to type-class polymorphic values being recomputed on every use instead of cached. But it only produces sane results when applied to entire modules, where the compiler can see every use of a variable and assign a correct monomorphic type. In GHCi, where you enter things one line at a time, forcing the compiler to pick a monomorphic type for bindings like x after seeing only the definition worked extremely poorly; it would very often assign the wrong type for the usage you intended, and then generate spurious type errors when you tried to use it. So eventually the monomorphism restriction was disabled by default in GHCi, making it much more usable at the cost of these performance issues, with values being recomputed more often than necessary.

So this issue is mostly a problem for you because you're working in GHCi. In "normal" Haskell it still can happen, but default settings and good practice (like giving type signatures to all your top-level bindings) make it a very minor concern in practice.
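Incidentally, you can re-enable the monomorphism restriction for a GHCi session if you want the caching behaviour back, accepting the usability downsides described above. A sketch (timings omitted; they will vary on your machine):

λ :set -XMonomorphismRestriction
λ foo = foldl (++) [] (take 5000 $ repeat [10, 123, 323, 33, 11, 345, 23, 33, 23, 11, 987])
λ bar = last foo
λ bar        -- slow the first time
987
λ bar        -- fast now: foo and bar defaulted to concrete types, so the results are cached
987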

Haskell is timeless

This is an extremely common mistake when first learning about applying strictness in Haskell, so you're in good company! I'm going to talk about seq here, but the same concepts apply to things like $! and bang patterns; for example f !x = (x, x) is basically just a convenient shorthand for f x = x `seq` (x, x).

baz = bar `seq` bar isn't doing what you think it's doing. In fact it isn't doing anything at all, it's equivalent to simply baz = bar.

The reason is that seq doesn't cause evaluation now, and the reason for that is not some quirky technical thing: there is no such thing as "now" in Haskell. This is the fundamental difference between a pure declarative language and an imperative language. Imperative code is a sequence of steps that are executed in order, so the notion of time is inherent to the language; there is a "now" that "moves through" the code. Pure declarative code isn't a sequence of steps, it's a set of definitions. There's no "now" because there's no "time"; everything just is. baz = bar `seq` bar is a definition of what baz is, not a step to execute; there's no "before" and "after" to define the "now" when seq could cause bar to be evaluated.

Moving away from this concept of code being inherently tied to a notion of time is one of the trickiest mental leaps imperative programmers need to make when they come to a pure language like Haskell for the first time (in a pure but strict language you don't necessarily even need to; it's still reasonably easy to mentally model such code as a sequence of steps, even if I would argue it's still not the best conceptual model).

And it's complicated by GHCi, because the interpreter obviously does have a notion of time; you enter bindings one at a time, altering the set of definitions that the interpreter is aware of. In normal module Haskell that set of definitions is static, but in GHCi it changes over time, inherently defining a "now". But seq is a feature of Haskell, not of GHCi, so it has to make sense without GHCi's notion of time. Even though the set of definitions changes over time in a GHCi session, those definitions themselves still don't have any notion of time; they are written in Haskell, which is timeless.

So without any notion of time seq can't actually control when something is evaluated. What it does instead is control what gets evaluated. Evaluation in Haskell is driven by need; if x is being evaluated and x is defined by x = a + b, then the evaluation of x imposes a need to evaluate a and b (and +; technically + is what is immediately needed, and the definition of + will decide whether a and/or b is needed, but for most types' definitions of + both will end up needed). All seq does is add additional "need". In x = a `seq` b, x is defined to be equal to b (because that's what seq returns), so evaluating x will obviously need to evaluate b; the special thing about seq is that it says that evaluating x will also need to evaluate a (and it can do this without knowing anything about a or b). But if we never need to evaluate x itself then the fact that seq is used in the definition of x won't do anything.

In GHCi's notion of time, x = a `seq` b doesn't evaluate a "now" when that binding is entered. It will evaluate a if x is ever used (in a manner that requires it to be evaluated), which will be after some later entry at the command prompt.
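You can watch this happening with GHCi's :sprint command (described in more detail further down); a minimal sketch of a session:

λ a = not True
λ x = a `seq` "done"
λ :sprint a
a = _
λ x
"done"
λ :sprint a
a = False

Defining x didn't evaluate a; only evaluating x did, exactly as the extra "need" added by seq dictates.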

This is how we come to baz = bar `seq` bar just being equivalent to baz = bar. baz = bar `seq` bar means that when baz is evaluated, we will also need to evaluate bar. But baz is equal to bar, so of course we're going to need to evaluate bar by definition!

Similarly bar = foo `seq` (last foo) doesn't help, because evaluating bar requires evaluating last foo, which is going to have to evaluate foo anyway.

So this too is an issue that is more of a problem in GHCi than in normal Haskell. In GHCi it's very easy to think in terms of the "now" of you entering things into the command prompt, and thus have a clear idea in your head of what "evaluate this now" would mean and expect that adding strictness would do that. In normal Haskell it's still easy for beginners to make that mistake, but there it's clearer that the notion of "now" doesn't make much sense, which hopefully makes it easier to develop accurate expectations of what seq is able to do.

But we can use time when we have it

There is a quick hack you can use to exploit GHCi's notion of time, though.

If I enter b = not True into GHCi (an example avoiding numbers so as not to trip over the type-class polymorphism complication), then b is defined but it's not evaluated yet. Using strictness (whether via seq, $!, or anything else) inside the definition of b won't help, because it can only make extra things be evaluated when b is evaluated; it can't change the fundamental reality that merely defining b doesn't evaluate b to trigger any of that extra strictness.

But after I've defined b I can immediately enter a new command that will cause b to be evaluated. In this case a simple b would work fine, since GHCi printing it will inherently force evaluation. But if I didn't want to actually print b, I could enter this:

b `seq` ()

This tells GHCi to print (), but to do so it first has to evaluate b (without doing anything else with it).

The other thing that can give us a notion of time is IO. IO code is still Haskell code, which means it's technically "timeless". But the interface of IO is designed to model, in timeless Haskell, the things that can be done by interacting with the real world. The real world is definitely not timeless, so that notion of time inherently enters the picture in IO code; effectively, the data dependencies of an abstract monad construct a notion of time out of Haskell's timeless semantics, and IO just hooks up execution with the real world so that the constructed time matches up with real-world time. This means that a function meaning "evaluate this now" does make sense in IO, and indeed there is the function evaluate :: a -> IO a (you have to import it from Control.Exception though). This uses IO's notion of time rather than GHCi's, so it can even be used in module Haskell! But it of course only works in IO code.

So using evaluate bar as a command in GHCi will also work as a way to evaluate bar "right now". (evaluate returns the value though, so GHCi will print it; if you don't want that then bar `seq` () would still be preferable)

You can even incorporate evaluate into the definition of a variable in GHCi to make it evaluated as it is defined, but you have to use the monadic binding <- instead of normal definition with = (exploiting the fact that the GHCi prompt is more-or-less "inside IO").

λ b = not True
b :: Bool
λ :sprint b
b = _
λ b
False
it :: Bool
λ :sprint b
b = False
λ c <- evaluate $ not True
c :: Bool
λ :sprint c
c = False

(Note here I'm using the :sprint GHCi special command; instead of converting a whole value to a string with show, :print and :sprint print out a string intended to show the structure of the value without forcing any evaluation; :sprint just shows _ where an unevaluated value is found, while :print also includes information about its type. These can be very handy for checking whether other commands have caused evaluation; it's more precise than relying on the time it takes for a command to be processed. You have to use them on a variable though, not an expression)

Similarly, some other monads like State can also be viewed as building a notion of time on top of Haskell's timeless semantics, but they don't allow truly arbitrary side effects like IO does, and their constructed timelines don't necessarily match up with the real world's time, since they are evaluated whenever timeless lazy Haskell needs them (and thus might be run zero or multiple times); they're more of a simulation of time. So an "evaluate this now" function in a "now" derived from those timelines wouldn't be as useful.

Weak head normal form

Finally, there is still a problem with trying to strictly evaluate foo. Say you use a type annotation to avoid the polymorphism and use an extra foo `seq` () command to evaluate it, like this:

λ foo = (foldl (++) [] (take 5000 $ repeat [10, 123, 323, 33, 11, 345, 23, 33, 23, 11, 987 :: Integer]))
foo :: [Integer]
λ foo `seq` ()
()
it :: ()

This hasn't actually evaluated "all" of foo:

λ :sprint foo
foo = 10 : _

We've only evaluated up to the first element of the list! That's probably not what you wanted.

So far in this post we've just been talking about a value being evaluated without any clarification, like that's a single atomic step. That's true for very simple values like Bools, Ints, Chars, etc. But most values (like lists) have more structure than that, potentially containing many sub-values that can be evaluated or not. So when we say that such a value is being evaluated, there are many possibilities for what we could mean.

In Haskell the "standard" notion of evaluation is always to weak head normal form. That has a fancy technical definition, but basically it means evaluated until we have the outermost data constructor (like the : for lists). This is because most of the "need" in Haskell's need-driven evaluation comes from pattern matching; at minimum the system will need to know what the outermost data constructor is to decide which branch to take in pattern matching (e.g. many list functions will have a case for [] and a case for :). All the values contained in that outermost constructor (e.g. the head and tail of the list) will be left unevaluated if they haven't already been evaluated by something else, until more pattern matching needs to examine those contained values.

So when we say that foo `seq` () is equivalent to a () but with additional need to evaluate foo, evaluating the () only causes foo to be evaluated as far as the very first list cell (it doesn't even evaluate the value inside that list cell, but because it ultimately came from a literal in your source code it was already evaluated, which is why :sprint showed 10 : _ and not _ : _).
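Here is a small sketch of these degrees of evaluation as seen through :sprint (exact output may differ between GHC versions):

λ xs = map (*2) [1, 2, 3] :: [Int]
λ :sprint xs
xs = _
λ xs `seq` ()
()
λ :sprint xs
xs = _ : _
λ length xs `seq` ()
()
λ :sprint xs
xs = [_,_,_]

seq forces only the outermost (:) constructor, while length forces the whole spine of the list; in both cases the elements themselves remain unevaluated thunks.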

DeepSeq

The module Control.DeepSeq has tools for forcing deep evaluation. For example there is deepseq that can be used in the same way as seq, but it forces the left argument to be completely evaluated. There is also force that can be used in combination with evaluate to force deep evaluation "now" (using IO's notion of "now").

This is all based on a type class NFData though (the NF in the name is short for "normal form"; normal form is fully evaluated, while weak head normal form is evaluated to the outermost constructor). You can't deepseq a value of a type that doesn't implement the class. Most standard types already have instances, so this works fine for foo :: [Integer]. But if you're using your own types you will need to give them instances of NFData.
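For your own types the instance can usually be derived via Generic; a sketch, where the type MyPair is made up for illustration:

{-# LANGUAGE DeriveGeneric #-}
import GHC.Generics (Generic)
import Control.DeepSeq (NFData(..))

-- A hypothetical user-defined type:
data MyPair = MyPair Int [Int]
  deriving (Show, Generic)

-- An empty instance body uses the Generic-based default for rnf:
instance NFData MyPair

-- Alternatively, written by hand; rnf must fully evaluate every field:
-- instance NFData MyPair where
--   rnf (MyPair a b) = rnf a `seq` rnf b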

So this will actually define foo and then fully evaluate it (without printing it):

λ foo = (foldl' (++) [] (take 5000 $ repeat [10, 123, 323, 33, 11, 345, 23, 33, 23, 11, 987 :: Integer]))
foo :: [Integer]
λ foo `deepseq` ()
()
it :: ()

But deep evaluation inherently involves a full traversal of the data (you can't evaluate everything without looking at everything). So using deepseq a lot can be very inefficient; if you were going to evaluate it later anyway you're wasting time doing an additional pass over the data (and if you weren't going to use it later then you're forcing the system to spend time running code it could have skipped entirely).

It's useful when you have a computation depending on large structures that will result in only a small structure (but bigger than a single number, or else seq would have been enough to force it); if it would otherwise be left only partially evaluated for a long time, deepseqing it might avoid having to keep the large structure in memory at all, which can be more efficient (even though you've added an extra small pass over the small result structure).

It's probably more commonly used though when the computation might throw an exception (or has side effects smuggled in with unsafePerformIO et al); deepseqing it will force those exceptions or side effects to manifest at a known point, instead of whenever later code happens to need a specific part of the result. This is where the evaluate . force combo comes in particularly useful, since you're forcing it in IO where you have a well-defined "now" and also can handle exceptions.
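A sketch of that evaluate . force combo (the helper name forceNow and the exception-handling style here are made up for illustration, not from the original answer):

import Control.DeepSeq (NFData, force)
import Control.Exception (SomeException, evaluate, try)

-- Hypothetical helper: fully evaluate a value "now" (in IO's sense of now),
-- surfacing any exception hidden inside its thunks at this well-defined point.
forceNow :: NFData a => a -> IO (Either SomeException a)
forceNow = try . evaluate . force

In GHCi you could then run r <- forceNow (last foo) and inspect r, knowing that any exception lurking in the thunks has already been flushed out.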

But generally deepseq is better avoided if you can. It's an important tool to have available for sparing use, not a way of making Haskell behave exactly like strict languages.

Answer 2

Score: 6

问题在于foobar是多态的。在运行时,=>基本上变成了->,并隐式传递了类型类的字典。这阻止了保存您想要的结果,因为它只会保存函数而不是其结果。如果您将类型限制为像[Int]/Int而不是Num a => [a]/Num a => a,那么它们将只被评估一次,而不是每次都被评估。

至于为什么您试图让它提前评估并没有成功,那是因为无论您在某个定义中使用多少个seq!,您只能通过它们完成“使那个东西在这个东西之前评估”而不能“使那个东西现在评估”。

英文:

The problem is that foo and bar are polymorphic. At runtime, => basically becomes -> and implicitly gets passed a dictionary of the typeclass. This prevents saving the result you want since it's only going to save the function and not its result. If you constrain your types to something like [Int]/Int instead of Num a => [a]/Num a => a, then they'll only be evaluated once instead of every time.
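For example, redoing the question's session with a concrete annotation (a sketch; actual timings will vary):

*Main> foo = (foldl (++) [] (take 5000 $ repeat [10, 123, 323, 33, 11, 345, 23, 33, 23, 11, 987])) :: [Int]
*Main> bar = last foo
*Main> bar   -- slow the first time, while the thunk is forced
987
*Main> bar   -- effectively instantaneous now: bar's thunk has been overwritten with 987
987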

As for why your attempts to get it to evaluate ahead of time aren't working, that's because no matter how many seqs or !s you use in the definition of something, all you can accomplish with them is "make that thing evaluate before this thing does", not "make that thing evaluate right now".
