What is the trade-off between Lazy and Strict/Eager evaluation?

So this concept of lazy evaluation gets thrown around a lot, especially when reading about functional programming, Java streams, etc.

> Streams are lazy; computation on the source data is only performed when the terminal operation is initiated, and source elements are consumed only as needed.

> Haskell is lazy. That means that unless specifically told otherwise,
> Haskell won't execute functions and calculate things until it's really
> forced to show you a result.

Now the way I have understood this is that if I have a list of data I wish to perform N operations on, lazy evaluation will only ever make 1 pass over the entire list as opposed to N. Why is this so desirable? It seems to me that making N passes over a single list results in the same number of operations as making 1 pass over the list but performing N operations at each element contained in the list.

My questions are:

  1. Is lazy evaluation always good and if not, what trade-off are we making by accepting it?
  2. How to analyze the performance of lazy algorithms?
  3. What are some typical use cases of lazy evaluation?
  4. Does a programmer have any control over this? Can I make lazy functions in a language which does not support lazy evaluation right out of the box?

Could someone please answer this in a language-agnostic manner, since I am more curious about the concept than about any particular language.

Answer 1

Score: 11


To a certain extent this is a topic you could write a book about, but I think we can give a StackOverflow-sized overview.

A quick note on terminology

Technically speaking, strict-vs-non-strict and eager-vs-lazy are two different distinctions talking about different things. Strictness is technically a property of program semantics, used when we're talking about a level where there is no such thing as actual computers, RAM, or evaluation; whereas lazy evaluation is a strategy for actually evaluating programs, and eagerness is the opposite strategy.

However, generally one uses lazy evaluation (at the level of the entire language) to implement a non-strict semantics, and if one wants a strict semantics one uses eager evaluation. So lazy and non-strict often get used interchangeably when being less formal, and likewise eager and strict are often used interchangeably.

1. Is lazy evaluation always good and if not, what trade-off are we making by accepting it?

It is absolutely not always good. Lazy evaluation is generally regarded as worse for performance than eager evaluation; it usually involves allocation of memory structures that "remember" the operation for later, which is slower than just doing the operation if you're definitely going to do it anyway.

Naively doing everything lazily typically adds a constant factor overhead over doing exactly the same operations eagerly. The constant factor is mostly small enough to not make a huge difference. But if the operation is very small and would only produce an immediate value (things like a machine integer, rather than a heap-allocated object), then the overhead of laziness is still only a constant factor but that constant factor is quite large relative to the "intrinsic" cost of the operation; if your program is mostly doing this kind of thing then lazy evaluation does make a significant negative difference.

Lazy evaluation also makes it very easy to lose track of the exact order various operations will be executed in. Rather than things being done in the order you wrote the code they are done in an order determined by the data dependencies between operations; things are only executed when their results are needed. Often this "need" is determined by code that is very non local.

In pure functional code this often doesn't matter very much, since the results of such code are purely determined by the code you wrote, regardless of the order in which various things are executed. In computer science theory, analysing a simple pure lambda calculus, there is a hard mathematical proof that if any evaluation order of a program can produce a well-defined result then lazy evaluation will produce that result; eager evaluation may encounter errors or infinite loops that lazy evaluation would avoid. This means a pure functional programmer doesn't have to actually care very much about exactly what order things will be executed in. No matter what execution order they have in their head, if it produces a well-defined result then actual lazy evaluation will produce the same result, even if the execution order they had in their head is different from actual lazy evaluation. (This is assuming that the language faithfully transmits the properties that were proved of a simple lambda calculus, of course.)
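As a concrete illustration, here is a minimal sketch in Python, simulating the lazy argument with an explicit zero-argument function (Python itself is eager; const42 is a made-up name):

    # Eager evaluation computes the argument before the call, so this
    # would raise even though the argument is never actually used:
    def const42(x):
        return 42

    # const42(1 // 0)   # ZeroDivisionError, before const42 even runs

    # Delaying the argument behind a thunk (simulated laziness) avoids
    # the error, because the thunk is never forced:
    def const42_lazy(x_thunk):
        return 42          # x_thunk is never called

    print(const42_lazy(lambda: 1 // 0))   # prints 42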

In code that has side effects, losing track of the order in which operations will be executed is a nightmare for the programmer. It makes it very easy to make mistakes that are incredibly hard to debug. If two pieces of code will be executed and both change a shared variable, you need to be able to easily and accurately predict the order in which they will run, in order to know the final state of the variable. So programmers writing impure code require a pretty thorough operational understanding of the behaviour of the compiler/interpreter. For this reason you basically never see "all operations are lazy by default" in a language that allows untracked side effects; if these languages support lazy evaluation directly they typically require the programmer to explicitly opt in to lazy evaluation for parts of their code, and trust the programmer to only do that where it is safe (i.e. where they have written pure code, even though the language will not enforce this).

So why do we want lazy evaluation at all?

I've now made it sound like lazy evaluation is always bad. But there are some large caveats. Sometimes lazy evaluation improves performance, or allows an algorithm to work at all.

Frequently this is when a computation makes a pass over a very large data set; lazily evaluated code might be able to process the whole data set without ever needing all of it to be resident in memory at once, which can make a massive difference to performance. But sometimes lazy evaluation also just performs its operations in an order that is better for the CPU cache, garbage collector, etc., even when eager evaluation of the same code wouldn't use significantly more memory.
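For example, here is a minimal sketch of that streaming style in Python, using generators to make the laziness explicit (the file path, record format, and "ERROR" filter are hypothetical):

    import itertools

    # A lazy pipeline over a large file: no stage materialises the whole
    # data set, so memory use stays roughly constant whatever the file size.
    def read_records(path):
        with open(path) as f:
            for line in f:                 # reads one line at a time
                yield line.rstrip("\n")

    def first_errors(path, n=10):
        records = read_records(path)       # nothing has been read yet
        errors = (r for r in records if "ERROR" in r)   # hypothetical filter
        return list(itertools.islice(errors, n))        # pulls only ~n matches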

Lazy evaluation also often enables more de-coupled code. Code that produces a data structure can be written in a simple direct style to produce "all" of it, even if that is infinite. Code that consumes the data structure simply examines as much of the structure as it wants to, and by examining it will cause the producer to actually run "just enough" to produce the needed data. So the amount of the data structure that is produced can be made to be exactly as much as the consumer needs, no matter how it determines that, without the producer being aware of the consumer at all.

Under eager evaluation any data structure must be produced in its entirety before a consumer can look at any of it. If that is undesirable (because the structure is very large or takes a very long time to finish), then we need a way for the producer to only produce some of the structure. This usually then involves additional arguments to control how much is produced, may involve additional complexity in the data structure to allow the consumers to differentiate between "this is as far as we've generated so far" and "this is where the data really ends", might need the producer to be capable of resuming production from a previous partial result, etc. This can easily add a lot of complexity to code implementing a fairly simple idea, and the additional complexity often ends up coupling the producer to the consumer much more tightly than a lazy producer and consumer need to be.

That previous discussion might have been a little abstract. As an example, consider a program that produces a move tree for analysis of a game like chess. A lazy producer can just return a tree of every possible move in every possible position, without knowing anything about what anyone wants to do with it. It might produce a structure Move with the fields player, startingSquare, endingSquare describing the move itself, and another field followOnMoves that is simply a list of every possible Move that could occur after this one; each of those Moves will of course again contain another list of possible follow-on moves, and so on to infinity.

If this was produced by a lazy function, the consumer can just explore the tree without having to know anything about how it was produced. Each of those fields (but most significantly followOnMoves) will not actually exist when the consumer starts running, they will just contain lazy references to the code that needs to be run to populate them, if the consumer ever actually wants to look at them. So if the consumer was doing something like minimax pruning, the producer will just automatically never waste time producing the parts of the tree that the consumer doesn't decide to look at. Several different consumers can exist that do different things with the same data structure, causing the same single producer code to generate different parts of the tree automatically. Which parts of the tree are needed can even be determined interactively by a human user! The producer and consumer implementations can be very independent from one another; basically all they share is the definition of that simple data type Move.
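As a rough sketch of this shape when simulated in an eager language (Python here), with the followOnMoves field backed by a cached property acting as the thunk; legal_moves and apply_move are hypothetical rule functions, not part of the original answer:

    from functools import cached_property

    class Move:
        # One node of the conceptually infinite move tree.
        def __init__(self, player, startingSquare, endingSquare, position):
            self.player = player
            self.startingSquare = startingSquare
            self.endingSquare = endingSquare
            self._position = position      # game state after this move

        @cached_property
        def followOnMoves(self):
            # Nothing here runs until a consumer reads this field, and then
            # only one level of the tree is built; deeper levels stay pending.
            # legal_moves yields hypothetical (player, start, end) triples.
            return [Move(p, s, e, apply_move(self._position, (p, s, e)))
                    for (p, s, e) in legal_moves(self._position)]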

An eager producer simply can't return a Move tree like this, as it is essentially infinite (I think under some competition rules chess technically isn't infinite, as there's a limit on the number of times a position can be repeated, but the whole tree is still impractically vast). Either it has to return a small portion of the move tree (which means it needs to know what kinds of portions are useful to the consumer, essentially embedding the consumer logic into the producer), or it has to expose various functions that only perform single steps, and the consumer is now responsible for calling those single-step functions when it wants more data (essentially embedding the producer logic into the consumer).

Either way, the two sides may have to know a lot more about the implementation of the other in order to cooperate on the strategy for generating data as-and-when it is needed. You can design good solutions to this problem that still leave the eager producer and eager consumer reasonably decoupled, but designing a good interface that is flexible enough for all uses while still being performant can be a tricky problem, and it can happen quite a lot that it simply isn't a problem you need to think about when your code is lazily evaluated.

2. How to analyze the performance of lazy algorithms?

This part I really don't think I can sum up well.

Basic big-O complexity analysis still works, and doesn't even change very much if the computation isn't fundamentally relying on laziness. If the operations performed are exactly the same regardless, just in a different order, you can do the same big-O analysis you would do if the code was strictly evaluated. (Big-O complexity doesn't account for effects like cache locality, extra memory for thunks, or running out of memory, of course.)

When the algorithm more fundamentally relies on laziness (and on things not being executed at all if they're not needed), then this won't work of course. But I don't think I can do that topic justice here, any more than I could explain "how to analyze the performance of eager algorithms" in a single post.

3. What are some typical use cases of lazy evaluation?

This is far too broad. How would you answer "what are some typical use cases of eager evaluation?" The answer to both is really "all the typical use cases of programming in general". Every task can be implemented with both, but some things are just done differently when you're working with eager or lazy evaluation; you would choose different algorithms to implement the task.

However as I've mentioned above, one general thing I can say is that lazy evaluation can be particularly ergonomic in cases where an eager algorithm needs a lot more code to explicitly manage when and how much of a very large data set is in memory at once.

Lazy evaluation is also critical for a lot of control structures, in any language. For example, if/then/else wouldn't be very useful if the then and else parts were always evaluated before you could even start executing the conditional selection logic. So almost every language has this very limited sort of "laziness" built in for a few specific parts of the syntax. But in a language where everything is lazy, you can make your own control structures. In Haskell, things analogous to while loops and for-each loops can simply be implemented as ordinary library code, with no need for the compiler to specially implement them. So this is another "typical use case" that stands out in comparison to eager evaluation.
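As a rough illustration of the manual encoding this requires in an eager language (Python; if_then_else and while_loop are made-up names):

    # In an eager language, user-defined control structures need the
    # "branches" wrapped in thunks so they aren't evaluated up front.
    def if_then_else(cond, then_thunk, else_thunk):
        return then_thunk() if cond else else_thunk()

    def while_loop(cond_thunk, body_thunk):
        while cond_thunk():
            body_thunk()

    x = 0
    # The unused branch is never evaluated, so no ZeroDivisionError:
    print(if_then_else(x == 0, lambda: "zero", lambda: 1 // x))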

4. Does a programmer have any control over this? Can I make lazy functions in a language which does not support lazy evaluation right out of the box?

If you have first-class functions (or other features that can simulate them) then you can always simulate lazy evaluation. Instead of relying on the runtime system implicitly creating a thunk (which is what we call the in-memory record of an operation that will be run later when required), you can just explicitly store a function that will generate the value later and explicitly call it when required. It takes a little more finesse to ensure that such a function is only ever run to produce the value once, no matter how many references there may be - but that too can be done. Some languages even have enough flexibility to wrap all this up in an interface that makes it look like you're just using values as normal, keeping the thunk-functions under the hood.
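A minimal sketch of such an explicit, run-at-most-once thunk in Python (the Thunk class is illustrative, not a standard library type):

    class Thunk:
        # Wraps a zero-argument function and runs it at most once,
        # caching the result: explicit call-by-need in an eager language.
        def __init__(self, compute):
            self._compute = compute
            self._done = False
            self._value = None

        def force(self):
            if not self._done:
                self._value = self._compute()
                self._done = True
                self._compute = None   # drop the closure so it can be collected
            return self._value

    expensive = Thunk(lambda: sum(i * i for i in range(10**6)))
    a = expensive.force()   # the sum is computed here
    b = expensive.force()   # cached; the sum is not recomputed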

Languages with lazy-by-default evaluation also typically allow the programmer to explicitly make certain things eager. A lazy language aiming for good performance will also often have an optimising compiler that aims to detect when an operation does not benefit from laziness and perform it eagerly instead. Haskell, for example, promises you a non-strict semantics by default, and we usually think of it as using lazy evaluation to achieve that, but it actually does a lot of optimisation and will evaluate lots of your code eagerly; it just promises not to do so where that could change the result of your code, and tries not to do it where it would make your code slower.

So whether you're working in a lazy-by-default language or an eager-by-default language, you would have some ability to opt-in to the other evaluation strategy (though with varying amounts of effort required).

Answer 2

Score: 5


> Now the way I have understood this is that if I have a List of data I wish to perform N operations on, lazy evaluation will only ever make 1 pass over the entire list as opposed to N.

I suppose you could see it this way in some specific cases, but this is definitely not a good characterization of lazy evaluation in general. There seem to be some misunderstandings here:

> I have a List of data

If you already have a list of data, say, read from a file, then this isn't really any different between a lazy and a strict language. In both cases, the list will just be there in memory, regardless of how many passes you make over it.†

> lazy evaluation will only ever make 1 pass over the entire list

Definitely not true in general. If you map two different functions over a list, then this will in general require two separate passes over the list. In principle the compiler could reorder this, fuse both passes into one, and indeed GHC sometimes does this sort of thing, but that doesn't really have much to do with lazy evaluation.

What is true is that if you define a new list l' by mapping a function over an existing one, then N accesses to l' will only require one pass of the mapping operation. But that's again exactly the same as in a strict language. The only difference is that in a strict language, the pass would happen right there where you write map, whereas in a lazy one it would wait until the results are needed for the first time. So,

> as opposed to N

doesn't really make sense. In a strict language it's also just one pass, e.g. in Python with

    l = someListOfData
    l2 = list(map(f, l))   # note: in Python 3, map() itself is lazy, so list() forces the pass here

Where the premise does become true is when, in a strict language, you explicitly delay the evaluation, by using something like

    l = someListOfData
    l2 = lambda: list(map(f, l))

This is manual "laziness", but Python would make the mapping pass over and over again whenever somebody requires l2().
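(True lazy evaluation also computes a value at most once, so a closer manual encoding caches the result; a minimal sketch with placeholder f and l:)

    from functools import cache   # Python 3.9+; lru_cache on older versions

    f = lambda x: x * x           # placeholder for the mapped function
    l = [1, 2, 3]                 # placeholder for someListOfData

    @cache
    def l2():
        print("making the pass")  # runs on the first call only
        return [f(x) for x in l]

    l2(); l2()                    # "making the pass" is printed once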

> Is lazy evaluation always good and if not, what trade-off are we making by accepting it?

Lazy evaluation is a tool. It's always good if you use it when it's appropriate. It's not always better to have lazy evaluation for a given piece of code.

For a strongly simplified contrast, the trade-off hinges on this: laziness decouples the denotational semantics (what a value should be, if it is ever needed) from the operational semantics (when, or indeed if, that value is ever computed at all). The denotational side is in many cases what you're really interested in, so a lazy language is nice for focusing on that.

But the flip side is that the computations still need to happen on a real computer, with real CPU time and in particular real memory, and reasoning about that and making guarantees often becomes more difficult when laziness is involved.

Ben gave a great discussion of more aspects and your other questions, so I'll leave it at that.


† It's worth noting that Haskell traditionally also did lazy IO in addition to lazy evaluation, i.e. you could read a file and the runtime would only actually read from disk as the elements were required. This is, however, now widely recognized as bad, and modern Haskell libraries don't encourage it anymore.

Answer 3

Score: 3


I’ll try to summarise briefly and in a language-agnostic way.

> Is lazy evaluation always good and if not, what trade-off are we making by accepting it?

No; it’s a space–time trade-off.

In eager evaluation, you push a whole value into a function’s input, and it pushes a whole value out of its output.

This can’t avoid producing extra output because the function doesn’t know what you will need. If you don’t use all of the output, this wastes (potentially a lot of) space, and also wastes some time computing the data that later won’t be needed. To avoid overspending, you need to convert dataflow into explicit control-flow (e.g., generators instead of lists).
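(A minimal Python sketch of that conversion, with made-up function names:)

    # Eager: builds the entire list even if the caller needs one element.
    def squares_list(n):
        return [i * i for i in range(n)]

    # Explicit control flow: a generator yields on demand, so output
    # that is never consumed is never computed.
    def squares_gen(n):
        for i in range(n):
            yield i * i

    first = next(squares_gen(10**9))   # instant; no billion-element list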

In lazy evaluation, you pull a subvalue out of a function’s output, and it pulls a subvalue in through its input.

This can’t avoid over-retaining input (and captured variables), because you don’t know what the function will need. If you do use all of the output, then delaying the work was a waste of space: you could have started computing it earlier, instead of allocating a thunk. To avoid overspending, you need to convert control flow into explicit dataflow (e.g., in Haskell, using seq, or various syntactic sugar for that).
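(A rough Python illustration: deferring everything builds a chain of thunks that over-retains memory, while forcing each step, the moral analogue of seq, keeps space constant. The thunk chain is contrived for demonstration:)

    # Deferring every step builds a chain of 100,000 nested thunks:
    acc = lambda: 0
    for i in range(100_000):
        acc = (lambda prev, i=i: (lambda: prev() + i))(acc)
    # Calling acc() now would recurse ~100,000 deep: the classic lazy
    # space leak (here it would actually hit Python's recursion limit).

    # Forcing each step as you go (in the spirit of seq) stays flat:
    total = 0
    for i in range(100_000):
        total += i                      # evaluated immediately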

> How to analyze the performance of lazy algorithms?

The Banker’s Method. There’s a chapter of Purely Functional Data Structures by Chris Okasaki that describes this in detail.

In eager evaluation, you tally up time costs in code, because you only get back a data structure after you pay the entire price to compute it. In lazy evaluation, you tally up time costs in data structures instead: you can get the data structure straight away, but each delayed computation is a “debt” that must be paid to use it.

> What are some typical use cases of lazy evaluation?

You can write nice readable dataflow, with normal data types, and get the automatic control-flow needed to give you some incremental computation and caching.

It also gives you equational reasoning in conjunction with referential transparency. I can’t overstate the benefits of this for communication with coworkers. If you write some code X, and I can easily prove that X = Y, and Y is better in some way, then I can be absolutely confident about suggesting Y, even if I don’t know how it works.

> Can I make lazy functions in a language which does not support lazy evaluation right out of the box?

Depending on the language, you can encode it, but the resulting code is often less clear. Evaluation strategy is a deep aspect of a language, and it has a big effect on your approach to problem-solving using that language.
