How do I make z3 floating-point solutions not random?

Question

I am currently doing regression testing, so I need fixed results rather than random ones. I have tried everything I could find, including setting the random seed to 0, set_option('smt.arith.random_initial_value', False), and so on, but nothing changes: the floating-point solutions still seem random.

Here is a simple example:

from z3 import *

# Options intended to make the search deterministic
set_option('smt.arith.random_initial_value', False)
set_option('smt.random_seed', 0)

def test_short_expont():
    float_type = Float32()
    solver = Solver()
    solver.set("randomize", False)
    fp_5 = FP('fp_5', float_type)
    fp_10 = FP('fp_10', float_type)

    # Require fp_5 + fp_5 == fp_10 under round-nearest-ties-to-even
    solver.add(fpEQ(fpAdd(RNE(), fp_5, fp_5), fp_10))

    if solver.check() == sat:
        model = solver.model()
        return model
    else:
        return

I expect the solutions to remain the same, but every time I run it, I get different results.

[fp_10 = 1.3322999477386474609375*(2**-112),
 fp_5 = 1.3322999477386474609375*(2**-113)]

[fp_10 = -1.66065156459808349609375*(2**1),
 fp_5 = -1.66065156459808349609375]

[fp_10 = 1.2059905529022216796875*(2**-126),
 fp_5 = 0.60299527645111083984375*(2**-126)]

Fixing the random seed seems to work for integer solutions, but it has no effect on the floating-point solutions.

Is there any way to make the floating-point solutions not random?

Answer 1

Score: 1

The problem here isn't z3, but rather Python, I'm afraid. If you add the following line:

    print(solver.sexpr())

right before you call solver.check(), grab the SMTLib program it prints, and add a check-sat to it, you get:

(declare-fun fp_10 () (_ FloatingPoint 8 24))
(declare-fun fp_5 () (_ FloatingPoint 8 24))
(assert (fp.eq (fp.add roundNearestTiesToEven fp_5 fp_5) fp_10))
(check-sat)
(get-model)

If you run the above with z3 directly, you'll see that it produces exactly the same output each time. In fact, your Python program will also generate the same output if you start it from scratch; but not if you call test_short_expont repeatedly, which is what you are observing.
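
As a quick illustration (just a sketch reusing test_short_expont from the question): calling the function several times in one interpreter session can print different models, while re-running the whole script from scratch prints the same one:

# Repeated in-process calls: the models may differ from call to call,
# even though the asserted constraints are identical.
for _ in range(3):
    print(test_short_expont())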

There was a related question on the GitHub z3 tracker; see: https://github.com/Z3Prover/z3/issues/6679

Bottom line: if you want determinism, don't use Python. Alternatively, use Python to generate the SMTLib, save it, and call z3 directly on that file. (You can, of course, still use Python to invoke the external z3 program if you like.) Otherwise, you won't get determinism.
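
For example, here is a minimal sketch of that workflow; the file name query.smt2 and the assumption that the z3 binary is on your PATH are just illustrative:

import subprocess
from z3 import *

float_type = Float32()
solver = Solver()
fp_5 = FP('fp_5', float_type)
fp_10 = FP('fp_10', float_type)
solver.add(fpEQ(fpAdd(RNE(), fp_5, fp_5), fp_10))

# Dump the assertions as SMTLib and append the commands we want z3 to execute.
with open('query.smt2', 'w') as f:
    f.write(solver.sexpr())
    f.write('\n(check-sat)\n(get-model)\n')

# Run the external z3 binary on the generated file.
result = subprocess.run(['z3', 'query.smt2'], capture_output=True, text=True)
print(result.stdout)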

I should add that there is another source of non-determinism that comes with building regressions like this: what happens if you upgrade z3 (for instance, compiling from the GitHub master). That, I'm afraid, is unavoidable; i.e., you'll get different results depending on which version of z3 you use. The only way to get determinism in that scenario is to arrange for your test to have only one possible satisfying model. (Of course, that defeats the whole purpose of testing.)

The best idea here is not to rely on determinism, but rather to have your test suite get the model and check, on the Python side, that it is a valid model. That way, even if the model is different, you can rest assured the solver is still doing the right thing. Of course, without knowing your exact requirements/setup, that might be a tall order.
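
For the example in the question, that check could be as simple as evaluating the asserted constraint under whatever model z3 returns (a sketch, not tied to any particular expected model):

from z3 import *

float_type = Float32()
solver = Solver()
fp_5 = FP('fp_5', float_type)
fp_10 = FP('fp_10', float_type)
constraint = fpEQ(fpAdd(RNE(), fp_5, fp_5), fp_10)
solver.add(constraint)

assert solver.check() == sat
model = solver.model()

# Instead of comparing the model against hard-coded values, verify that
# the constraint itself evaluates to true under the returned model.
assert is_true(model.evaluate(constraint, model_completion=True))

This way the regression stays green even if a future z3 version returns a different (but equally valid) model.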
