Why does tf.train.FloatList have rounding errors?


Question

The following code shows that converting a Python float to a tf.train.FloatList loses precision. My understanding was that both native Python and TensorFlow store it as float64, so why the difference?

import tensorflow as tf

x = 2.3
lst = tf.train.FloatList(value=[x])
reloaded = lst.value[0]  # 2.299999952316284

Answer 1

Score: 1

A FloatList contains floats, as in the protocol buffer float type, which is 32-bit. If you look at the FloatList documentation, you'll see that value is defined as

repeated float value

which means the value field contains 0 or more 32-bit protobuf float values.

If it were 64-bit floats, it would say

repeated double value

but it doesn't say that.
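You can reproduce the effect without TensorFlow at all. A minimal sketch using Python's standard struct module to round-trip 2.3 through an IEEE-754 single-precision (32-bit) float, mimicking what storing the value into a protobuf float field does:

```python
import struct

x = 2.3  # a native Python float is IEEE-754 double precision (float64)

# Pack x into 4 bytes as a 32-bit float, then unpack it back into a
# Python float. The nearest representable float32 to 2.3 is slightly
# smaller, so precision is lost exactly as in FloatList.
as_float32 = struct.unpack('<f', struct.pack('<f', x))[0]

print(as_float32)   # 2.299999952316284
print(as_float32 == x)  # False
```

The same value comes back from lst.value[0] in the question's snippet, which confirms the truncation happens at the 64-bit-to-32-bit conversion, not anywhere else in TensorFlow.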


huangapple
  • Posted on 2023-03-09 18:26:44
  • Please keep this link when reposting: https://go.coder-hub.com/75683306.html