Why does tf.train.FloatList have rounding errors?
Question
The following code shows that when converting a Python float to a tf.train.FloatList, we lose precision. My understanding was that both native Python and TensorFlow store it as float64, so why the difference?
import tensorflow as tf
x = 2.3
lst = tf.train.FloatList(value=[x])
reloaded = lst.value[0] # 2.299999952316284
Answer 1
Score: 1
A FloatList contains floats, as in the protocol buffer float type, which is 32-bit. If you look at the FloatList documentation, you'll see that value is defined as

repeated float value

which means the value field contains 0 or more 32-bit protobuf float values.

If it were 64-bit floats, it would say

repeated double value

but it doesn't say that.
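A quick way to verify the 32-bit explanation is to round-trip the value through an explicit float32 conversion and compare. The sketch below is illustrative rather than part of the original answer; it uses the standard-library struct module for the 32-bit round trip:

import struct

import tensorflow as tf

x = 2.3
reloaded = tf.train.FloatList(value=[x]).value[0]

# 2.3 has no exact binary representation; FloatList stores the nearest
# 32-bit float, which prints as 2.299999952316284 once widened back to a
# Python (64-bit) float.
float32_roundtrip = struct.unpack('f', struct.pack('f', x))[0]

print(reloaded)                       # 2.299999952316284
print(reloaded == float32_roundtrip)  # True: exactly the float32 rounding of 2.3

If full 64-bit precision matters, one possible workaround (an assumption on my part, not something the answer prescribes) is to serialize the raw float64 bytes into a tf.train.BytesList instead of using FloatList:

import numpy as np
import tensorflow as tf

x = 2.3

# Store the raw float64 bytes (native byte order) rather than a 32-bit float.
lst = tf.train.BytesList(value=[np.float64(x).tobytes()])

# Recover the full-precision value from the stored bytes.
restored = np.frombuffer(lst.value[0], dtype=np.float64)[0]
print(restored == x)  # True: no precision lost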