Weird behaviour in GDB when printing IEEE Format Double Precision
Question
I am currently learning about the stack layout of frames on x86 using GNU gdb (Ubuntu 9.2-0ubuntu1~20.04.1). While playing around with IEEE754 double-precision values, I ran into some behaviour that I cannot explain. Currently I have the following values on my stack.
(gdb) x/2wx 0xffffca74
0xffffca74: 0x9ba5e354 0x400920c4
That is, the two 32-bit words at 0xffffca74 and 0xffffca78 together hold an IEEE754 double-precision value, 0x400920C49BA5E354_16 == 3.141_10. Now I tried to print the value of the float in gdb and got the following output.
(gdb) x/f 0xffffca74
0xffffca74: -2.74438676e-22
(gdb) x/f 0xffffca74
0xffffca74: -2.74438676e-22
(gdb) x/b 0xffffca74
0xffffca74: 84
(gdb) x/f 0xffffca74
0xffffca74: 3.141
So at first, GDB treated the value at 0xffffca74 as an IEEE754 single-precision number. But after printing a single byte at that location and running the command again, it suddenly interprets it correctly as a double-precision number. How does it do that? Does it have some sort of automatic type recognition?
I tried to find information about this in the documentation, but unfortunately found nothing on this behaviour. I would only have expected it to print the correct result when explicitly querying a giant word.
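For reference, both printed values can be reproduced outside GDB by reinterpreting the raw bytes. A minimal Python sketch using the standard struct module (the two words are taken verbatim from the session above):

```python
import struct

# The 8 stack bytes, little-endian: low word 0x9ba5e354 at 0xffffca74,
# high word 0x400920c4 at 0xffffca78.
raw = struct.pack("<II", 0x9BA5E354, 0x400920C4)

# All 8 bytes as a 64-bit IEEE754 double: the expected 3.141.
(as_double,) = struct.unpack("<d", raw)

# Only the first 4 bytes as a 32-bit IEEE754 single: the puzzling tiny
# negative number GDB printed (sign bit set, exponent 55 - 127 = -72).
(as_single,) = struct.unpack("<f", raw[:4])

print(as_double)   # 3.141
print(as_single)   # about -2.744e-22
```

This shows the mystery is purely one of width: the same starting address yields -2.74438676e-22 when read as a 4-byte single and 3.141 when read as an 8-byte double.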
Answer 1
Score: 2
GDB's x command is documented as taking a size (b, h, w, g) and a format (o, x, d, u, t, f, a, i, c, s, z) argument. They are "sticky" from one x command to the next; if you omit a size or format, the last one specified is reused.
But there's some undocumented behavior: not all combinations of size and format make sense. In particular, 'f' is only supported for the 'w' and 'g' sizes. If the current size is neither 'w' nor 'g', GDB changes it to 'g'. (This is done in the function decode_format in printcmd.c.)
(gdb) x/wx 0x555555558010
0x555555558010: 0x9ba5e354
(gdb) x/f 0x555555558010
0x555555558010: -2.74438676e-22 # 32-bit float
(gdb) x/bx 0x555555558010
0x555555558010: 0x54
(gdb) x/x 0x555555558010
0x555555558010: 0x54
(gdb) x/f 0x555555558010
0x555555558010: 3.141 # 64-bit float
(gdb) x/x 0x555555558010
0x555555558010: 0x400920c49ba5e354 # default size was changed to 'g'
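The sticky-size rule described above can be sketched as a few lines of Python. This is an illustrative model of the behaviour, not GDB's actual decode_format code:

```python
# Minimal sketch (assumed model, not GDB source) of the sticky size/format
# rule for x/ commands, including the undocumented 'f' promotion to 'g'.
def decode_format(spec, last_size, last_format):
    """Return (size, format) after parsing a spec like 'wx', 'f', or 'bx'."""
    size, fmt = last_size, last_format     # sticky defaults from last command
    for ch in spec:
        if ch in "bhwg":
            size = ch                      # explicit size letter
        else:
            fmt = ch                       # explicit format letter
    # Undocumented: 'f' only works with 'w' or 'g'; anything else becomes 'g'.
    if fmt == "f" and size not in ("w", "g"):
        size = "g"
    return size, fmt

# Replay the session from the answer:
state = decode_format("wx", "w", "x")      # x/wx -> ('w', 'x')
state = decode_format("f", *state)         # x/f  -> ('w', 'f')  32-bit float
state = decode_format("bx", *state)        # x/bx -> ('b', 'x')
state = decode_format("x", *state)         # x/x  -> ('b', 'x')
state = decode_format("f", *state)         # x/f  -> ('g', 'f')  64-bit float
print(state)                               # ('g', 'f')
```

Because the promoted 'g' is written back into the sticky state, the final plain x/x in the session above also prints a giant word, which is the observable clue that the size was silently changed.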