binary.Uvarint returns a different int value than expected

Question

I have a length-4 byte slice storing an int value, like this:

[159 124 0 0]

Then I tried val, encodeBytes := binary.Uvarint(slice), but got the wrong val:

val = 15903, encodeBytes = 2

The correct val should be 31903. What's wrong with it?

Here is the code:

http://play.golang.org/p/kvhu4fNOag

Answer 1

Score: 1

For example,

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	// Interpret the 4 bytes as a little-endian unsigned 32-bit integer:
	// 159 + 124<<8 + 0<<16 + 0<<24 = 31903.
	bytes := []byte{159, 124, 0, 0}
	integer := int(binary.LittleEndian.Uint32(bytes))
	fmt.Println(bytes, integer)
}

Output:

[159 124 0 0] 31903
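
This works because little-endian order stores the least significant byte first, so the value is 159 + 124·256 + 0·65536 + 0·16777216 = 31903.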

Answer 2

Score: 1

From the expected result, it sounds like you're trying to decode a little-endian 32-bit integer. The binary.Uvarint function is the wrong one for the job, though, since it decodes the variable-length integer encoding used by the Protocol Buffers specification.

Instead, try using binary.LittleEndian.Uint32():

val := binary.LittleEndian.Uint32(slice)
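
For completeness, a runnable version of that call (a minimal sketch; note that binary.LittleEndian.Uint32 panics if the slice holds fewer than 4 bytes, so the length check here is a defensive addition of mine):

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	slice := []byte{159, 124, 0, 0}
	if len(slice) < 4 {
		fmt.Println("slice too short for a uint32")
		return
	}
	val := binary.LittleEndian.Uint32(slice)
	fmt.Println(val) // 31903
}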

Answer 3

Score: 1

According to the binary package documentation, the encoding used to interpret the byte sequence is the varint encoding defined by the Protocol Buffers specification. It specifies that bytes are interpreted as:

> Each byte in a varint, except the last byte, has the most significant
> bit (msb) set – this indicates that there are further bytes to come.
> The lower 7 bits of each byte are used to store the two's complement
> representation of the number in groups of 7 bits, least significant
> group first.
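
In code, the quoted rule amounts to roughly the following (a simplified sketch of what binary.Uvarint does; the name decodeUvarint is mine, and overflow handling is omitted):

// decodeUvarint reads a varint from b: each byte contributes its low
// 7 bits, least significant group first; a clear MSB marks the last byte.
func decodeUvarint(b []byte) (v uint64, n int) {
	var shift uint
	for i, x := range b {
		v |= uint64(x&0x7f) << shift // keep only the low 7 bits
		if x&0x80 == 0 {             // MSB clear: this is the final byte
			return v, i + 1
		}
		shift += 7
	}
	return 0, 0 // input ended before a terminating byte
}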

The binary representation of [159 124 0 0] is:

1001 1111, 0111 1100, 0000 0000, 0000 0000

The most significant bit (MSB) of the first byte is set, so the second byte will also be interpreted. The second byte's MSB is not set, so the remaining bytes are ignored.

Dropping the MSB of each interpreted byte leaves these bits:

001 1111, 111 1100

These two groups are stored least significant group first, so they are reversed before being interpreted as the number:

111 1100, 001 1111

Concatenated:

0011 1110 0001 1111

Converting this back to decimal, we get:

1 + 2 + 4 + 8 + 16 + 0 + 0 + 0 + 0 + 512 + 1024 + 2048 + 4096 + 8192 = 15903

As James's and peterSO's posts indicate, you probably want to use binary.LittleEndian.Uint32 instead.
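
As a quick check of the walkthrough above, the following sketch decodes the slice with binary.Uvarint and then shows, via binary.PutUvarint, that 31903 would be encoded as entirely different bytes:

package main

import (
	"encoding/binary"
	"fmt"
)

func main() {
	slice := []byte{159, 124, 0, 0}

	// Decoding as a varint consumes 2 bytes and yields 15903, as derived above.
	v, n := binary.Uvarint(slice)
	fmt.Println(v, n) // 15903 2

	// Encoding 31903 as a varint produces a different byte sequence entirely.
	buf := make([]byte, binary.MaxVarintLen64)
	n = binary.PutUvarint(buf, 31903)
	fmt.Println(buf[:n]) // [159 249 1]
}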
