base64 decoder (io.Reader implementation) misbehaviour

Question

I have tried, within a for loop, to re-declare/assign a base64 decoder, and used os.Seek to return to the beginning of the file at the end of each iteration, so that the called function (PrintBytes in this test case) can process the file from beginning to end again and again throughout the for loop.

Here is my (I'm sure terribly un-idiomatic) code, which fails to read the 2nd byte into the []byte of length 2 and capacity 2 during the second iteration of the main for loop in main():

package main

import (
	"encoding/base64"
	"io"
	"log"
	"net/http"
	"os"
)

var (
	remote_file string = "http://cryptopals.com/static/challenge-data/6.txt"
	local_file  string = "secrets_01_06.txt"
)

func main() {
	f, err := os.Open(local_file)
	if err != nil {
		DownloadFile(local_file, remote_file)
		f, err = os.Open(local_file)
		if err != nil {
			log.Fatal(err)
		}
	}
	defer f.Close()

	for blocksize := 1; blocksize <= 5; blocksize++ {
		decoder := base64.NewDecoder(base64.StdEncoding, f)
		PrintBytes(decoder, blocksize)
		_, err := f.Seek(0, 0)
		if err != nil {
			log.Fatal(err)
		}
	}
}

func PrintBytes(reader io.Reader, blocksize int) {
	block := make([]byte, blocksize)
	for {
		n, err := reader.Read(block)
		if err != nil && err != io.EOF {
			log.Fatal(err)
		}
		if n != blocksize {
			log.Printf("n=%d\tblocksize=%d\tbreaking...", n, blocksize)
			break
		}
		log.Printf("%x\tblocksize=%d", block, blocksize)
	}
}

func DownloadFile(local string, url string) {
	f, err := os.Create(local)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	resp, err := http.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	_, err = io.Copy(f, resp.Body)
	if err != nil {
		log.Fatal(err)
	}
}

The output from this code can be viewed here: https://gist.github.com/tomatopeel/b8e2f04179c7613e2a8c8973a72ec085

It is this behaviour that I don't understand: https://gist.github.com/tomatopeel/b8e2f04179c7613e2a8c8973a72ec085#file-bad_reader_log-L5758

I was expecting it to simply read the file into the 2-byte slice two bytes at a time, from beginning to end. Why does it read only 1 byte here?


Answer 1

Score: 0

This is not a problem with encoding/base64. When using an io.Reader, there is no guarantee that the number of bytes read equals the buffer size (blocksize in your example code). The documentation states:

> Read reads up to len(p) bytes into p. It returns the number of bytes read (0 <= n <= len(p)) and any error encountered. Even if Read returns n < len(p), it may use all of p as scratch space during the call. If some data is available but not len(p) bytes, Read conventionally returns what is available instead of waiting for more.

In your example, change PrintBytes to:

func PrintBytes(reader io.Reader, blocksize int) {
	block := make([]byte, blocksize)
	for {
		n, err := reader.Read(block)
		// Process the data first: n may be > 0 even when err != nil.
		if n > 0 {
			log.Printf("%x\tblocksize=%d", block[:n], blocksize)
		}
		// Then check for errors.
		if err != nil {
			if err != io.EOF {
				log.Fatal(err)
			}
			break // io.EOF: done reading
		} else if n == 0 {
			// A (0, nil) return means nothing happened; it is not an error.
			log.Printf("WARNING: read returned 0, nil")
		}
	}
}

Update:

For correct usage of io.Reader, the code was modified to always process the data when n > 0, even if an error occurred.


huangapple
  • Posted on 2017-05-30 21:36:31
  • Please keep this link when reposting: https://go.coder-hub.com/44263835.html