Golang upload Http request FormFile to Amazon S3

Question
I'm creating a microservice to handle attachment uploads to Amazon S3. What I'm trying to achieve is to accept a file and then store it directly in my Amazon S3 bucket. My current function:
    func upload_handler(w http.ResponseWriter, r *http.Request) {
        file, header, err := r.FormFile("attachment")
        if err != nil {
            fmt.Fprintln(w, err)
            return
        }
        defer file.Close()
        fileSize, err := file.Seek(0, 2) // 2 = from end
        if err != nil {
            panic(err)
        }
        fmt.Println("File size : ", fileSize)
        bytes := make([]byte, fileSize)
        // read into buffer
        buffer := bufio.NewReader(file)
        _, err = buffer.Read(bytes)
        auth := aws.Auth{
            AccessKey: "XXXXXXXXXXX",
            SecretKey: "SECRET_KEY_HERE",
        }
        client := s3.New(auth, aws.EUWest)
        bucket := client.Bucket("attachments")
        err = bucket.Put(header.Filename, bytes, header.Header.Get("Content-Type"), s3.ACL("public-read"))
        if err != nil {
            fmt.Println(err)
            os.Exit(1)
        }
    }
The problem is that the files stored in S3 are all corrupted. After a quick check, it seems that the file payload is not being read as bytes.
How can I convert the file to bytes and store it correctly in S3?
Answer 1
Score: 7
Use ioutil.ReadAll:

    // Seek back to the start first: the earlier Seek(0, 2) left the offset at EOF.
    if _, err := file.Seek(0, 0); err != nil {
        // handle error
    }
    bs, err := ioutil.ReadAll(file)
    // ...
    err = bucket.Put(
        header.Filename,
        bs,
        header.Header.Get("Content-Type"),
        s3.ACL("public-read"),
    )
Read is a lower-level function with subtle behavior:
> Read reads data into p. It returns the number of bytes read into p. It calls Read at most once on the underlying Reader, hence n may be less than len(p). At EOF, the count will be zero and err will be io.EOF.

So what was probably happening was that some subset of the file data was being written to S3 along with a bunch of 0s.
ioutil.ReadAll works by calling Read over and over again, filling a dynamically expanding buffer, until it reaches the end of the file (so there's no need for the bufio.Reader either).
Also, the Put function will have issues with large files (using ReadAll means the entire file must fit in memory), so you may want to use PutReader instead:
    // Seek back to the start so PutReader streams the whole file.
    if _, err := file.Seek(0, 0); err != nil {
        // handle error
    }
    bucket.PutReader(
        header.Filename,
        file,
        fileSize,
        header.Header.Get("Content-Type"),
        s3.ACL("public-read"),
    )