Stream file upload to AWS S3 using Go

Question

I want to stream a multipart/form-data (large) file upload directly to AWS S3 with as little memory and file disk footprint as possible. How can I achieve this? Resources online only explain how to upload a file and store it locally on the server.

Answer 1

Score: 40

You can use the upload manager to stream the file and upload it; the comments in its source code explain the details. You can also configure parameters such as the part size, concurrency, and maximum upload parts. Below is a sample for reference.

package main

import (
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

var filename = "file_name.zip"
var myBucket = "myBucket"
var myKey = "file_name.zip"
var accessKey = ""
var accessSecret = ""

func main() {
	var awsConfig *aws.Config
	if accessKey == "" || accessSecret == "" {
		// Load the default credentials.
		awsConfig = &aws.Config{
			Region: aws.String("us-west-2"),
		}
	} else {
		awsConfig = &aws.Config{
			Region:      aws.String("us-west-2"),
			Credentials: credentials.NewStaticCredentials(accessKey, accessSecret, ""),
		}
	}

	// The session the S3 uploader will use.
	sess := session.Must(session.NewSession(awsConfig))

	// Create an uploader with the session and default options:
	//uploader := s3manager.NewUploader(sess)

	// Create an uploader with the session and custom options.
	uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
		u.PartSize = 5 * 1024 * 1024 // The minimum/default allowed part size is 5MB
		u.Concurrency = 2            // default is 5
	})

	// Open the file.
	f, err := os.Open(filename)
	if err != nil {
		fmt.Printf("failed to open file %q, %v", filename, err)
		return
	}
	defer f.Close()

	// Upload the file to S3.
	result, err := uploader.Upload(&s3manager.UploadInput{
		Bucket: aws.String(myBucket),
		Key:    aws.String(myKey),
		Body:   f,
	})

	// In case the upload fails.
	if err != nil {
		fmt.Printf("failed to upload file, %v", err)
		return
	}
	fmt.Printf("file uploaded to %s\n", result.Location)
}
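
Since the question asks about streaming a multipart/form-data upload with no disk footprint, here is a minimal sketch of wiring an incoming HTTP request straight into the same uploader. The /upload route, the "file" field name, and the bucket/region are placeholder assumptions; because the part is a plain io.Reader, the uploader reads it in PartSize chunks, so memory use stays bounded and nothing is spooled to disk.

package main

import (
	"fmt"
	"log"
	"net/http"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func uploadHandler(w http.ResponseWriter, r *http.Request) {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
	uploader := s3manager.NewUploader(sess)

	// MultipartReader streams the request body part by part instead of
	// parsing the whole form into memory or temp files.
	mr, err := r.MultipartReader()
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	for {
		part, err := mr.NextPart()
		if err != nil { // io.EOF once there are no more parts
			break
		}
		if part.FormName() != "file" { // "file" is a placeholder field name
			continue
		}
		// part is an io.Reader, so the uploader consumes it as it arrives.
		result, err := uploader.Upload(&s3manager.UploadInput{
			Bucket: aws.String("myBucket"),
			Key:    aws.String(part.FileName()),
			Body:   part,
		})
		if err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		fmt.Fprintf(w, "uploaded to %s\n", result.Location)
	}
}

func main() {
	http.HandleFunc("/upload", uploadHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}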

Answer 2

Score: 10

You can do this using minio-go:

n, err := s3Client.PutObject("bucket-name", "objectName", object, size, "application/octet-stream")

PutObject() automatically does a multipart upload internally. Example
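
The signature above appears to be from an older minio-go release. For reference, here is a hedged sketch of the same idea against the current minio-go v7 API; the endpoint, credentials, and names are placeholders. Passing a size of -1 tells the client the length is unknown, so it streams the reader as a multipart upload.

package main

import (
	"context"
	"log"
	"os"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	// Placeholder endpoint and credentials.
	client, err := minio.New("s3.amazonaws.com", &minio.Options{
		Creds:  credentials.NewStaticV4("ACCESS_KEY", "SECRET_KEY", ""),
		Secure: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	f, err := os.Open("file_name.zip")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Size -1 marks the length as unknown, so the client falls back to
	// streaming the reader as a multipart upload.
	info, err := client.PutObject(context.Background(), "bucket-name", "objectName",
		f, -1, minio.PutObjectOptions{ContentType: "application/octet-stream"})
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("uploaded %d bytes", info.Size)
}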

Answer 3

Score: 1

Another option is to mount the S3 bucket with goofys and then stream your writes to the mountpoint. goofys does not buffer the content locally so it will work fine with large files.
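
As a rough sketch of that approach, assuming the bucket is already mounted (for example with goofys myBucket /mnt/s3; the mountpoint and path below are placeholders), a streaming write is just a copy into a file under the mountpoint:

package main

import (
	"io"
	"log"
	"os"
)

func main() {
	// /mnt/s3 is a placeholder mountpoint for the goofys-mounted bucket.
	dst, err := os.Create("/mnt/s3/file_name.zip")
	if err != nil {
		log.Fatal(err)
	}
	defer dst.Close()

	// Any io.Reader works here (an HTTP body, a multipart part, stdin...).
	// goofys forwards the write to S3 without buffering it all locally.
	if _, err := io.Copy(dst, os.Stdin); err != nil {
		log.Fatal(err)
	}
}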

Answer 4

Score: 1

I was trying to do this with the aws-sdk v2 package, so I had to change @maaz's code a bit. I'm leaving it here for others:

import (
	"context"
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/feature/s3/manager"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

type TokenMeta struct {
	AccessToken  string
	SecretToken  string
	SessionToken string
	BucketName   string
}

// S3Client carries the token meta and is the receiver for StreamUpload.
type S3Client struct {
	TokenMeta TokenMeta
}

func (s3Client S3Client) StreamUpload(fileToUpload string, fileKey string) error {
	accessKey := s3Client.TokenMeta.AccessToken
	secretKey := s3Client.TokenMeta.SecretToken

	awsConfig, err := config.LoadDefaultConfig(context.TODO(),
		config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(accessKey, secretKey, s3Client.TokenMeta.SessionToken)),
	)
	if err != nil {
		return fmt.Errorf("error creating aws config: %v", err)
	}

	client := s3.NewFromConfig(awsConfig)
	uploader := manager.NewUploader(client, func(u *manager.Uploader) {
		u.PartSize = 5 * 1024 * 1024
		u.BufferProvider = manager.NewBufferedReadSeekerWriteToPool(10 * 1024 * 1024)
	})

	f, err := os.Open(fileToUpload)
	if err != nil {
		return fmt.Errorf("failed to open fileToUpload %q, %v", fileToUpload, err)
	}
	defer func(f *os.File) {
		if err := f.Close(); err != nil {
			fmt.Printf("error closing fileToUpload: %v\n", err)
		}
	}(f)

	inputObj := &s3.PutObjectInput{
		Bucket: aws.String(s3Client.TokenMeta.BucketName),
		Key:    aws.String(fileKey),
		Body:   f,
	}
	uploadResult, err := uploader.Upload(context.TODO(), inputObj)
	if err != nil {
		return fmt.Errorf("failed to upload fileToUpload, %v", err)
	}

	fmt.Printf("%s uploaded to %s\n", fileToUpload, uploadResult.Location)
	return nil
}
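
A hypothetical call site for the method above (the token values, file name, and key are placeholders):

client := S3Client{TokenMeta: TokenMeta{
	AccessToken:  "ACCESS_KEY",
	SecretToken:  "SECRET_KEY",
	SessionToken: "",
	BucketName:   "myBucket",
}}
if err := client.StreamUpload("file_name.zip", "uploads/file_name.zip"); err != nil {
	fmt.Println(err)
}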

Answer 5

Score: -2

I didn't try it, but if I were you I'd try the multipart upload option.

You can read the multipart upload doc.

Here is a Go example for multipart upload and multipart upload abort.
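
Since those links cover the low-level API, here is a minimal sketch of that flow with aws-sdk-go v1, including the abort on failure. The bucket, key, and region are placeholders; the s3manager uploader from answer 1 wraps essentially the same calls.

package main

import (
	"bytes"
	"fmt"
	"io"
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-west-2")}))
	svc := s3.New(sess)
	bucket, key := aws.String("myBucket"), aws.String("file_name.zip")

	f, err := os.Open("file_name.zip")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// 1. Start the multipart upload and remember its ID.
	create, err := svc.CreateMultipartUpload(&s3.CreateMultipartUploadInput{Bucket: bucket, Key: key})
	if err != nil {
		log.Fatal(err)
	}

	// 2. Upload 5 MB parts, collecting each part's ETag for the final assembly.
	var parts []*s3.CompletedPart
	buf := make([]byte, 5*1024*1024)
	for partNum := int64(1); ; partNum++ {
		n, rerr := io.ReadFull(f, buf)
		if n == 0 {
			break
		}
		resp, uerr := svc.UploadPart(&s3.UploadPartInput{
			Bucket:     bucket,
			Key:        key,
			UploadId:   create.UploadId,
			PartNumber: aws.Int64(partNum),
			Body:       bytes.NewReader(buf[:n]),
		})
		if uerr != nil {
			// 3a. On failure, abort so S3 discards the already-stored parts.
			svc.AbortMultipartUpload(&s3.AbortMultipartUploadInput{
				Bucket: bucket, Key: key, UploadId: create.UploadId,
			})
			log.Fatal(uerr)
		}
		parts = append(parts, &s3.CompletedPart{ETag: resp.ETag, PartNumber: aws.Int64(partNum)})
		if rerr != nil { // io.EOF or io.ErrUnexpectedEOF: that was the last part
			break
		}
	}

	// 3b. On success, tell S3 to assemble the parts into one object.
	_, err = svc.CompleteMultipartUpload(&s3.CompleteMultipartUploadInput{
		Bucket:          bucket,
		Key:             key,
		UploadId:        create.UploadId,
		MultipartUpload: &s3.CompletedMultipartUpload{Parts: parts},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("multipart upload complete")
}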
