Limiting the S3 PUT file size using pre-signed URLs
Question
I am generating S3 pre-signed URLs so that the client (a mobile app) can PUT an image directly to S3 instead of going through a service. For my use case, the expiry time of the pre-signed URL needs to be configured for a longer window (10-20 minutes). I therefore want to limit the size of file uploads to S3, so that a malicious attacker cannot upload large files to the S3 bucket. The client will get the URL from a service which has access to the S3 bucket. I am using the AWS Java SDK.
I found that this can be done using POST forms for browser uploads, but how can I do it using just a signed S3 PUT URL?
Answer 1
Score: 1
I was using S3 signed URLs for the first time and was also concerned about this.
I think this whole signed-URL approach is a bit of a pain, because you can't put a maximum object/upload size limit on them.
I think that's something very important for file uploads in general that is just missing.
Without this option you are forced to handle the problem with the expiry time etc. This gets really messy.
But it seems that you can also use S3 buckets with normal POST requests, which support a content-length-range condition in their policy.
So I'll probably exchange my signed URLs for POST routes in the future.
I think for proper, larger applications this is the way to go. (?)
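Since the POST policy is what actually enforces a size cap server-side, here is a minimal sketch of what such a policy document looks like before it is base64-encoded and signed. The bucket name, key prefix, and expiration below are made-up placeholders, not values from the question:

```javascript
// Sketch of an S3 POST policy with a size cap. All values are placeholders.
const policy = {
  expiration: "2030-01-01T00:00:00Z", // policy expiry (placeholder)
  conditions: [
    { bucket: "my-example-bucket" },        // hypothetical bucket name
    ["starts-with", "$key", "uploads/"],    // restrict the key prefix
    ["content-length-range", 0, 1048576],   // S3 rejects bodies over 1 MiB
  ],
};

// The client includes this (base64-encoded and signed) in the POST form;
// S3 checks every condition server-side before accepting the upload.
const encoded = Buffer.from(JSON.stringify(policy)).toString("base64");
console.log(encoded);
```

A presigned PUT URL has no equivalent of content-length-range, which is why switching to POST routes helps here.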
What might help with your issue:
In the JavaScript SDK there is a method/function that gets you only the metadata of an S3 object (including the file size) without downloading the whole file.
It's called s3.headObject().
I think that after the upload is done, it takes some time for AWS to process the newly uploaded file before it is available in your bucket.
What I did was set a timer after each upload to check the file size, and if it is bigger than 1 MB, delete the file.
I think for production you want to log that somewhere in a DB.
My file names also include the user ID of whoever uploaded the file.
That way, you can block an account after a too-big upload if you want to.
This worked for me in JavaScript:
// Assumes s3 (an AWS.S3 client), Errors, and BUCKET_NAME are defined elsewhere.
function checkS3(key) {
  // headObject() returns only the object's metadata (ContentLength etc.)
  // without downloading the body.
  const headParams = { Bucket: BUCKET_NAME, Key: key };
  return new Promise((resolve, reject) => {
    s3.headObject(headParams, (err, metadata) => {
      if (err && ["NotFound", "Forbidden"].indexOf(err.code) > -1) {
        // Object not there (or not visible) yet: reject so the caller can retry.
        return reject(err);
      } else if (err) {
        const e = Object.assign({}, Errors.SOMETHING_WRONG, { err });
        return reject(e);
      }
      return resolve(metadata);
    });
  });
}
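The head-then-delete step described above boils down to a small size check on the metadata that checkS3() resolves with. A minimal sketch, where isOversized() is a hypothetical helper name and the 1 MB limit mirrors the answer:

```javascript
const MAX_BYTES = 1024 * 1024; // 1 MB cap, as in the answer above

// Hypothetical helper: decide from headObject() metadata whether an
// upload exceeded the cap and should be deleted (e.g. via s3.deleteObject).
function isOversized(metadata, maxBytes = MAX_BYTES) {
  // headObject() reports the object size in bytes as ContentLength.
  return metadata.ContentLength > maxBytes;
}

console.log(isOversized({ ContentLength: 2 * 1024 * 1024 })); // true: 2 MB > 1 MB
console.log(isOversized({ ContentLength: 512 * 1024 }));      // false: 512 KB is fine
```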