AWS API Gateway and WAF limitations?

Question

I work at a software development house where we usually serve small to medium-sized companies.
As we move toward bigger and bigger companies with heavy compliance requirements, we notice a trend where the following is part of the requirements:

  1. All REST APIs and file uploads must pass through a centralised API gateway and WAF.
  2. The uploaded file sizes (Excel, PDF documents) are well above the API Gateway (10 MB) and WAF (8 KB) size limits.

I don't know whether this is a common requirement out there.

But I don't see how an API gateway and firewall are useful in this scenario, since their hard limits force me not to use these AWS services.

Kindly advise.

Answer 1

Score: 2

Some questions and answers from your comment. These are my own opinions, based on my own experience.

> I know that I can turn off the 8 KB request body limit, but would it expose a security issue?

Yes. WAF won't prevent the security vulnerabilities it is designed to prevent when a request is larger than 8 KB and the compromising part of the request is not in the first 8 KB. I believe it is possible to circumvent WAF by "padding" the front of a request, for example, "padding" the front of a SQL injection attack (which would normally be blocked by WAF) with 8 KB of no-op whitespace.
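To make the size math concrete, here is a minimal sketch of that padding idea. The endpoint URL and field names are hypothetical, and the blocked content is just a placeholder string; the only point is that anything after the first 8 KB of the body falls outside the inspected window.

```python
# Minimal illustration of the "padding" idea above (hypothetical endpoint and
# field names; the sensitive part is a placeholder, not a real payload).
import requests

PADDING = " " * (8 * 1024)  # 8 KB of no-op whitespace

body = {
    "comment": PADDING,  # serialized first, so it fills the inspected 8 KB window
    "query": "<content a WAF rule would normally match>",  # now beyond the first 8 KB
}

# If only the first 8 KB of the request body is inspected, the "query" field
# above is never evaluated against the WAF rules.
requests.post("https://api.example.com/search", json=body, timeout=10)
```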

> Also, yes, I would like to know if it's possible to divide a file upload into several parts, each of which is 10 MB or less so they fit through API Gateway and at the same time do not exceed the 30-second limit?

You could, hypothetically, split arbitrary binary files into 9.9 MB chunks, send those chunks in separate HTTP requests to something like a Lambda behind API Gateway, and have the Lambda deposit the chunks into an S3 bucket. Whenever a "chunk" is deposited, the Lambda could check whether all of the expected "chunks" have already been received and stored in S3. Once all of the expected "chunks" are present, the Lambda could concatenate them, put the result into whatever file storage you want to use (presumably a separate S3 bucket), and then delete the "chunks" in the original S3 bucket.
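A minimal sketch of that chunk-and-reassemble flow is below. The bucket names, the event shape (upload_id, part_number, total_parts, base64-encoded data), and the key layout are all assumptions, and it is not a production implementation.

```python
# Sketch of a Lambda handler behind API Gateway that stores chunks in a
# staging bucket and reassembles them once every expected part has arrived.
import base64
import json

import boto3

s3 = boto3.client("s3")
CHUNK_BUCKET = "my-upload-chunks"   # hypothetical staging bucket
FINAL_BUCKET = "my-final-files"     # hypothetical destination bucket


def handler(event, context):
    body = json.loads(event["body"])
    upload_id = body["upload_id"]
    part = int(body["part_number"])          # 1-based index of this chunk
    total = int(body["total_parts"])
    data = base64.b64decode(body["data"])    # raw bytes of this chunk

    # 1. Deposit this chunk into the staging bucket.
    prefix = f"uploads/{upload_id}/"
    s3.put_object(Bucket=CHUNK_BUCKET, Key=f"{prefix}part-{part:05d}", Body=data)

    # 2. Check whether every expected chunk has arrived.
    listed = s3.list_objects_v2(Bucket=CHUNK_BUCKET, Prefix=prefix)
    keys = sorted(obj["Key"] for obj in listed.get("Contents", []))
    if len(keys) < total:
        return {"statusCode": 202, "body": json.dumps({"received": len(keys)})}

    # 3. All chunks present: concatenate them in order and write the final file.
    assembled = b"".join(
        s3.get_object(Bucket=CHUNK_BUCKET, Key=k)["Body"].read() for k in keys
    )
    s3.put_object(Bucket=FINAL_BUCKET, Key=f"{upload_id}.bin", Body=assembled)

    # 4. Clean up the staging chunks.
    for k in keys:
        s3.delete_object(Bucket=CHUNK_BUCKET, Key=k)

    return {"statusCode": 200, "body": json.dumps({"assembled": f"{upload_id}.bin"})}
```

Note that base64 encoding inflates each chunk by roughly a third, so the raw chunk size has to stay well under the 10 MB payload limit, and very large files would outgrow the in-memory concatenation shown here.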

Seems pretty stupid to go to all of that trouble, but it's possible. Here is an AWS-recommended way to upload files to S3 without going through API Gateway: https://aws.amazon.com/blogs/storage/allowing-external-users-to-securely-and-directly-upload-files-to-amazon-s3/
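That kind of direct-to-S3 upload is typically done with S3 presigned URLs: the client asks a small API for a short-lived upload URL and then PUTs the file straight to S3, so the file body never passes through API Gateway or WAF. A minimal sketch with boto3, assuming a hypothetical bucket name:

```python
# Sketch of issuing an S3 presigned upload URL (hypothetical bucket name).
# The large file itself is PUT directly to S3 by the client.
import boto3

s3 = boto3.client("s3")


def get_upload_url(key: str, expires_in: int = 300) -> str:
    """Return a short-lived URL the client can use to PUT one object."""
    return s3.generate_presigned_url(
        ClientMethod="put_object",
        Params={"Bucket": "my-final-files", "Key": key},
        ExpiresIn=expires_in,
    )


if __name__ == "__main__":
    # The client then uploads with a plain HTTP PUT, for example:
    #   curl -X PUT --upload-file report.pdf "<presigned url>"
    print(get_upload_url("reports/report.pdf"))
```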
