Is there a concurrent read/write limit on an S3 file?


Question

Is there a limit on the number of simultaneous/concurrent read/write operations on a single file stored in AWS S3? I'm thinking of designing a solution that requires parallel processing of a large amount of data stored in a single file, which means there will be multiple read/write operations at any given point in time. I want to understand whether there is a limit for this. Consistency of the data is not a requirement for me.

Answer 1

Score: 2


S3 doesn't sound like the ideal service for your requirement. S3 is object storage. This is an oversimplification, but it basically means that you're dealing with the entire file. Clients can't reach into S3 and read or write parts of an object in place.

When a client "reads" a file on S3, it essentially retrieves a copy of that file into the memory of the client device and reads it from there. Because of that, S3 can handle thousands of parallel reads (up to 5,500 GET/HEAD requests per second per prefix, according to this announcement).
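To make that concrete, here is a minimal sketch of what a "read" looks like, assuming the AWS SDK for Go v2 (the bucket and key names are placeholders): each GET streams its own copy of the object, or a byte range of it, back to the caller.

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()

	// Load credentials/region from the default chain (env vars, shared config, etc.).
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	// A "read" is a GET: S3 streams a copy of the object (here, just the
	// first 1 MiB via a Range header) to this client. Nothing is read in place.
	out, err := client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String("example-bucket"), // placeholder
		Key:    aws.String("big-data-file"),  // placeholder
		Range:  aws.String("bytes=0-1048575"),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer out.Body.Close()

	data, err := io.ReadAll(out.Body) // the caller's private copy of that range
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("read %d bytes\n", len(data))
}
```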

Similarly, writing to S3 means creating a new copy/version of the object. There is no way to write to an object in place. So while S3 can support a large number of concurrent writes in general, multiple clients can't write into the same stored copy of a file.
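And a corresponding sketch of the write side, under the same assumptions (AWS SDK for Go v2, placeholder bucket and key): a "write" uploads a complete new object body; there is no API call that patches bytes inside the stored object.

```go
package main

import (
	"bytes"
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	// "Writing" uploads a complete new object body under the key; if the key
	// already exists, this replaces it (or adds a new version on a versioned
	// bucket). There is no API to modify a byte range of an existing object.
	body := []byte("entire new contents of the object")
	_, err = client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: aws.String("example-bucket"), // placeholder
		Key:    aws.String("big-data-file"),  // placeholder
		Body:   bytes.NewReader(body),
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("uploaded a full replacement copy")
}
```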

EFS might fit your requirement, but I think a database designed for this sort of performance would be a better option.

Answer 2

Score: 2


Reads: Multiple concurrent reads are fine. The request limit is 5,500 GET/HEAD requests per second per partitioned prefix; see the request-limits section of the S3 documentation for more information.
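For parallel processing of a single large object, a common pattern is to issue many independent ranged GETs. A rough sketch, assuming the AWS SDK for Go v2 and placeholder bucket/key names (HeadObject is used here only to learn the object size; ContentLength is a *int64 in recent SDK versions):

```go
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"sync"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

const chunkSize = int64(8 * 1024 * 1024) // 8 MiB per ranged GET

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)

	bucket, key := "example-bucket", "big-data-file" // placeholders

	// Find the object size so the file can be split into byte ranges.
	head, err := client.HeadObject(ctx, &s3.HeadObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		log.Fatal(err)
	}
	size := aws.ToInt64(head.ContentLength)

	// One goroutine per chunk; each issues its own GET with a Range header.
	// Every GET is an independent request, so they proceed in parallel.
	var wg sync.WaitGroup
	for start := int64(0); start < size; start += chunkSize {
		end := start + chunkSize - 1
		if end >= size {
			end = size - 1
		}
		wg.Add(1)
		go func(start, end int64) {
			defer wg.Done()
			out, err := client.GetObject(ctx, &s3.GetObjectInput{
				Bucket: aws.String(bucket),
				Key:    aws.String(key),
				Range:  aws.String(fmt.Sprintf("bytes=%d-%d", start, end)),
			})
			if err != nil {
				log.Printf("range %d-%d: %v", start, end, err)
				return
			}
			defer out.Body.Close()
			n, _ := io.Copy(io.Discard, out.Body) // process the chunk here instead
			fmt.Printf("range %d-%d: read %d bytes\n", start, end, n)
		}(start, end)
	}
	wg.Wait()
}
```

Each ranged request counts against the 5,500 GET/HEAD per second per prefix limit, so even against a single key this scales to thousands of parallel readers.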

Writes: Object PUT and DELETE requests (3,500 requests per second per partitioned prefix) have strong read-after-write consistency. The last writer wins and there is no locking:

> If two PUT requests are simultaneously made to the same key, the request with the latest timestamp wins.
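A small sketch of what that means in practice, again assuming the AWS SDK for Go v2 with placeholder names: two concurrent PUTs to the same key both succeed, and a subsequent read returns whichever body S3 recorded last.

```go
package main

import (
	"bytes"
	"context"
	"fmt"
	"io"
	"log"
	"sync"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg)
	bucket, key := "example-bucket", "shared-key" // placeholders

	// Two "clients" write different bodies to the same key concurrently.
	// Both PUTs succeed; S3 does not lock the object or merge the bodies.
	var wg sync.WaitGroup
	for _, body := range []string{"written by worker A", "written by worker B"} {
		wg.Add(1)
		go func(body string) {
			defer wg.Done()
			if _, err := client.PutObject(ctx, &s3.PutObjectInput{
				Bucket: aws.String(bucket),
				Key:    aws.String(key),
				Body:   bytes.NewReader([]byte(body)),
			}); err != nil {
				log.Printf("put failed: %v", err)
			}
		}(body)
	}
	wg.Wait()

	// Read-after-write is strongly consistent, so this GET returns whichever
	// of the two PUTs S3 recorded last; the other write is simply overwritten.
	out, err := client.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	})
	if err != nil {
		log.Fatal(err)
	}
	defer out.Body.Close()
	winner, _ := io.ReadAll(out.Body)
	fmt.Printf("final object contents: %q\n", winner)
}
```

Since there is no lock, any coordination has to happen on the client side, for example by having each worker write to its own key rather than a shared one.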

The docs have several concurrent application examples. See also Best practices design patterns: optimizing Amazon S3 performance.

EFS is a file-based storage option for concurrent-access use cases. See the Comparing Amazon Cloud Storage table in the docs.

Posted by huangapple on 2023-01-09 00:18:54 · Source: https://go.coder-hub.com/75049397.html