NoSQL Workbench commit to DynamoDB incomplete


Question

I use NoSQL Workbench to design (single-table) the PK, SK, and non-key attributes by creating the test data in VS Code as JSON, then importing it into the Workbench. I repeat this over the course of weeks as I build up the key designs. Once I like how it behaves, I create the table in DynamoDB and start coding queries. Twice in the last few years I have run into a situation where, after a while, the Workbench commit to DynamoDB fails to commit the entire test data set. I'm sure the test data is valid JSON. The Workbench pops a dialog that states how many items were not committed. If I delete the new table and re-commit, sometimes the number of items not committed changes. The DynamoDB CloudWatch log is not helpful. Next time it happens, I'll revisit this post and add the log message.
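When it next happens, a quick way to quantify the gap is to count what actually landed in the table and compare it against the local model. A minimal sketch, assuming boto3; the table name, region, and model file path are placeholders, and the "DataModel"/"TableData" layout is inferred from the export format described in this question:

```python
import json
import boto3

TABLE_NAME = "MyTable"      # placeholder: the table the Workbench committed
MODEL_FILE = "model.json"   # placeholder: the exported Workbench model

client = boto3.client("dynamodb", region_name="us-east-1")  # adjust region

# Paginated COUNT scan: totals every item that actually reached DynamoDB.
paginator = client.get_paginator("scan")
committed = sum(
    page["Count"]
    for page in paginator.paginate(TableName=TABLE_NAME, Select="COUNT")
)

# Total the items in the local model export for comparison.
with open(MODEL_FILE) as f:
    model = json.load(f)
local = sum(len(t.get("TableData", [])) for t in model["DataModel"])

print(f"committed={committed} local={local} missing={local - committed}")
```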

This is what I do when that happens: I use the Workbench to create a new model, export its JSON, snip the "NonKeyAttributes" and "TableData" sections out of the old model's JSON, add them to the new model, and re-import; the commit to DynamoDB then takes all the test data.
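That transplant can also be scripted rather than snipped by hand. A minimal sketch, using the key names "NonKeyAttributes" and "TableData" from this question; the "DataModel"/"TableName" nesting of the export is an assumption, and the file names are placeholders:

```python
import json

OLD_EXPORT = "old_model.json"   # placeholder: export of the misbehaving model
NEW_EXPORT = "new_model.json"   # placeholder: export of the fresh model
MERGED = "merged_model.json"    # re-import this file into the Workbench

with open(OLD_EXPORT) as f:
    old = json.load(f)
with open(NEW_EXPORT) as f:
    new = json.load(f)

# Index the old tables by name, then copy only the data sections across,
# leaving the fresh model's metadata untouched.
old_tables = {t["TableName"]: t for t in old["DataModel"]}
for table in new["DataModel"]:
    src = old_tables.get(table["TableName"])
    if src:
        table["NonKeyAttributes"] = src.get("NonKeyAttributes", [])
        table["TableData"] = src.get("TableData", [])

with open(MERGED, "w") as f:
    json.dump(new, f, indent=2)
```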

Since these are single-table designs, the PK and SK are highly overloaded, but I don't see why that would corrupt the model's metadata (which is presumably why a new model fixes it).
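For readers unfamiliar with the term: "overloaded" means the same PK/SK attributes hold different entity types depending on the item, along these purely illustrative lines (not this question's actual schema):

```python
# Illustrative single-table items: one PK/SK pair is overloaded across
# entity types (customer profile, orders, order line items).
items = [
    {"PK": "CUST#123",       "SK": "PROFILE",        "name": "Alice"},
    {"PK": "CUST#123",       "SK": "ORDER#2023-001", "total": 42},
    {"PK": "ORDER#2023-001", "SK": "LINE#1",         "sku": "ABC", "qty": 2},
]
```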

If anyone can shed some light on what might be happening, or suggest an easier way to fix the problem, I'd be grateful.

Answer 1

Score: 1

I believe this happens because NoSQL Workbench defines the table with 1 WCU, which causes throttling when migrating large amounts of data, resulting in some items being dropped.

The team is aware of this edge case, and an upcoming release should resolve the issue.
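If throttling at 1 WCU is indeed the cause, two client-side workarounds seem plausible until the fix ships. A sketch, assuming boto3; MyTable is a placeholder: either switch the committed table to on-demand billing so writes are no longer capped, or bypass the Workbench commit and load the test data with a batch writer, which automatically retries unprocessed (throttled) items.

```python
import boto3

TABLE_NAME = "MyTable"  # placeholder: the table created by the Workbench commit

# Option 1: switch the table to on-demand billing so the 1 WCU provisioned
# cap no longer throttles the bulk load.
client = boto3.client("dynamodb")
client.update_table(TableName=TABLE_NAME, BillingMode="PAY_PER_REQUEST")

# Option 2: load the test data yourself. batch_writer() buffers put_item
# calls and automatically retries any unprocessed (throttled) items.
table = boto3.resource("dynamodb").Table(TABLE_NAME)
sample_items = [  # in practice, parse these from the model's "TableData"
    {"PK": "CUST#123", "SK": "PROFILE", "name": "Alice"},
]
with table.batch_writer() as batch:
    for item in sample_items:
        batch.put_item(Item=item)
```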
