How to force a BatchWriteItem failure
Question
I'm currently writing integration tests for my BatchWriteItem logic using Spock/Groovy. I'm running a Docker container which spins up a real DynamoDB table for this purpose.
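For reference, the client and table handle handed to the service under test are wired against that container roughly as follows. This is only a minimal sketch: the endpoint, region, table name, and helper class are my assumptions, not the actual test configuration.

    import com.amazonaws.client.builder.AwsClientBuilder;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
    import com.amazonaws.services.dynamodbv2.document.DynamoDB;
    import com.amazonaws.services.dynamodbv2.document.Table;

    class LocalDynamoSupport {
        // Client pointed at the container; the endpoint and region are assumptions.
        static AmazonDynamoDB localClient() {
            return AmazonDynamoDBClientBuilder.standard()
                    .withEndpointConfiguration(
                            new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "eu-west-1"))
                    .build();
        }

        // Document-API handle for the test table; the table name is hypothetical.
        static Table testTable(AmazonDynamoDB client) {
            return new DynamoDB(client).getTable("items-test-table");
        }
    }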
This is my logic in Java for BatchWriteItem:
public Promise<Boolean> createItemsInBatch(ClientKey clientKey, String accountId, List<SrItems> srItems) {
    List<Item> items = srItems.stream()
            .map(srItem -> createItemFromSrItem(clientKey, createItemRef(srItem.getId(), accountId), srItem))
            .collect(Collectors.toList());
    List<List<Item>> batchItems = Lists.partition(items, 25);
    var promises = batchItems.stream().map(itemsList -> Blocking.get(() -> {
        TableWriteItems tableWriteItems = new TableWriteItems(table.getTableName());
        tableWriteItems.withItemsToPut(itemsList);
        BatchWriteItemOutcome outcome = dynamoDB.batchWriteItem(tableWriteItems);
        return outcome.getUnprocessedItems().values().stream()
                .flatMap(Collection::stream)
                .collect(Collectors.toList());
    })).collect(Collectors.toList());
    return ParallelPromises.yieldAll(promises).map((List<? extends ExecResult<List<WriteRequest>>> results) -> {
        if (results.isEmpty()) {
            return true;
        } else {
            results.stream().map(Result::getValue).flatMap(Collection::stream).forEach(failure -> {
                var failedItem = failure.getPutRequest().getItem();
                logger.error(append("item", failedItem), "Failed to batch write item");
            });
            return false;
        }
    });
}
And this is my current implementation of the test (happy path):
@Unroll
def "createItemsInBatch - #description"(description, srItemsList, createResult) {
    given:
    def dynamoItemService = new DynamoItemService(realTable, amazonDynamoDBClient1) // the table running in the Docker image + the DynamoDB client associated with it

    when:
    def promised = ExecHarness.yieldSingle {
        dynamoItemService.createItemsInBatch(CLIENT_KEY, 'account-id', srItemsList as List<SrItem>)
    }

    then:
    promised.success == createResult

    where:
    description                                        | srItemsList | createResult
    "single batch req not reaching batch size limit"   | srItems(10) | true
    "double batch req reaching batch size limit"       | srItems(25) | true
    "triple batch req reaching batch size limit"       | srItems(51) | true
}
For context, srItems() is a helper that just creates the given number of distinct items to be injected into the service for the BatchWriteItem request.
What I want now is to be able to test the unhappy path of my logic, i.e. to get some UnprocessedItems back in my outcome, verifying that the code below actually does its job:
BatchWriteItemOutcome outcome = dynamoDB.batchWriteItem(tableWriteItems);
return outcome.getUnprocessedItems().values().stream()
        .flatMap(Collection::stream)
        .collect(Collectors.toList());
Any help would be greatly appreciated.
Answer 1
Score: 1
This is actually quite easy to do: we can force throttling on your DynamoDB table, which will result in UnprocessedItems.
....
Configure your table with 1 WCU and disable auto-scaling. Now run your BatchWriteItem in batches of 25 for a couple of seconds; DynamoDB will begin to throttle requests and will return the throttled items in the UnprocessedItems response, exercising your unhappy path.
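As a concrete starting point, here is a minimal sketch of that setup step using the same v1 SDK as the question. The helper class and method name are mine, and it assumes the table uses provisioned (not on-demand) billing:

    import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
    import com.amazonaws.services.dynamodbv2.model.ProvisionedThroughput;
    import com.amazonaws.services.dynamodbv2.model.UpdateTableRequest;

    class ThrottleTestSupport {
        // Drops the table to 1 RCU / 1 WCU so that sustained 25-item batch
        // writes get throttled and start coming back in UnprocessedItems.
        static void forceLowThroughput(AmazonDynamoDB client, String tableName) {
            client.updateTable(new UpdateTableRequest()
                    .withTableName(tableName)
                    .withProvisionedThroughput(new ProvisionedThroughput(1L, 1L)));
        }
    }

Calling something like this before the unhappy-path feature method, then writing several 25-item batches in a loop, should surface unprocessed items that the false branch of createItemsInBatch can be asserted against.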