Spring Webflux and Amazon SDK 2.x: S3AsyncClient timeout

Question

I'm implementing a reactive project with Spring Boot 2.3.1, WebFlux, Spring Data with the reactive MongoDB driver, and Amazon SDK 2.14.6.

I have a CRUD operation that persists an entity in MongoDB and must upload a file to S3. I'm using the SDK's reactive method `s3AsyncClient.putObject`, but I'm running into problems. The _CompletableFuture_ throws the following exception:

```plaintext
java.util.concurrent.CompletionException: software.amazon.awssdk.core.exception.ApiCallTimeoutException: Client execution did not complete before the specified timeout configuration: 60000 millis
	at java.base/java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:314) ~[na:na]
	Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException: 
Assembly trace from producer [reactor.core.publisher.MonoMapFuseable] :
	reactor.core.publisher.Mono.map(Mono.java:3054)
	br.com.wareline.waredrive.service.S3Service.uploadFile(S3Service.java:94)
```

The file I'm trying to upload is about 34 KB, a simple text file.

The upload method lives in my `S3Service.java` class, which is autowired into _DocumentoService.java_:

```java
@Component
public class S3Service {

    @Autowired
    private final ConfiguracaoService configuracaoService;

    public Mono<PutObjectResponse> uploadFile(final HttpHeaders headers, final Flux<ByteBuffer> body, final String fileKey, final String cliente) {
        return configuracaoService.findByClienteId(cliente)
                .switchIfEmpty(Mono.error(new ResponseStatusException(HttpStatus.NOT_FOUND, String.format("Configuração com id %s não encontrada", cliente))))
                .map(configuracao -> uploadFileToS3(headers, body, fileKey, configuracao))
                .doOnSuccess(response -> {
                    checkResult(response);
                });
    }

    private PutObjectResponse uploadFileToS3(final HttpHeaders headers, final Flux<ByteBuffer> body, final String fileKey, final Configuracao configuracao) {

        final long length = headers.getContentLength();
        if (length < 0) {
            throw new UploadFailedException(HttpStatus.BAD_REQUEST.value(), Optional.of("required header missing: Content-Length"));
        }
        final Map<String, String> metadata = new HashMap<>();
        final MediaType mediaType = headers.getContentType() != null ? headers.getContentType() : MediaType.APPLICATION_OCTET_STREAM;

        final S3AsyncClient s3AsyncClient = getS3AsyncClient(configuracao);

        return s3AsyncClient.putObject(
                PutObjectRequest.builder()
                        .bucket(configuracao.getBucket())
                        .contentLength(length)
                        .key(fileKey)
                        .contentType(mediaType)
                        .metadata(metadata)
                        .build(),
                AsyncRequestBody.fromPublisher(body))
                .whenComplete((resp, err) -> s3AsyncClient.close())
                .join();
    }

    public S3AsyncClient getS3AsyncClient(final Configuracao s3Props) {

        final SdkAsyncHttpClient httpClient = NettyNioAsyncHttpClient.builder()
            .readTimeout(Duration.ofMinutes(1))
            .writeTimeout(Duration.ofMinutes(1))
            .connectionTimeout(Duration.ofMinutes(1))
            .maxConcurrency(64)
            .build();

        final S3Configuration serviceConfiguration = S3Configuration.builder().checksumValidationEnabled(false).chunkedEncodingEnabled(true).build();

        return S3AsyncClient.builder()
            .httpClient(httpClient)
            .region(Region.of(s3Props.getRegion()))
            .credentialsProvider(() -> AwsBasicCredentials.create(s3Props.getAccessKey(), s3Props.getSecretKey()))
            .serviceConfiguration(serviceConfiguration)
            .overrideConfiguration(builder -> builder.apiCallTimeout(Duration.ofMinutes(1)).apiCallAttemptTimeout(Duration.ofMinutes(1)))
            .build();
    }
}
```

I based my implementation on the Amazon SDK documentation and the code examples at https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/s3/src/main/java/com/example/s3/S3AsyncOps.java

I can't figure out the cause of the async client timeout. The strange thing is that downloading files from the bucket with the same _S3AsyncClient_ works fine. I tried increasing the timeouts in the _S3AsyncClient_ to about 5 minutes, without success. I don't know what I'm doing wrong.



Answer 1

Score: 1

I found the error.
When defining the contentLength in `PutObjectRequest.builder().contentLength(length)`, I used `headers.getContentLength()`, which is the size of the whole request. Other information travels in my request as well, making the content length greater than the actual file length.

I found this in the Amazon documentation:

> The number of bytes set in the "Content-Length" header is more than the actual file size
>
> When you send an HTTP request to Amazon S3, Amazon S3 expects to receive the amount of data specified in the Content-Length header. If the expected amount of data isn't received by Amazon S3, and the connection is idle for 20 seconds or longer, then the connection is closed. Be sure to verify that the actual file size that you're sending to Amazon S3 aligns with the file size that is specified in the Content-Length header.

https://aws.amazon.com/pt/premiumsupport/knowledge-center/s3-socket-connection-timeout-error/

The timeout occurred because S3 waits until it has received the number of bytes declared as the content length; the file finished transmitting before reaching that count, so the connection sat idle and S3 closed it.

I changed the content length to the actual file size and the upload succeeded.
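To illustrate the fix, here is a minimal sketch (JDK only; the AWS SDK call is elided, and the class and variable names are hypothetical): the length passed to `PutObjectRequest.builder().contentLength(...)` should be computed from the payload buffers themselves, not taken from `headers.getContentLength()`, which covers the whole enclosing request.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class ContentLengthFix {

    // The real payload size: the sum of the bytes remaining in each buffer.
    // This is the value that belongs in PutObjectRequest's contentLength,
    // not the Content-Length header of the enclosing HTTP request.
    static long actualContentLength(final List<ByteBuffer> body) {
        return body.stream().mapToLong(ByteBuffer::remaining).sum();
    }

    public static void main(String[] args) {
        final List<ByteBuffer> body = List.of(
                ByteBuffer.wrap("hello ".getBytes(StandardCharsets.UTF_8)),
                ByteBuffer.wrap("world".getBytes(StandardCharsets.UTF_8)));

        final long fileLength = actualContentLength(body); // 11 bytes: what S3 will actually receive
        final long headerLength = 34_000L;                 // hypothetical whole-request size from headers

        // Declaring headerLength to S3 would leave it waiting for bytes that
        // never arrive; after ~20 s of idle connection, S3 closes the socket.
        System.out.println(fileLength);
    }
}
```

In the reactive service itself the same idea applies: measure or aggregate the file part's bytes before building the `PutObjectRequest`, rather than reusing the request's Content-Length header.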


huangapple
  • Published on 2020-09-02 09:24:26
  • When reposting, please keep the link to this article: https://go.coder-hub.com/63697436.html