Does it make sense to have an HTTP connection pool and a thread executor pool for requests that originate the outbound calls at the same time?


Question

I am working on a RESTful service that uses a fixed executor pool for outbound call requests like:

return CompletableFuture
    .supplyAsync(() -> {
      try {
        return restTemplate.exchange(uri, method, request, responseType, new Object[0]);
      } catch (IOException | URISyntaxException e) {
        e.printStackTrace();
      }
      return null;
    }, pool); //This is a thread pool executor with a fixed size

This restTemplate also has a fixed connection pool specified (with a lot of boilerplate I will not include) through a PoolingHttpClientConnectionManager.
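
For context, the typical shape of such wiring looks roughly like the sketch below. The pool sizes are made-up illustrative values, not the real settings of this service; the classes come from Apache HttpClient 4.x and Spring Web.

import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
import org.springframework.http.client.HttpComponentsClientHttpRequestFactory;
import org.springframework.web.client.RestTemplate;

// Illustrative pool sizes only -- not the service's real configuration.
PoolingHttpClientConnectionManager connectionManager = new PoolingHttpClientConnectionManager();
connectionManager.setMaxTotal(50);           // total outbound connections across all routes
connectionManager.setDefaultMaxPerRoute(10); // connections per host/route

CloseableHttpClient httpClient = HttpClients.custom()
    .setConnectionManager(connectionManager)
    .build();

RestTemplate restTemplate =
    new RestTemplate(new HttpComponentsClientHttpRequestFactory(httpClient));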

I am interpreting this as unnecessarily offloading the outbound calls from the originating threads, which are blocked waiting on those calls anyway and do not perform any other tasks in the meantime. But I was unable to convince the author, who insists on performance gains when auxiliary tasks like logging etc. are involved (which I think should be the tasks that are offloaded here).

What am I missing here?

Answer 1

Score: 1

In isolation it does seem redundant, but stepping back, it can make sense.

The main reason is that the restTemplate.xx() methods are blocking calls: if you use them, you have to wait for the (HTTP) response before moving on to the next instruction.

Fire and forget

Now suppose this HTTP call is a "fire and forget" call: you just send data out and discard (or do not need) its response when forging your own response. Then threading is a great way to send your response earlier.
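
A minimal sketch of that pattern, where auditClient, auditUri and event are made-up names (restTemplate/pool are the ones from the question):

// Fire and forget: submit the outbound call and never join on the future;
// the caller builds and sends its own response without waiting for this one.
CompletableFuture.runAsync(
    () -> auditClient.exchange(auditUri, HttpMethod.POST, new HttpEntity<>(event), Void.class),
    pool);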

Independent work to do

Suppose the call is important and you need its response, but you have some work to do in the meantime. Maybe you have a DB query to run. Maybe a file to read. Maybe another HTTP call to make, or just about anything that does not yet require the response of that HTTP call.
You can start working on that while the pooled thread is waiting for the HTTP response.
That's a boost.
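
For example, a sketch along those lines; dbDao, loadAccounts and readSettings are hypothetical helpers, while restTemplate, uri, request and pool are the names from the question:

// Kick off the outbound call on the executor pool...
CompletableFuture<ResponseEntity<String>> remote = CompletableFuture.supplyAsync(
    () -> restTemplate.exchange(uri, HttpMethod.GET, request, String.class),
    pool);

// ...do independent work on the current thread while the call is in flight...
var accounts = dbDao.loadAccounts();
var settings = readSettings();

// ...and block only at the point where the HTTP response is actually needed.
ResponseEntity<String> response = remote.join();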

Chaining work

Suppose you are calling this one service, and with the result of that call you have to make two other calls, one of which needs a third call, and the results of all of those are needed to perform a fourth call.

CompletableFutures (or other reactive programming tools) are designed for that. Anything that can run concurrently will, and the dependencies are expressed declaratively, hiding the plumbing. It's great, and as performant as can be.
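
A rough sketch of that shape, where A, B, C, D and the callX(...) helpers are placeholders for blocking restTemplate.exchange(...) wrappers:

// First call.
CompletableFuture<A> first  = CompletableFuture.supplyAsync(() -> callA(), pool);

// Two calls that depend on the first result; one of them triggers a third call.
CompletableFuture<B> second = first.thenApplyAsync(a -> callB(a), pool);
CompletableFuture<C> third  = first.thenApplyAsync(a -> callC(a), pool)
                                   .thenApplyAsync(c -> callC2(c), pool); // the dependent third call

// The fourth call needs the results of both branches.
CompletableFuture<D> fourth = second.thenCombineAsync(third, (b, c) -> callD(b, c), pool);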

Configuration conflicts

The rest template is based on some HTTP client that pools connections. Connection pools have all kinds of tuning parameters, e.g. the total number of outbound connections/sockets, sockets per host, per HTTP proxy, or what not.

Now suppose the HTTP call you're about to make is, for some reason, rate limited (external API technical or contractual restrictions), and the HTTP pool is not aware of that or cannot be configured for it. Then using a thread pool as a controller for the number of concurrent requests can be the simplest way.
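
As a sketch, assuming a hypothetical limit of 5 concurrent requests to that external API:

// The executor's size, not the connection pool's, becomes the effective concurrency cap:
// at most 5 restTemplate.exchange(...) calls submitted through this pool can be in
// flight at once, regardless of how many connections the HTTP pool would allow.
ExecutorService pool = Executors.newFixedThreadPool(5);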
